Goal: Identify what forms of impact badgeholders care about and select appropriate metrics
Notes from Carl:
I would love to jam on how we can make this happen and who the appropriate resource might be. Ultimately, we want some way of learning what forms of impact people care about and then using that to identify appropriate metrics. There are various ways to frame this exercise. Some ideas:
- Voting on different impact vectors and then proposing (in the abstract) relevant metrics and data sources for verifying impact. Examples:
  - Bring more new users onchain → # of users/addresses who had their first txn through your project → # of users who had “one of their first” txns through your project → # of users who had “one of their first” txns through your project and now have a FID
  - Grow DAUs / MAUs
  - Reduce churn
  - Encourage people to use multiple apps in the ecosystem
  - Encourage people to use multiple chains on the Superchain
  - Increase the share of a user’s transactions on OP vs. mainnet
- Creating 2-3 fictional projects and getting people to offer specific metrics they’d like to see about those projects:
  - a DeFi project that is active on mainnet and most L2s
  - an NFT platform that is only on Zora
  - a consumer app on Farcaster and Base
- Starting with a big list of metrics, trying to categorize them, then creating space for new ones, and finally prioritizing some of the best ones. Examples:
  - Sequencer fees
  - Daily active users
  - Users with FIDs
  - Days between first commit and first deployment
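As a sketch of how the "bring more new users onchain" metric family above might be computed, assuming a flat list of attributed transactions (the `Txn` shape, field names, and project attribution are all hypothetical, not an agreed-upon schema):

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Txn:
    sender: str          # user address
    project: str         # project the txn is attributed to (hypothetical)
    timestamp: datetime


def new_users_onchain(txns: list[Txn], project: str, first_n: int = 1) -> int:
    """Count users for whom `project` appears among their first `first_n` txns."""
    # Group each user's transactions together.
    by_user: dict[str, list[Txn]] = {}
    for t in txns:
        by_user.setdefault(t.sender, []).append(t)

    count = 0
    for user_txns in by_user.values():
        # Order each user's history chronologically, then check the head.
        user_txns.sort(key=lambda t: t.timestamp)
        if any(t.project == project for t in user_txns[:first_n]):
            count += 1
    return count
```

Setting `first_n=1` gives the strict "first txn through your project" version; a larger `first_n` gives the softer "one of their first" variant. The FID-holding refinement would be a further filter on the counted users.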
Filter Criteria
To make metrics comparable, there need to be some initial filtering criteria. These can be used both for determining eligible projects and for creating time buckets for comparing projects’ impact.
For example, a project must have deployed something on the Superchain before April 1 to be eligible. Then, we care about all sequencer fees generated between Nov 1 (~R3) and Apr 30, i.e., over a 6-month period.
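The eligibility cutoff and measurement window above could be sketched as follows (the concrete dates and the fee-event shape are illustrative, not final criteria):

```python
from datetime import date

# Illustrative dates matching the example above.
ELIGIBILITY_CUTOFF = date(2024, 4, 1)   # must have deployed before April 1
WINDOW_START = date(2023, 11, 1)        # Nov 1 (~Round 3)
WINDOW_END = date(2024, 4, 30)          # Apr 30, closing the 6-month window


def is_eligible(first_deployment: date) -> bool:
    """A project qualifies if it deployed on the Superchain before the cutoff."""
    return first_deployment < ELIGIBILITY_CUTOFF


def window_fees(fee_events: list[tuple[date, float]]) -> float:
    """Sum sequencer fees generated inside the Nov 1 - Apr 30 window."""
    return sum(fee for day, fee in fee_events if WINDOW_START <= day <= WINDOW_END)
```

The same window bounds would then be reused as the time bucket for every project, so fee totals are comparable across projects.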
What is it?
See here for RF4 Gov Design Experiments and here for Impact Metrics GMT