
Design workshop to capture initial metrics feedback from OP community #1194

Closed
ccerv1 opened this issue Apr 7, 2024 · 1 comment

ccerv1 commented Apr 7, 2024

What is it?

See here for RF4 Gov Design Experiments and here for Impact Metrics GMT

Goal: Identify what impact badgeholders care about and identify appropriate metrics

Notes from Carl:
I would love to jam on how we can make this happen and who the appropriate resource might be. Ultimately, we want some way of learning what forms of impact people care about and then using that to identify appropriate metrics. There are various ways to frame this exercise. Some ideas:

  • Voting on different impact vectors and then proposing (in the abstract) relevant metrics and data sources for verifying impact. Examples:
    • Bring more new users onchain → # of users/addresses who had their first txn through your project → # of users who had “one of their first” txns through your project → # of users who had “one of their first” txns through your project and now have a FID
    • Grow DAUs / MAUs
    • Reduce churn
    • Encourage people to use multiple apps in the ecosystem
    • Encourage people to use multiple chains on the superchain
    • Increase the share of users’ transactions on OP vs. mainnet
  • Creating 2-3 fictional projects and getting people to offer specific metrics they’d like to see about those projects
    • a DeFi project that is active on mainnet and most L2s
    • an NFT platform that is only on Zora
    • a consumer app on Farcaster and Base
  • Starting with a big list of metrics, trying to categorize them, then creating space for new ones, and finally prioritizing some of the best ones
    • Sequencer fees
    • Daily active users
    • Users with FIDs
    • Days between first commit and first deployment
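To make the last metric in the list concrete, here is a minimal sketch of computing "days between first commit and first deployment" from per-project event records. The event shape and type names (`COMMIT`, `CONTRACT_DEPLOYMENT`) are illustrative assumptions, not any real OSO schema:

```python
from datetime import date

# Hypothetical event records; field names and event types are illustrative.
events = [
    {"project": "proj-a", "type": "COMMIT", "date": date(2023, 10, 1)},
    {"project": "proj-a", "type": "CONTRACT_DEPLOYMENT", "date": date(2023, 11, 15)},
]

def days_commit_to_deployment(events, project):
    """Days between a project's first commit and its first deployment.

    Returns None if the project has no commits or no deployments yet.
    """
    commits = [e["date"] for e in events
               if e["project"] == project and e["type"] == "COMMIT"]
    deploys = [e["date"] for e in events
               if e["project"] == project and e["type"] == "CONTRACT_DEPLOYMENT"]
    if not commits or not deploys:
        return None
    return (min(deploys) - min(commits)).days

print(days_commit_to_deployment(events, "proj-a"))  # 45
```

The other listed metrics (sequencer fees, DAUs, users with FIDs) would follow the same pattern: filter events by project and type, then aggregate.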

Filter Criteria

In order to have comparable metrics, there need to be some initial filtering criteria. These can be used both for determining eligible projects and for creating time buckets for comparing projects’ impact.

For example, a project must have deployed something on the Superchain before April 1 to be eligible. Then, we care about all sequencer fees generated between Nov 1 (~R3) and Apr 30, i.e., over a ~6-month period.
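The eligibility cutoff and measurement window above can be sketched as two simple predicates. The dates come from the example in this comment; the project field name (`first_superchain_deployment`) is a hypothetical placeholder:

```python
from datetime import date

# Cutoffs taken from the example above; the exact dates are illustrative.
ELIGIBILITY_CUTOFF = date(2024, 4, 1)   # must have deployed before this date
WINDOW_START = date(2023, 11, 1)        # Nov 1 (~Round 3)
WINDOW_END = date(2024, 4, 30)          # Apr 30, end of the ~6-month window

def is_eligible(project):
    """Eligible if the project deployed on the Superchain before the cutoff."""
    first_deploy = project.get("first_superchain_deployment")
    return first_deploy is not None and first_deploy < ELIGIBILITY_CUTOFF

def in_measurement_window(event_date):
    """Only events inside the window count toward, e.g., sequencer-fee totals."""
    return WINDOW_START <= event_date <= WINDOW_END
```

Applying the same window to every eligible project is what keeps the resulting metrics comparable across projects.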

@github-project-automation github-project-automation bot moved this to Backlog in OSO Apr 7, 2024
@ccerv1 ccerv1 self-assigned this Apr 7, 2024
@ccerv1 ccerv1 added this to the Eco: Optimism milestone Apr 7, 2024
@ccerv1 ccerv1 changed the title Workshop to capture initial metrics feedback from OP community Design workshop to capture initial metrics feedback from OP community Apr 7, 2024
@ccerv1 ccerv1 moved this from Backlog to Up Next in OSO Apr 15, 2024
@ccerv1 ccerv1 moved this from Up Next to In Progress in OSO Apr 16, 2024
@ccerv1
Copy link
Member Author

ccerv1 commented Apr 17, 2024

Meeting notes from call with Simona here

Next actions:

@ccerv1 ccerv1 closed this as completed Apr 17, 2024
@github-project-automation github-project-automation bot moved this from In Progress to Done in OSO Apr 17, 2024