I am here to ask the community: is there a problem statement that data can help address? I am happy to engage with the community either here or elsewhere to discuss this. I tried asking the question on Discord, but I left because of too much noise.
A few examples of what I mean:
A dashboard to track onchain activities of the DAO, like token holders voting, delegating tokens, and which specific proposals they vote on. This is a good way to identify active tokenholders, and to activate/re-activate dormant tokenholders (see the sketch after this list).
A dashboard to track zkSync’s airdrop stats based on addresses receiving, selling, holding, and accumulating the token post-airdrop, compared against the token’s price, etc. This could help understand what kinds of addresses are holding and which ones are selling.
Or something else the community is looking for.
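To make the first example concrete, here’s a minimal sketch of what tracking delegation activity could look like, assuming the governance token follows OpenZeppelin’s ERC20Votes interface (which emits DelegateChanged events). The token address is a placeholder, and a production version would also need pagination and decoding of vote/proposal events:

```python
# Minimal sketch: count recent delegation events for a governance token.
# Assumes an OpenZeppelin ERC20Votes-style token (emits DelegateChanged);
# TOKEN_ADDRESS is a placeholder, not the real contract.
from collections import Counter

from web3 import Web3

RPC_URL = "https://mainnet.era.zksync.io"  # public zkSync Era RPC
TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder

w3 = Web3(Web3.HTTPProvider(RPC_URL))

# topic0 = keccak hash of the event signature
topic0 = w3.keccak(text="DelegateChanged(address,address,address)").hex()

latest = w3.eth.block_number
logs = w3.eth.get_logs({
    "fromBlock": latest - 1_000,  # last ~1000 blocks; paginate for real use
    "toBlock": latest,
    "address": Web3.to_checksum_address(TOKEN_ADDRESS),
    "topics": [topic0],
})

# topics[3] is the indexed `toDelegate` param; the address is its low 20 bytes
new_delegates = Counter("0x" + log["topics"][3].hex()[-40:] for log in logs)
for delegate, count in new_delegates.most_common(10):
    print(delegate, count)
```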
About me:
I am a research analyst at PYOR, an onchain data company. We work with the likes of Arbitrum and Cosmos for their data needs. We’ve built data solutions to track governance, capital efficiency, and DEX metrics.
Happy to chat with the zkSync community to see what we can build (if this post gets approved).
One area that I think is lacking is data & visualization tooling at the Elastic Network level.
Currently there are ~10 ZK chains live on mainnet, with many more preparing for deployment. To find info about the Elastic Network, pretty much the best option is to go to L2Beat and filter by ZK Stack, but there is so much more that could be done to understand the state of the Elastic Network.
Even something relatively simple/high level along the lines of the Superchain Index would be an interesting & useful thing for the Elastic Network, and I can imagine many more tools to build on a foundation like that.
Network Health: Active zkSync chains, adoption velocity, security contributions, etc.
Comparative Analysis: Performance benchmarking across zkSync chains similar to L2Beat/Superchain Index.
We’re already doing something similar for Avalanche (Avax Pulse), which tracks their L1 subnets. Happy to bring that experience here.
Would love to discuss this further.
Can we hop on a short call with key community members to understand what metrics you’d like to see in such a dashboard? This can help us come up with the right data endpoints, and eventually a formal proposal on the forum.
Personally, I don’t have much time for a call for at least the next week, but I will share some thoughts here to keep the conversation moving. I expect others will have good ideas here too!
Firstly, a lot of what you suggested is definitely sensible.
At the chain level, I’d add a few things, such as:
DA configuration (there are rollups settling to Ethereum, but also validiums using EigenDA, Avail, Celestia, etc., and soon we’ll see private validiums too).
Base token
VM configuration (e.g. Era is running the EraVM, but soon the ZK Stack will support the EVM and WASM).
As progress on decentralization is released to mainnet, I think it’d also be interesting to share info on things like validator set size (for decentralized sequencers) and proof system configuration (e.g. some chains will continue with centralized proving while others will integrate with prover networks).
Probably a lot of the metrics used at the ZK chain level are also interesting at the Elastic Network level (e.g. TVL, tx volume, number of unique users, etc.).
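As a strawman for what a per-chain registry entry might look like, pulling together the fields above plus the network-health metrics suggested earlier, here’s a minimal sketch; every field name and enum value is an illustrative assumption, not an actual schema:

```python
# Minimal sketch of a per-chain entry for an Elastic Network index.
# All field names and enum values are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class DALayer(Enum):
    ETHEREUM = "ethereum"   # rollup settling data to Ethereum
    EIGENDA = "eigenda"
    AVAIL = "avail"
    CELESTIA = "celestia"
    PRIVATE = "private"     # private validium

class VM(Enum):
    ERA_VM = "eravm"
    EVM = "evm"
    WASM = "wasm"

@dataclass
class ChainEntry:
    name: str
    chain_id: int
    da_layer: DALayer
    base_token: str                      # e.g. "ETH" or a custom gas token
    vm: VM
    validator_set_size: Optional[int]    # None while sequencing is centralized
    prover: str                          # "centralized" or a prover network
    # headline metrics, also aggregatable to the Elastic Network level
    tvl_usd: float
    daily_txs: int
    daily_active_addresses: int
```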
tl;dr -
You should invest in open data infra, instead of working with a patchwork of closed-source vendors.
Why?
Dune is awesome, but it doesn’t have all the chains you need. There are several great indexing providers (e.g., Goldsky) that can quickly onboard new chains. You should make a version of raw chain data available (with some latency) to the community as a public utility.
Then have teams like PYOR compete to build open-source models and dashboards on top of those datasets. You can audit their work, and competition between teams brings you the best insights. No vendor lock-in.
What’s more, this makes it easy to connect other public datasets to the onchain data you care about.
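To make the “public utility” idea concrete: if raw chain data were published as, say, Parquet files, anyone could compute network-level metrics locally. A minimal sketch with DuckDB follows; the dataset URL and column names are hypothetical:

```python
# Minimal sketch: daily per-chain activity from a (hypothetical) public
# Parquet dataset of raw transactions. URL and column names are assumptions.
import duckdb

DATASET = "https://data.example.org/elastic-network/transactions/*.parquet"

daily = duckdb.sql(f"""
    SELECT
        chain_name,
        date_trunc('day', block_time) AS day,
        count(*)                      AS txs,
        count(DISTINCT from_address)  AS active_addresses
    FROM read_parquet('{DATASET}')
    GROUP BY 1, 2
    ORDER BY 1, 2
""").df()

print(daily.tail())
```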
Naturally, we’d love it if you did some of this with OSO, but the long-term benefit of open infra is that you maintain the option to bundle or unbundle as much of your data work as you need.