[Community Activation RFP 2 – Extension] ZK Stack Content Program
| | |
|---|---|
| Initiative Title | ZK Stack Content Program – RFP 2 Extension |
| RFP Category | Product Marketing and Community Activation |
| Duration | 13 weeks (March 16 – June 12, 2026) |
| Budget | 1,000,000 ZK |
| Recipients | Selected from RFP 2 cohort (no open submission required) |
Background
This post formalises the extension of Community Activation RFP 2 (ZK Stack Content Experiment), following the Foundation’s decision to build on the outcomes of that program.
RFP 2 ran as an 8-week experiment to test scalable, community-driven content models for ZKsync. It successfully identified a cohort of high-quality creators and validated a set of narratives with demonstrated reach and engagement. Rather than reopening a competitive process, the Foundation is extending the program, run by the same administrator, with refined deliverables and elevated targets aligned with the increased budget and longer timeline.
This post sets out the scope, deliverables, and success metrics that will be used to evaluate the extension at its conclusion.
Program Goals
The extension has three primary goals:
- Sustain narrative momentum built during RFP 2, with particular focus on:
  - ZKsync Privacy & Prividium (incorruptible, institutional-grade finance)
  - ZKsync as Bank Stack / institutional infrastructure
  - ZKsync beyond L2 (ZK tech superiority, ecosystem breadth, speed)
  - MiCA-readiness of the ZK token and Prividium for European and global institutions
- Scale reach and consistency: moving from an experiment to a reliable, always-on content presence that can respond rapidly to ecosystem developments and sustain ZKsync’s share of voice in the market.
- Demonstrate ROI at scale: with a 3x+ increase in budget over RFP 2, the extension must produce proportionally stronger outcomes, and provide a clear template for any future programs.
Participating Creators
Allocation of the 1,000,000 ZK budget across participants will be confirmed separately. The administrator is responsible for selecting participating creators, managing disbursements and ensuring accountability against the deliverables below.
Scope & Deliverables
Each participating creator is expected to deliver:
- Minimum 20–30 high-quality content pieces per month, comprising a mix of:
  - Original posts and threads
  - Short-form video or visual content
  - Memes, infographics, and QRTs
  - Rapid-response content tied to news and ecosystem events
- Consistent daily/weekly presence on X (Twitter) as the primary platform, with cross-posting to YouTube, Telegram, or other platforms encouraged but not required.
- Narrative alignment: All content should clearly reinforce one or more of the target narratives identified above. Generic or off-narrative content does not count toward minimums.
- Mid-program report: Due April 29, 2026, covering content output, engagement data, and any narrative or strategic adjustments.
- Final report: Due June 18, 2026, with full performance data, qualitative assessment of what worked, and recommendations for any future program.
Success Metrics
The extension will be evaluated against the following KPIs, measured across the full cohort on a monthly basis:
| Metric | Monthly Target |
|---|---|
| Total views | ≥ 400,000 |
| Total likes | ≥ 5,000 |
| Total comments | ≥ 800 |
| Total RTs / QRTs | ≥ 500 |
| Content output | 100% of creators meeting 20-piece minimum |
| Follower growth | ≥ 10% increase per participating account over program duration |
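For administrators tracking these numbers month to month, a minimal sketch of how the cohort-level targets above might be checked programmatically. The metric names, data shape, and sample figures are illustrative assumptions, not part of the program:

```python
# Illustrative KPI check for one month of cohort-level engagement data.
# Targets mirror the table above; the input dict shape is hypothetical.

MONTHLY_TARGETS = {
    "views": 400_000,
    "likes": 5_000,
    "comments": 800,
    "reposts": 500,  # RTs / QRTs combined
}

def kpis_met(monthly_totals: dict[str, int]) -> dict[str, bool]:
    """Return, per metric, whether the cohort met the monthly target."""
    return {
        metric: monthly_totals.get(metric, 0) >= target
        for metric, target in MONTHLY_TARGETS.items()
    }

# Made-up example month: comments fall short of the 800 target.
april = {"views": 512_000, "likes": 6_100, "comments": 760, "reposts": 540}
print(kpis_met(april))
```

A per-account follower-growth check (≥ 10% over the program) would follow the same pattern, comparing start-of-program and end-of-program follower counts.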
Qualitative indicators will also be assessed:
- Positive sentiment and educational value in comment sections
- Clear reinforcement of target narratives (tracked via mentions of Privacy/Prividium, Bank Stack, MiCA, “beyond L2”)
- Cross-platform amplification
- Community feedback and organic resharing
Timeline
| Milestone | Date |
|---|---|
| Program start | March 16, 2026 |
| Mid-program report | April 29, 2026 |
| Program end | June 12, 2026 |
| Final report due | June 18, 2026 |
| Foundation evaluation | July 2, 2026 |
Evaluation
At the conclusion of the program, the Foundation will evaluate performance against the deliverables and metrics set out above. The final report submitted by the administrator and creators will form the primary basis for that evaluation, alongside any publicly verifiable engagement data.
This evaluation will also inform the design of any future community activation programs.
Questions welcome in the thread below.