Token Programmes as a Path Towards a True Cybernetic Governance Framework

This post was prepared with reviews, discussion and insights from @polar and @rafa

Introduction

The ZKSync governance design is different from other protocol governance systems in the wider crypto ecosystem. It leverages an innovative approach to funding activity across the ecosystem directly through the token contract, using Token Programme Proposals (TPPs).

In theory, these Token Programmes can be fully deployed directly from the governor contracts, giving minting rights directly from the ZK token contract in the form of capped minters, thereby creating “permissionless pathways”.

This post proposes a way of thinking about these Token Programmes that leads to the minimal, mechanistic and efficient use of the delegate-driven global consensus. Our goal is to create the cryptoeconomic context for an ecosystem to flourish by extending this mechanic towards a true cybernetic governance framework.

The Governance Proposal Structure

There is a developing wider discussion on the nature and purpose of the ZKSync governance systems in The Telos of the DAO discussion. This post, however, is located explicitly around the nature of the proposal structures and how TPPs can take us towards a self-regulating cybernetic system design.

ZIPs: These are proposals that materially shape the protocol. They involve high-stakes decisions and are best conceptualized as the protocol upgrade mechanism. A key affordance of decentralised governance systems is that they allow systems to evolve technologically (rather than remain static, ossified systems), which will be important for the ZKSync ecosystem as it pushes into the frontier of blockchain architecture research and development.

TPPs: These are what we’re primarily focusing on here. They are intended to move beyond the standard grant mechanism structure towards a more mechanistic design space, where tokens flow to mechanisms that distribute them for a clearly designed purpose. Their goal is to lead to emergent ecosystem outcomes driven by pure incentives and flows in the token economy.

GAPs: Governance is the act of governing; it is an implicitly cyclical and self-referential process that requires evolutionary action. GAPs are designed to upgrade the governance processes themselves, sometimes referred to as metagovernance. Unlike the off-chain voting systems used elsewhere, GAPs are fully onchain with their own governor contract, which means this mechanism can be used both for signaling and for triggering onchain actions, and gives this pathway the potential to play into token programme operation. I also see no reason that GAPs couldn’t be used to add entirely new proposal pathways, e.g. futarchy-based decision making, at some point in the future.

Token Programmes

Let us consider an elementally simple Token Programme.

In this example we see the lifecycle of a TPP. A governance proposal is passed to execute the deployment of a smart contract, which is capitalised with 10m tokens via a capped minter. The tokens are emitted linearly block-by-block for a defined period (in this case 3 months) until depleted, leading to onchain outcomes. The programme then self-terminates on hitting its minting cap. A simple example of this could be an LP token staking mechanism for incentivising liquidity in a major trading pair.
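To make this lifecycle concrete, here is a minimal Python sketch of a linear capped-minter emission. Everything here is illustrative: `CappedMinter`, its methods and the numbers are assumptions for exposition, not the actual ZK token contract interface.

```python
class CappedMinter:
    """Illustrative linear, block-by-block token emitter with a hard cap."""

    def __init__(self, cap: int, duration_blocks: int, start_block: int):
        self.cap = cap
        self.per_block = cap // duration_blocks  # linear emission rate
        self.start_block = start_block
        self.minted = 0

    def mintable(self, current_block: int) -> int:
        """Tokens accrued so far but not yet minted."""
        elapsed = max(0, current_block - self.start_block)
        accrued = min(self.cap, elapsed * self.per_block)
        return accrued - self.minted

    def mint(self, current_block: int) -> int:
        """Mint everything currently claimable; returns the amount."""
        amount = self.mintable(current_block)
        self.minted += amount
        return amount

    @property
    def terminated(self) -> bool:
        # The programme self-terminates once the minting cap is hit.
        return self.minted >= self.cap
```

For a 10m-token, 3-month programme, `cap=10_000_000` and `duration_blocks` would be the expected number of blocks in 3 months; once `terminated` is true, no further governance action is needed.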

The decision here would be a simple Yes/No binary decision to execute the programme with the parameters defined in the proposal and transaction payload, which would be deliberated in the lead-up to a delegate pushing the proposal to a vote.

In theory the mechanism here could be deployed with end-to-end trustless execution and no intermediating action whatsoever (a permissionless pathway). This is the goal, but we can go beyond singular mechanisms to a wider programme.

TPP Decision Making

If we follow this idea out to a more extensive programme of action, we may of course wish to continue with the programme if it leads to high-value and productive onchain outcomes.

We therefore propose a simple framework for TPP decision-making, with the intention of building a coherent pipeline of deliberation that can scale effectively without spiraling into bureaucratic decay.

To promote feedback-driven decision making, the programme has an evaluation period at some point T minus PE (programme end) to facilitate a set of mechanistic decisions (for example, a 3-month token programme with a 2-week eval period).

STOP: This is simply a case of the minter cap being hit, causing the programme to self-terminate, with no further action necessary. This could be extended to KILL (perhaps through an escalation game) if a mid-programme mechanism is leading to obvious negative outcomes.

PAUSE: the evaluation period determines that the programme could perform well if a set of conditions is hit. These conditions could be checked by a delegate vote, but could move out to oracle-defined conditions such as the ZK token price, or an FDV threshold or volume stat being hit. In that case the programme can restart using the same parameters as previously defined.

CONTINUE: a simple decision to maintain continuity in the programme by issuing another capped minter with the same parameters as the previous step in the programme.

ITERATE: a parameter change of the mechanism within the programme, for example more tokens, or fewer tokens and a shorter programme epoch.

A simple decision framework like this will allow TPPs to be chained in sequence and structured in a way that doesn’t overload the delegate governance layer with voting or deliberative actions. The baked-in evaluation periods lead us to self-regulating feedback loops, a key feature of cybernetic governance.

The above diagram shows a chained sequence of TPPs structured so that at no point do delegates need to focus on more than one decision or evaluation period at any one time. This allows due consideration and structured decision-making periods, promoting the creation of temporal schelling points and increasing the probability of delegate coordination and engagement.

For example, these timing structures can be set up at proposal time to sequence decision-making periods in the last week of the month, with researchers coordinating in the weeks before to deliver the necessary information to delegates to make effective, data-informed decisions.

Global Parameters

The GAP system has the potential to set the meta-rules for how token programmes play out as a wider framework, setting system boundaries. For example, setting the cadence of token programmes, evaluation schema and cryptoeconomic boundaries.

To illustrate the benefit of this, setting something like a Token Inflation Threshold as a global parameter adds bounds to the wider token programme framework in terms of how many ZK can make it to market in a given time frame. This will provide assurances to token holders that ZK will not be overspent, and will enable sophisticated market actors to build predictive models of the economy. It will also allow delegates to deliberate on whether a particular TPP is worth using X% of the remaining allocation, and will promote healthy competition between token programmes.
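As a sketch of how such a global parameter might gate new proposals, assuming a hypothetical `fits_inflation_budget` check and purely illustrative numbers (neither the threshold nor the supply figure is a proposed value):

```python
def fits_inflation_budget(proposed_cap: int,
                          active_caps: list[int],
                          total_supply: int,
                          annual_threshold: float) -> bool:
    """Would this TPP keep committed emissions within the global bound?

    annual_threshold is a fraction of total supply per year, e.g. 0.01
    for "at most 1% of supply"; the figure is illustrative, not proposed.
    """
    budget = int(total_supply * annual_threshold)
    committed = sum(active_caps)  # caps of programmes already running
    return committed + proposed_cap <= budget
```

Delegates could then read a proposal’s cap directly as a share of the remaining budget, which is what makes the “is this TPP worth X% of the allocation” deliberation tractable.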

This kind of framework will simplify the decisions delegates are required to make, allow a structure to form that is easy to map, and lead towards a set of characteristics that enable far more optimal governance action than is conventional.

Programmes of Pluralistic Mechanisms

At this stage it is wise to deliberate the nature of the mechanisms we would like to see in a wider token programme framework that effectively steers builders towards desired network outcomes in alignment with the ZK Credo.

This could be the context where the “infinite garden” ethos of Ethereum could be brought to life through the cultivation and curation of a vibrant ecosystem of competing mechanisms. Building resilience through diversity and creating a context where mechanisms are iterated rather than tried and abandoned.

A suggested list of features we might be looking for:

  • End-to-End Trustless Execution: No manual intervention is required once the programme begins. It runs entirely through smart contracts, from vote to execution to end of the programme.

  • Single Vote Triggers Multiple Outcomes: A single governance decision is enough to initiate a cascade of on-chain actions. It will be possible for a single vote payload to execute many instances of a Token Programme. For example, not one, but several instances of the simple token programme shown in the first image.

  • Pipes, Not Buckets: The programme’s design ensures a smooth flow of tokens and decisions, rather than static allocations. This moves us away from using global consensus to deliberate complex and often messy grant-like decisions.

  • Decentralised Decision Making: Outcomes are decided in a decentralized manner, relying on collective decision-making rather than central authority.

  • Radical Openness: Transparency is key, ensuring all stakeholders understand how the programme functions and that every material action is auditable. If all relevant data is onchain and open, we maximally eliminate the potential for collusion and shadow structures to emerge.

  • Evolutionary: Programmes are designed to adapt over time, iterating based on performance data and alignment with governance-agreed KPIs. This opens the potential for innovative evolutionary mechanism designs to emerge.

  • Steerable: Minimal interventions are allowed but only when necessary, prioritizing dials (adjustments) over dialogue. Inter-program steering could be possible through sub-structures, leveraging systems like Hats protocol and transparent voting mechanisms.

  • Leverage Account Abstraction: One of ZKSync’s core strengths is native account abstraction. Token programmes could seed innovation in the utilisation of this groundbreaking technological affordance, showcasing forkable mechanisms that can be deployed in the wider ecosystem.

Maturing Execution

It should be said that this is a new paradigm of thinking about how protocol governance systems operate and it will be non-trivial to realize full permissionless pathways immediately.

A maturation flow could be:

  1. Capped minters + multisig with accountability frameworks.
  2. Capped minters + discrete token emission mechanisms.
  3. Capped minters + logic and proof gating / account abstraction integration / multi-programme factory contract execution.
  4. Composable cross-interacting token programme designs and emergent governance driven by ecosystem-wide feedback.

This attitude will move us away from the flawed paradigm of “progressive decentralization” towards progressive automation, allowing emergent complex behavior and ‘living’ ecosystem diversity.

The path towards a true cybernetic governance framework

The dream for DAOs is not to recreate the old world in the new, but to iterate towards fully automated permissionless systems that evolve through self-regulation and data-informed decision making.

Blockchain-based decentralised governance systems are the site where this kind of system can evolve. The token programme framework in particular creates the perfect context for it to emerge.

By pushing humans to the edges, we open up the possibility for end-to-end trustless execution, which will minimise the potential for collusion, decision-making paralysis and doom loops of overspending.

Additionally, these systems will allow us to move from Human in the Loop systems to AI-based agentic decision making and the end game of a decentralised self-regulating cybernetic organism that powers ecosystem growth.

Summary

The ZKSync governance system as currently structured limits the paths towards conventional grant funding, and that is a good thing.

It opens up the possibility of more trust-minimised and eventually fully permissionless pathways leveraging the ZK token. By building out a highly predictable, maximally simple framework of decision-making, these onchain mechanisms can become data-directed and ultimately fully autonomous. This will free up culture formation to take place in the ecosystem itself (supported by TPPs) rather than at the global consensus layer.

Open questions and prompts for discussion:

  • How do we cultivate a highly pluralistic and competitive pipeline of token programmes that not only overcomes adoption barriers but encourages continuous innovation within the ecosystem without excessive reliance on governance?
  • What foundational set of global parameters should we set to balance token stability, ecosystem growth and innovation within a meta-level token programme?
  • What technical standards and interfaces are required to facilitate maturity towards trustless execution?
  • What are the first set of token programmes we should be looking for?
  • What kind of on-chain metrics, KPIs and oracles are required to enhance feedback loops and adaptive programme adjustments?
  • How do we move from Human in the Loop to Agent in the Loop, and perhaps a Chain in the Loop end game?
  • How can this system nurture values-aligned culture formation in the wider ecosystem?

This is a great post @drnick !

The overall approach of TPPs is attractive to me, and the key concern for me is how we can create effective mechanisms as a socio-technical system (multiple stakeholders + mechanisms) that designs, and is impacted by, TPP mechanisms and other parameters of the system (governance design, etc).

With this framing, allow me to push back on a few things and emphasise others, so we continue to refine our understanding of the possibilities and best approach to take here:

  • Being “data-driven” sounds good but in practice often fails to acknowledge the complexity of reality, becoming counterproductive. Innovation leaps happen through contrarian bets on outcomes that can’t be demonstrated in the short term, leadership being the act of inspiring others to march into the unknown (and a fundamental component of organisational systems we shouldn’t dismiss). Data-driven approaches also have a data availability problem, where what’s easy to measure is often preferred over what’s more representative but harder to measure (a rather problematic bias). We have both worked on inter-subjective mechanisms for a while and I think it’s critical to bring that thinking and research around sense-making and decision-making mechanisms beyond majority voting and objective oracles here. Complexity theory provides us ample grounding for this, having proven time and again that if we try to compress too much complexity into a single metric, that complexity shows up somewhere else in the form of unintended side effects and (negative) externalities.
    Let’s avoid creating perverse incentives. Data-informed good, data-driven (most frequently) bad.

  • The incentives of the actors involved need to be navigated carefully. DAOs are multi-stakeholder constructs by nature and zkSync is no different. We have:

    • users of the infra (protocols, organisations, and individual users),
    • those building the infra and ecosystem
    • token holders (with overlap on the other groups + airdrop hunters and the like),
    • delegates

    From this perspective, the question of who designs mechanisms and for whose benefit is critical. Other DAOs have quickly realised that delegate systems lack sufficient incentives for effective participation, and have started to devise a series of complementary incentive programs to increase governance participation. The risk is one of very low throughput, and hence becoming irrelevant as other ecosystems outpace this one. I have often suggested Citizens’ Assemblies (or the web3 version, Multi-Stakeholder Assemblies) as a better form of governance that avoids principal-agent problems and includes focused deliberation and incentives for participants. Irrespective of the system, we currently have a gap in incentives and a higher barrier to the creation of programs that can allocate capital (need to design, audit, get approval, deploy, etc). That makes zkSync more reliant on the foundation or on charitable contributions by delegates (related to the next point).

  • Designing mechanisms is very hard. Few are skilled at thinking through how socio-technical systems can change with a given mechanism. And DAOs have largely failed to date to design highly effective mechanisms (the exploration and refinement continues). Traditional allocations of capital (pots, instead of pipelines) are easier to manage and operate off-chain, allowing for rapid prototyping and managing ambiguity and uncertainty thanks to human actors that can very rapidly adapt. Here we’re largely devoid of said ability for rapid prototyping. Rafa pointed out how AI tools enable virtually anyone to code Solidity, so the barrier is not so much technical as it is about the necessary level of clarity that engineering an automatic mechanism requires. Then all those engineering decisions need to be audited and validated through governance, and that can give space to even more debate and even longer and more painful governance approval cycles (i.e. low throughput).

Low throughput is already a major issue in other ecosystems with lower mechanism-design constraints. Compound that with the limitations around incentives and the requirement to use onchain data (or otherwise a bias for data-driven as opposed to data-informed)… and we have quite a challenge here.

Now, I’m not saying this to be all doom and gloom (otherwise I would just be allocating attention somewhere else instead of posting here). I believe there’s an interesting path forward IF we can proactively address the limitations mentioned above.

In that, what I’m thinking moving forward (and would love to know everyone’s thoughts!):

  1. Double down short term on the idea of capped minters + multisigs with accountability frameworks.
  2. Use the above to
  • enable R&D in this space that’s not reliant on charitable contributions (also so we can move to more advanced TPPs as soon as possible)
  • and resolve incentive alignment (more on that below)
  3. Align with the foundation on strategy to know how much the DAO can focus on evolving TPP mechanisms without short-term ecosystem development needs, or otherwise whether the DAO should also support ecosystem development short term. Depending on the conclusion, also create some short-term capped minters + multisigs with accountability frameworks focused on ecosystem development strategy and execution.

NOTE ON INCENTIVE ALIGNMENT


Excellent post. I’d also recommend reading the latest post on TPPs here: ℹ️ TPP Frequently Asked Questions


Hey Daniel! Thanks for the thoughtful reply, which I want to comment on.

This is so well put! Big plus one here. What comes to mind is that we probably need to develop verifiable computing for complex on- and off-chain queries on datasets that do allow some degree of (probabilistic?) verification of claims.

In my opinion this is mostly due to a lack of experimentation and a stubborn belief that basic democracy scales well, when it has already been proven beyond any shadow of a doubt that it doesn’t and actually can’t.


I really love the focus on automation, as I believe this is what DAOs promised to do, and so far haven’t delivered. That being said, @daniel-ospina makes a good point here: most DAOs found out very quickly that a lot of issues are quite “squishy” and computationally intractable, especially with smart contracts and their associated cost of compute.

What I think needs to come before is a structured, iterative discussion about what can be automated and what can be measured. How does zkSync have an advantage here with its focus on zero-knowledge and verifiable compute?

Some issues will lend themselves very well to TPPs and others not so much, unless we generalise TPPs to the point where they are grant programs.

Take the question of how a good first run of a TPP might look. Who’s going to design this without compensation in some form? It’s probably just the foundation. So there might be a need for a few action-oriented research grants to speed up the time to market and involve a larger plurality of stakeholders in the effort.


Thanks @drnick for coming up with this. We really like the concept of TPPs and think they have great potential to automate and streamline governance in a way we haven’t seen before. That said, one concern we have is how we can prevent people from gaming the system. With any automated, token-based program, there’s a risk that individuals or groups might find ways to exploit loopholes or manipulate outcomes for their benefit. It would be great to explore mechanisms that add safeguards or checks to ensure that TPPs remain fair and effective.


Thanks for putting all of this together @drnick, very thoughtful and detailed!

How many TPPs do we expect to have running throughout the year? I’m trying to figure out how much work there will be for delegates with 5-10 TPPs, each with two 2-week evaluation periods. Delegate participation starts off enthusiastically in new DAOs but drastically slows down after some time. It is a real problem for all DAOs, and if we rely solely on delegates for most of the operational decisions, that could put us in a tough spot.

The STOP action can indicate a contract to self-terminate (autonomous), and the PAUSE action can happen autonomously too, but ITERATE and CONTINUE will involve delegates when something needs to be iterated or continued, correct?

Yes. Fully support this. Great idea.

I love this. This is how I understood TPPs. Wasn’t this how they are already intended (designed) to work? :sweat_smile:

This sounds fantastic in theory and I cannot wait to have the first couple of TPPs active to see how it will all look like in practice.


Thoughts, questions, concerns

  • Are we putting the complexity of the innovative token distribution mechanism over accessibility? ZKsync is getting into the arena (a year late) with OP, ARB, etc., and reducing the friction to building onchain would be desirable.
  • TPPs are designed to only retroactively fund projects for their achievements/milestones measured by predefined onchain metrics. Will we have grants specifically oriented toward certain things like apps, governance tooling, dev tooling on ZKsync, defi innovations, infra, etc.?
  • Who will be in charge of designing these detailed contracts for TPPs? Devs submitting them, or will they have a designated place to go if they are not developers?
  • Will anyone other than devs be comfortable, willing, and capable of building them (artists, creators, etc.)?
  • Is it worth considering having a “working group” (for the lack of a better word) that would conduct, oversee, and help with TPPs, from brainstorming to launch?
  • How do we protect against gaming of the contract’s logic and exploitation of its benefits by manipulating the visible onchain metrics used to measure success, and what fail-safe mechanisms protect us against those?
  • I love TPPs in the context of competitions!!! This in my opinion is one perfect use case for them that could spark a lot of interest, onchain activity, and build communities around competitions we build TPPs for. (competition example: social apps with most users with some onchain identity verification to sybil protect, and minimum user activity requirements to be eligible for reward etc. all to protect against people gaming the system. Very random and not clearly defined idea but you get the point.)
  • How do we fund initiatives that don’t have an onchain component to track, measure, and automate through contract logic? (creating educational content for example, or R&D, or building a custom ZKsync library for some programming language, etc.)
  • Maybe we should start a discussion about TPPs that focuses on actionable steps on how we put them to work.

I know we’re trying to be automated and involve people as little as possible, but is it worth considering creating a working group (for the lack of a better word) that would, at least for the first year or two, make sure TPPs are fully understood by everyone and help with design, coordination, gathering info, data, and learnings, improving them, etc.?

I see TPPs as an experimental DAO product designed to do a thing. And like with any other product, to ensure its success, I think it’d be best if we have a dedicated team around it. For now, we rely on the Ecosystem and Delegates to figure them out and put them to work. There are no guarantees they will work effectively without proper management.

Since TPPs are ‘in charge’ of providing the financing for the entire ecosystem, I wonder who’s in charge of conducting and creating that ecosystem around ZKsync. That said, I think it’s equally important to have Ecosystem Leads just like we have Governance Leads. Without someone proactively championing TPPs, helping people create them, etc., I see a lot of unanswered questions, like how do we create the developer, artist, creator, builder culture in ZKsync? This is usually done through sponsoring program grants for what we want people to build (applications, defi, social apps, dev tooling, governance tooling, etc.). It’s also done through IRL hackathons where people get to meet, connect, and build something of value. And many other ways that require more hands-on involvement which, correct me if I’m wrong, we don’t have at the moment. I doubt culture will emerge through an automated process with no human touch, because it mostly relies on emotions. ENS has been doing this quite well with their 3 working groups, and now I see Arbitrum is going towards something similar but even bigger in scope (link to the post).

–

In principle, I agree with everything you said. It’s like reading a semi-scientific explanation of an innovative, futuristic, and creative token distribution model with a cypherpunk overtone, and I love it. :slight_smile:

If we look at TPPs for what they are, they are 1) Product and 2) Experimental token distribution mechanism. And if there’s something we all undeniably agree on, it’s that TPP design is absolutely fantastic and I personally see it as the ‘end game’ of how DAOs should work. But I don’t think 1) we can start with it and neglect proven methods that simply worked for other DAOs, and 2) let it launch itself on the market without a dedicated team for it and the entire ecosystem.

–

@daniel-ospina I share your thoughts + concerns, and enthusiasm + optimism about TPPs. +1 for your comment.


The gaming part is of course important, because we have to expect participants to game the system. So for me the best way is to analyse similar projects (the link mentioned by rafa apparently has a lot of examples in the wild already: ℹ️ TPP Frequently Asked Questions), identify best practices and start experimenting with smaller amounts.

I am not sure I fully understand how this will work yet, but my gut feeling is that we should set up TPPs that foster competition between protocols and only hand out ZK if certain metrics are hit over a longer period of time.

Example:
Instead of
“protocol X will receive Y when KPI 1 is hit”
we should go for
“if one of the protocols A, B, or N hits KPI 1 (e.g. amount of ETH in the smart contract), x ZK are paid out per block for as long as the KPI is maintained”

That way people can’t just benefit from one-time effects by moving their ETH; we are not incentivizing short-term deposits but long-term commitment, and competition creates incentives to be fast. Will there be loopholes in this system? Likely! But with time we will find setups that are at least less gameable.
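The “paid out per block for as long as the KPI is maintained” idea can be sketched as follows, assuming per-block KPI readings from an oracle (the function name and all numbers are hypothetical):

```python
def payout_stream(kpi_by_block: list[float],
                  kpi_target: float,
                  zk_per_block: int) -> int:
    """Total ZK paid out: emission accrues only in blocks where the KPI holds.

    A one-off spike earns almost nothing, while sustained performance
    earns proportionally, which is the anti-gaming property described above.
    """
    return sum(zk_per_block
               for kpi in kpi_by_block
               if kpi >= kpi_target)
```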


I like the idea of having sequential hardcoded checks in TPP contracts (monthly, quarterly) that would trigger a token distribution event which, instead of rewarding one or a few participants, rewards all of them who met certain criteria and automatically calculates the amounts based on results/impact. This spreads the distribution across many different participants, doesn’t discourage anyone from participating, and yet rewards most those who have the highest impact.
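A hedged sketch of that pro-rata distribution at each hardcoded check (the function name, threshold and impact scores are illustrative assumptions):

```python
def distribute(impacts: dict[str, float],
               threshold: float,
               pool: int) -> dict[str, int]:
    """Split `pool` tokens pro rata among participants above `threshold`."""
    eligible = {p: v for p, v in impacts.items() if v >= threshold}
    total = sum(eligible.values())
    if total == 0:
        return {}  # nobody qualified this period; nothing is distributed
    return {p: int(pool * v / total) for p, v in eligible.items()}
```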


Thanks for the reply @daniel-ospina this is really what I’m looking for. I think it’s important we have a strong critical discussion about the nature of these systems, this is what’s going to drive the space forward.

I’m a big fan of sociotechnical systems theory; it has informed a lot of my thinking. As I was driving at in my other post, the social element is vital: we need to seed culture on this network. I think what came of that discussion is that this governance system isn’t intended to produce a DAO culture, but a chain culture. Thus our work here isn’t to create a governance system that becomes the site where the culture emerges; rather, it provides the cryptoeconomic context for culture to emerge in the ecosystem, ideally through the creation of consumer-facing applications supported by the TPPs.

I agree to an extent. I’ve seen systems fall prey to excessive metrification and have directly called this space the ultimate manifestation of Goodhart’s Law. Consequently, I was careful to swap the term “data driven” for “data informed”, with data (in the broader sense) arriving from systematised evaluation periods, with the intention of driving a reflective moment at the end of each of these token programmes. If we bake in eval periods and set research questions at proposal creation, then we can make the best possible decisions on whether the mechanisms continue, iterate or naturally come to an end. We have data availability problems because we don’t systematically think, at the time a system is created, about the data we would need to make good decisions about whether it is functioning well. This is the case everywhere from governments to crypto.

The “data directed” moment comes much later, when we end up fully automated. Think something like the way EIP-1559 functions: a fully decentralised, algorithmic “chain in the loop” mechanism that sets base fees dynamically based on data derived from block occupancy. I would expect a long journey to finding data sources we would want to ossify in that manner, but that would be a desirable end goal in my mind. Still, short token programmes would be the way to experiment with these things, and would be far less high-stakes than protocol-level mechanism design.

Yep, I hugely believe in the value of intersubjective consensus, but too much of it at the global consensus level can lead to spirals of inaction. Having said that, there is no way a system like this works well without being informed by the needs of the wider ecosystem. That will need to be fundamentally sensemaking-oriented data arriving from the needs of the communities, and I would expect it to inform the proposals for TPPs. For example, it’s clear that for a system like this to work, smart contract auditing will be a systemic need in pushing through a pipeline of TPP mechanisms, so we should have a TPP focussed on that. Going forward, we would absolutely need intersubjective data sources to fully inform the creation of proposals that meet the needs of the wider ecosystem. I could, for example, imagine a TPP that emitted tokens to users for providing such data.

I think this is an important point. I’m also not a fan of delegate systems for complex decision making. There are a couple of ways past that: incentivise the expansion of the breadth and expertise of the delegate set (as others are doing), or use it for what it would be better at, making less frequent and simpler decisions, which was a big part of the motivation for the design above.

I still think those that vote should probably be incentivised to do so (governance work is work, etc.), but breaking into a radically new delegate set from here is going to be a real challenge. And as @lex_node said in the other thread, it’s probably better if this ends up as apps rather than governance people at the end of the day, since they will ultimately be the primary agents interacting with token programmes.

The pacing question is also important, but since we have a blank slate here, we have the opportunity not to fall into the doom loop recently pointed out by Disruption Joe as an all too common pattern in DAOs. It would be nice if we could avoid this phase altogether.

This I am very aware of, having spent years doing it. And nothing exasperates me more than the lack of high-quality mechanisms making it to market. If anything, what excites me most about the TPP framework is that it might in fact incentivise many pluralistic mechanisms to finally arrive.

Now, how that happens is the interesting bit. I can imagine a flow which starts with hacks and ends in audited deployments, through successive open curation. We wouldn’t want that done at the global consensus level, but most likely in DAOs across the network.

If this ends up with a group centrally managing money outside of a mechanism, I think it defeats the object. The point is that actions on the multisigs control a mechanism, effectively operating the dials on a smart contract. In my mind that could be a small grant-giving mechanism, but the mechanism would ensure that all decisions have full provenance. If a capped minter operates simply as structured permissions, and the decision making for how funds are distributed becomes black-box in nature, then I think it’s somewhat of a fail. There might be a period where this is absolutely necessary to bootstrap the system, but it should be done reluctantly. So, I would push back on “double down”, and say multisigs should be used as sparingly as possible, if at all.
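To make the provenance point concrete, here is a hypothetical sketch (the interface and names are mine, not the actual ZK token contracts) of a capped minter where every distribution must carry a machine-readable reason, so the decision trail can never go black-box:

```python
# Hypothetical capped-minter sketch: a hard mint cap plus a
# provenance log for every distribution. On-chain this log would
# be emitted as events rather than stored in a list.

from dataclasses import dataclass, field

@dataclass
class CappedMinter:
    cap: int                                  # maximum tokens this programme may ever mint
    minted: int = 0
    log: list = field(default_factory=list)   # provenance records

    def mint(self, to: str, amount: int, reason: str) -> None:
        if self.minted + amount > self.cap:
            raise ValueError("mint would exceed programme cap")
        self.minted += amount
        # every mint records who received what, and why
        self.log.append({"to": to, "amount": amount, "reason": reason})

minter = CappedMinter(cap=1_000_000)
minter.mint("0xabc", 50_000, "TPP audit bounty, round 1")
```

The contrast with a bare multisig is that here the "reason" is a first-class part of the action, not an off-chain forum post that may or may not exist.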

I think the way in which DAOs expect charitable work and even work up front in advance of payment in a locked volatile cryptocurrency is borderline unethical in places, so I get this. Ideally we would discover mechanisms that changed that dynamic.

Thanks! And yes, I think the opportunity here is to deliver it. I do think the very reason for the lack of delivery is that it hasn’t been a priority. A hard selective pressure on mechanistic approaches as a governance dynamic here could give this entire ecosystem an edge, purely because it is more committed to doing it properly from the outset.

I would say that if we foresee “squishy” outcomes to a token programme, we should try to avoid letting it out of the proposal stage.

My first reaction to this is: don’t do them then. It’s not a job for a governance system such as this; maybe for other DAOs in the ecosystem. But it would be interesting to explore what you think these are.

I do think there could be mechanistic systems that replace the classic grant programme structure, something closer to quadratic funding than replicating the centralised flow of a team making decisions behind closed doors.

For example, I think we’re going to need to coordinate data scientists and researchers for the evaluation periods. One solution would be to say, ‘let’s give a data science outfit the money to just do it’; the other would be to say, ‘let’s set it up as a competitive bounty mechanism’ that emits tokens to researchers who provide research inputs that delegates value in their decision making. One locates the job with a single entity; the other decentralises it across an open-ended set of ecosystem actors. The decentralised mechanism attracts experts to the chain; the closed approach precludes that from happening.

I do think this is a critical question. The system above assumes a flow of mechanisms making it into decision-making consideration. We need to get there somehow. Perhaps we need to identify a bootstrapping phase that is slightly less mechanistic in framing, but I think it would be important to put boundaries on it so it doesn’t become an entrenched practice.

Thanks for the comment! Oh, they absolutely will. It is a personal mantra of mine that “every game will be gamed”; the point of a good mechanism is to create games where actors can extract value, but whose self-interested, non-collaborative actions lead to positive ecosystem-level outcomes. Bitcoin is the ultimate example of this.

Every game plays out. They either find meta-stable equilibria or burn out and need to ITERATE or STOP. The point here is that we don’t get too stuck overthinking all of the edge cases of how mechanisms fail, but ship them on time-bounded lifecycles and generate globally coordinated learning events from them that feed the next generation.

A standard flow would be:

Mechanism launches > works great! > starts getting gamed > the game is up > STOP

Airdrops are a great example of this. Some mechanisms will be so rekt that we need an entirely new mechanism; some will just need some dial changes, or the checks and safeguards you’re suggesting (we’d want people in the proposal deliberation phase skilled at war-gaming these things).

In a world of many competing mechanisms, we will have some on the come-up and some on the wind-down, and even the gaming brings value to the network in the form of key metrics.
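The time-bounded lifecycle above (launch, get gamed, then ITERATE or STOP) can be sketched as a tiny state machine. This is purely illustrative, with names of my own invention, but it shows the key property: every transition leaves behind a learning event for the next generation of mechanisms.

```python
# Illustrative sketch of the mechanism lifecycle described above:
# LAUNCHED -> GAMED -> ITERATED (dial changes) or STOPPED (the game
# is up), with every observation logged as a learning event.

from enum import Enum

class State(Enum):
    LAUNCHED = "launched"
    GAMED = "gamed"
    ITERATED = "iterated"
    STOPPED = "stopped"

class Mechanism:
    def __init__(self, name: str):
        self.name = name
        self.state = State.LAUNCHED
        self.learning_log = []  # globally coordinated learning events

    def observe_gaming(self, note: str) -> None:
        self.state = State.GAMED
        self.learning_log.append(note)

    def decide(self, fixable: bool) -> None:
        # dial changes feed the next iteration; otherwise wind down
        self.state = State.ITERATED if fixable else State.STOPPED

m = Mechanism("airdrop-v1")
m.observe_gaming("sybil farming detected")
m.decide(fixable=False)  # the game is up: STOP, and keep the lessons
```

The deliberate asymmetry is that the log survives the STOP; burning out is still a coordinated learning event, not a failure to be hidden.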
