Almost Surely Self-Destructing DAOs

A Futarchy with fully open participation has a severe risk problem because traders don’t share the full downside. They are incentivized to favor risky decisions that might destroy the Futarchy in the future.

I’ve been thinking about DAOs a lot recently, especially about the decision-making aspect. Like others in the field, I think some version of Futarchy has a good chance of being the decision-making protocol of choice for any DAO that wants to do something non-trivial, like developing and maintaining a product.

Since I’m a “student” of Nassim Taleb (meaning I’ve read his books more than once), the Black Swan problem always lurks in the back of my mind. Until recently I gave Futarchy the benefit of the doubt, hoping that maybe it wouldn’t apply. This turned out to be wrong.

It’s a bit unfortunate that the first time I write about Futarchy I immediately put it in a bad light. I still think it’s promising, but this is the first time I’ve encountered something that seems like a deal breaker (and to my knowledge hasn’t been discussed before). Just let me say this in favor of Futarchy though: I don’t think any of the typical objections (market manipulation, sabotage) are truly problematic.

The problem: Say you wanted to run a decentralized fund using Futarchy. In this thought experiment there are only two assets the fund can invest in. Both true probability distributions are known to us (but not to the actors in the experiment).

Asset A yields a 1% profit per year. Asset B yields 10% per year, but has a 0.1% chance of yielding an infinitely negative reward¹, i.e. the fund has to shut down (it “blows up”). The true expected values are:

E[A] = 1
E[B] = 0.999 × 10 + 0.001 × (−∞) = −∞

But after 100 observations (over which B blows up with probability of only 1 − 0.999¹⁰⁰ ≈ 9.5%), the most likely empirical means are:

mean_A = 1
mean_B = 10

If the decision about which asset to buy were made by a Futarchy, it’s pretty clear that traders would favor asset B. I didn’t specify what metric the Futarchy is trying to predict, but it doesn’t matter. It’s all about the time it takes for asset B to reveal its true mean, which is 1000 observations on average. Until then, a 10% return beats a 1% return by pretty much any standard, except maybe “amount of risk avoided”, which isn’t measurable.
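To make this concrete, here’s a minimal simulation of the thought experiment (the blow-up payoff is a hypothetical stand-in for “no reasonable limit”, see the footnote):

```python
import random

# Asset B pays 10 per observation, but blows up with 0.1% probability.
BLOWUP_P = 0.001
BLOWUP_PAYOFF = -1e9  # stand-in for "no reasonable limit"

def draw_B():
    return BLOWUP_PAYOFF if random.random() < BLOWUP_P else 10.0

def empirical_mean_B(n):
    return sum(draw_B() for _ in range(n)) / n

random.seed(0)
print(empirical_mean_B(100))     # 10.0 on roughly 90% of runs
print(empirical_mean_B(10_000))  # hugely negative on almost every run
```

Any market that settles within the first few hundred observations will, with overwhelming probability, only ever see the top line.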

To be fair, this is a hard problem, but Futarchy does particularly badly: At no point are traders incentivized to ask “common sense” questions like “What if E[B] is significantly lower than mean_B?” or “How can this thing have 10 times the payoff without any visible downside?”.

Let’s loosen the restrictions and assume the traders do know what they are doing, i.e. they know the true distributions of A and B. Even with this knowledge there is still no reason to go for asset A. They might lose whatever they staked on every 1000th trade (on average), but they would still receive a reward on the other 999. Unlike the DAO, they wouldn’t blow up personally.
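Here’s a back-of-the-envelope version of that calculus (the payoff numbers are hypothetical; what matters is that the trader’s loss is capped at their stake, while the DAO’s is not):

```python
# Hypothetical payoffs: stake 1 unit per market, earn 10% of the stake when
# B pays off, lose the stake (but no more) when the DAO blows up.
stake = 1.0
p_blowup = 0.001
ev_trader = (1 - p_blowup) * 0.10 * stake + p_blowup * (-stake)
print(ev_trader)  # ~0.0989 > 0: betting on B remains rational for the trader
```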

It’s clearly a form of market failure, as defined by David Friedman: everybody acting rationally, but the system overall acting irrationally. I’d call this the Almost Surely Self-Destructing DAO, because it systematically favors risky decisions, meaning that over the long run (which will be comparatively short) the DAO will destroy itself.

A more realistic example. In case the first one seemed too contrived (its goal was only to convey the intuition), let’s consider a more realistic example.

Say Futarchy were used to manage the development of the Bitcoin protocol, which is already fairly distributed, though not in its decision making, as some have criticised.

At any point, the organization has to decide who to assign to various tasks. There are developers who are faster but sloppier and others who are more careful but slower. Each is better suited to a particular kind of task.

However, a Futarchy would favor fast developers unconditionally, because the negative payoff of being sloppy (i.e. introducing a critical security bug) would only show up every once in a while. What is particularly interesting is that, unlike in the thought experiment, the rare event can cause damage beyond the DAO itself.

The obvious solution is to increase the amount of time until the prediction markets close, but this will discourage people from trading on the market, because their funds will be locked up for a long time. Even if the traders are somewhat idealistic about the DAO, they won’t be able to use these funds to influence future decisions, leading to worse decisions overall.

The wacky solution. The best solution I’ve come up with employs a two-tiered system: a shorter period for resolving the markets, combined with the requirement that trading take place in a currency of the DAO. This currency is smart-contractually bound to a particular holder for an extended period of time once it is used to trade in the Futarchy. This way funds are released quicker and can be used to give input on other decisions, but it still discourages the most obviously dangerous decisions (if the DAO blows up, the currency becomes worthless).
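A minimal sketch of what that binding could look like (the class, method names, and the two-year period are all hypothetical, and a real implementation would live in a smart contract, not in Python):

```python
import time

LOCKUP_SECONDS = 2 * 365 * 24 * 3600  # hypothetical two-year holding period

class DAOToken:
    """Token whose transferability is suspended once it's used to trade."""

    def __init__(self):
        self.balances = {}      # holder -> token balance
        self.locked_until = {}  # holder -> timestamp when transfers unlock

    def use_in_market(self, holder, now=None):
        # The decision market itself can resolve quickly, but trading in it
        # (re-)binds the tokens to the holder for the full lockup period.
        now = time.time() if now is None else now
        self.locked_until[holder] = max(
            self.locked_until.get(holder, 0.0), now + LOCKUP_SECONDS)

    def transfer(self, sender, receiver, amount, now=None):
        now = time.time() if now is None else now
        if now < self.locked_until.get(sender, 0.0):
            raise PermissionError("tokens are still bound to this holder")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] = self.balances.get(sender, 0) - amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
```

Note that `use_in_market` is deliberately not gated on the lock: locked tokens can still give input on other decisions; they just can’t be cashed out.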

Unfortunately, traders are still rewarded for decisions that aren’t likely to cause a blow-up within that period. Also, smart traders could diversify and bet on risky proposals across a portfolio of Futarchies, knowing that only a few will blow up during the holding period. Selling the tokens via futures contracts might be a strategy as well.
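The same back-of-the-envelope math shows why diversification defeats the lockup (all numbers hypothetical):

```python
# Hypothetical: back risky proposals in 50 independent Futarchies. During
# the lockup each DAO blows up with 5% probability (its token goes to
# zero); otherwise the risky bet returns 30%.
p_blowup, gain, loss = 0.05, 0.30, -1.0
ev_per_dao = (1 - p_blowup) * gain + p_blowup * loss
print(ev_per_dao)  # 0.235 > 0, and across 50 DAOs the variance is small
```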

I’m not particularly satisfied with this solution, but I’m not sure one can do much better. In a future post I’ll talk about risk, identity, law, and Hammurabi’s Code, which could provide a better risk management story for Futarchy, albeit a much more complex one. There is also the question of which metric a Futarchy should predict, but I’m becoming more and more convinced that it doesn’t matter too much. This could be the subject of another post as well.

tl;dr: A Futarchy with fully open participation has a severe risk problem because traders don’t share the full downside. They are incentivized to favor risky decisions that might destroy the Futarchy in the future. Increasing the time until markets close might help, but reduces participation overall. A system that forces participants to hold shares of the DAO for an extended period of time could be better, but still has problems.

  1. Infinite here just means “no reasonable limit”, which from a practical standpoint is indistinguishable from “true” infinity. Once I’m dead I won’t care about the mathematical, philosophical or metaphysical difference between a temperature of infinite degrees and a temperature of “merely” 10,000 degrees. ↩︎

