I’m preparing for a talk on the dynamics of dictatorship or authoritarianism, which touches on many other topics, like polarization, conflict, terror and insurgency, and filter bubbles. I thought I’d share a few references, in the hope of attracting more. I’m primarily interested in mathematical models, or at least conceptual models that have clearly-articulated structure->behavior relationships.
In the near future I’ll be running an experiment with serving advertisements on this site, starting with Google AdSense.
This is motivated by a little bit of greed (to defray the costs of hosting) and a lot of curiosity.
- What kind of ads will show up here?
- Will it change my perception of this blog?
- Will I feel any editorial pressure? (If so, the experiment ends.)
I’m generally wary of running society’s information system on a paid basis. (Recall the first deadly sin of complex system management.) On the other hand, there are certainly valid interests in sharing commercial information.
I plan to write about the outcome down the road, but first I’d like to get some firsthand experience.
What do you think?
Update: The experiment is over.
A nice TED talk explaining how algorithms can reinforce unfairness, inequity and errors of judgment:
Note the discussion of teacher value added modeling. This corresponds with what I found in my own assessment here.
I’ve been watching the debate over AI with some amusement, as if it were some other planet at risk. The Musk-Zuckerberg kerfuffle is the latest installment. Ars Technica thinks they’re both wrong:
At this point, these debates are largely semantic.
I don’t see how anyone could live through the last few years and fail to notice that networking and automation have enabled an explosion of fake news, filter bubbles and other information pathologies. These are absolutely policy relevant, and smarter AI is poised to deliver more of what we need least. The problem is here now, not from some impending future singularity.
Ars gets one point sort of right:
Plus, computer scientists have demonstrated repeatedly that AI is no better than its datasets, and the datasets that humans produce are full of errors and biases. Whatever AI we produce will be as flawed and confused as humans are.
I don’t think the data is really the problem; it’s the assumptions with which the data is treated, and the context in which that occurs, that are really problematic. In any case, automating flawed aspects of ourselves is not benign!
Here’s what I think is going on:
AI and, more generally, computing and networks are doing some good things. More data and computing power accelerate the discovery of truth. But truth is still elusive and expensive. On the other hand, AI is making bullsh!t really cheap (pardon the technical jargon). There are many mechanisms by which this occurs:
- CGI and digital editing make it possible to fake anything (“augmented unreality” above).
- Bots that can pass the crude Turing test of social media streams produce disinformation far faster than humans can.
- Sockpuppet automation platforms assist bad actors at disinforming.
- Anonymity limits the effectiveness of reputation.
- Social networks make it easy for people to coalesce into tribes, rejecting information that might disconfirm their biases.
These amplifiers of disinformation serve increasingly concentrated wealth and power elites that are isolated from their negative consequences, and benefit from fueling the process. We wind up wallowing in a sea of information pollution (the deadliest among the sins of managing complex systems).
As BS becomes more prevalent, various reinforcing mechanisms start kicking in. Accepted falsehoods erode critical thinking abilities, and promote the rejection of ideas like empiricism that were the foundation of the Enlightenment. The proliferation of BS requires more debunking, taking time away from discovery. A general erosion of trust makes it harder to solve problems, opening the door for opportunistic rent-seeking non-solutions.
I think it’s a matter of survival for us to do better at critical thinking, so we can shift the balance between truth and BS. That might be one area where AI could safely assist. We have other assets as well, like the explosion of online learning opportunities. But I think we also need some cultural solutions, like better management of trust and anonymity, brakes on concentration, sanctions for lying, rewards for prediction, and more time for reflection.
BLM Public Lands Statistics show that the federal government holds about 643 million acres – about 2 acres for each person.
But what would you really get if these lands were transferred to the states and privatized by sale? Asset sales would distribute land roughly according to the existing distribution of wealth. Here’s how that would look:
Bill Gates, Jeff Bezos, Warren Buffett, Mark Zuckerberg and Larry Ellison alone could split Yellowstone National Park (over 2 million acres).
The other 80% of America would split the remaining 14% of the land. That’s about a third of an acre each, which would be a good-sized suburban lot, if it weren’t in the middle of Nevada or Alaska.
You can’t even see the average person’s share on a graph, unless you use a logarithmic scale:
Otherwise, the result just looks ridiculous, even if you ignore the outliers:
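The arithmetic above is easy to check. Here’s a back-of-envelope sketch in Python, using round illustrative numbers (643 million acres, a population of about 320 million, and a bottom-80% wealth share of roughly 14% – my assumptions for the sketch, not official figures):

```python
# Back-of-envelope sketch of privatizing federal land by wealth share.
# Illustrative assumptions: 643 million acres, 320 million people,
# bottom 80% of people holding ~14% of total wealth.

TOTAL_ACRES = 643e6
POPULATION = 320e6
BOTTOM_SHARE_OF_WEALTH = 0.14    # wealth share of the bottom 80%
BOTTOM_FRACTION_OF_PEOPLE = 0.80

# Equal division: about 2 acres each
acres_per_capita = TOTAL_ACRES / POPULATION

# Wealth-proportional division: the bottom 80% split their 14% slice
bottom_acres_each = (TOTAL_ACRES * BOTTOM_SHARE_OF_WEALTH) / (
    POPULATION * BOTTOM_FRACTION_OF_PEOPLE
)

print(f"Equal split: {acres_per_capita:.1f} acres each")
print(f"Bottom 80%, wealth-proportional sale: {bottom_acres_each:.2f} acres each")
```

Tweak the wealth share and the answer barely moves – the bottom 80%’s slice stays tiny for any plausible distribution.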
Much has been made of the fact that Trump’s revised tax plan cuts its implications for deficits in half (from ten to five trillion). Oddly, there’s less attention to the equity implications, which border on the obscene. Trump’s plan gives the top bracket a tax cut ten times bigger (as percentage of income) than that given to the bottom three fifths of the income distribution.
That makes the difference in absolute $ tax cuts between the richest and poorest pretty spectacular – a factor of 5000 to 10,000:
Trump tax cut distribution, by income quantile.
To see one pixel of the bottom quintile’s tax cut on this chart, it would have to be over 5000 pixels tall!
For comparison, here are the Trump & Clinton proposals. The Clinton plan proposes negligible increases on lower earners (e.g., $4 on the bottom fifth) and a moderate increase (5%) on top earners:
Trump & Clinton tax cut distributions, by income quantile.
Back in 2002, when the invasion of Iraq was on the table and many Democrats were rushing patriotically to the President’s side rather than thinking for themselves, William Nordhaus (staunchest critic of Limits) went out on a limb a bit to attempt a realistic estimate of the potential cost.
All the dangers that lead to ignoring or underestimating the costs of war can be reduced by a thoughtful public discussion. Yet neither the Bush administration nor the Congress – neither the proponents nor the critics of war – has presented a serious estimate of the costs of a war in Iraq. Neither citizens nor policymakers are able to make informed judgments about the realistic costs and benefits of a potential conflict when no estimate is given.
His worst case: about $755 billion direct (military, peacekeeping and reconstruction) plus indirect effects totaling almost $2 trillion for a decade of conflict and its aftermath.
Nordhaus’ worst case is pretty close to actual direct spending in Iraq to date. But with another trillion for Afghanistan and 2 to 4 trillion more in the pipeline from future obligations related to the war, the grand total makes it look like a lowball estimate. Other pre-invasion estimates, in the low billions, look downright ludicrous.
Recent news makes Nordhaus’ parting thought even more prescient:
Particularly worrisome are the casual promises of postwar democratization, reconstruction, and nation-building in Iraq. The cost of war may turn out to be low, but the cost of a successful peace looks very steep. If American taxpayers decline to pay the bills for ensuring the long-term health of Iraq, America would leave behind mountains of rubble and mobs of angry people. As the world learned from the Carthaginian peace that settled World War I, the cost of a botched peace may be even higher than the price of a bloody war.
Tom Perkins thinks votes should be proportional to taxes paid. (As if they weren’t already, to some degree!)
You don’t have to look very far in history to find a system in which political power and ownership of assets were embodied in the same few people. We called its advocates “monarchists,” and there were remedies for that.
The founding fathers were rightfully aware of the need to prevent runaway positive feedback of wealth and power. Perkins evidently fears runaway negative feedback:
“The fear is wealth tax, higher taxes, higher death taxes — just more taxes until there is no more 1%. And that that will creep down to the 5% and then the 10%,” he said.
This ignores conservation laws. If punitive taxation could really bring the wealth of the 1% down, where would all that money, and its underlying assets, actually go? And how can this be a real concern, when in fact incomes at the top are dramatically increasing by any measure?
So, Perkins is,
- not a student of history
- not a fan of democracy
- not a keen observer of current trends
- bad at economics
— or —
- willing to fib about it all for personal gain
and we should give him a million votes?
Update: I’ve played this game before.
I’m reflecting on Deborah Rogers’ presentation on equity/equality at the Balaton Group meeting, concerning the apparent evolutionary drivers of the transition from a long human prehistory of egalitarian societies to today’s extreme inequity. A key point of terminology is that equity and equality are not quite the same thing – equality implies similar wealth or resource access, while equity implies something more like Rawlsian justice. But you can’t have one without the other, because inequality leads the haves to tilt the tables of justice against the have-nots.
This might not be a deliberate choice to exploit the masses. It could occur as an evolutionary consequence of the inability to predict the outcome of dynamically complex decisions.
I once described a complex theory of the emergence of inequality to Donella Meadows. I no longer remember the details, but perhaps it was the ancestor of this. Her answer was characteristically simple and insightful, to the effect of, “it doesn’t matter what the specific dynamics are, because the rich control the decisions, so the question boils down to how much inequ(al)ity the elite will tolerate.”
Evidence indicates that high inequality is bad for growth, so a possible irony is that policies that transfer wealth to the wealthy in the short run are bad for them in the long run, because growth eventually dominates allocation, even for the richest.
So, for me, the key question for society is, how much positive feedback should a civilization build into its social organization?
A bit of positive feedback can be helpful, if it creates a gradient that guides individuals who aren’t making the best decisions to imitate the habits of their more successful peers.
However, this probably requires a relatively low level of inequality. As soon as there’s stronger positive feedback, it’s likely that dysfunctional feedbacks take hold, as the wealthiest institutions use their market power to block innovation and good governance in service of maintaining their exalted positions.
I think the evidence that this occurs today is probably fairly simple. Look at the distribution of IQs or any other metric that might be an input to productivity in the economy. It’ll be relatively Normal (Gaussian). But the distributions of wealth and power are heavy tailed (Zipf or Double Laplace). That’s a pretty clear indication that there’s a lot of reinforcing feedback at work.
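A minimal simulation (my sketch, with made-up parameters, not a calibrated model) shows how this works: start everyone with Gaussian “ability,” let wealth compound multiplicatively – the essence of reinforcing feedback – and a heavy right tail emerges even though every input is Normal:

```python
# Sketch: Normal inputs + multiplicative (reinforcing) growth -> heavy tails.
# Parameters are illustrative only.
import random
import statistics

random.seed(1)
N, YEARS = 10_000, 50

# "Ability" stays Normal, like an IQ distribution
ability = [random.gauss(100, 15) for _ in range(N)]

# Wealth compounds: each year's return is a Normal draw, applied multiplicatively
wealth = [1.0] * N
for _ in range(YEARS):
    wealth = [max(w * (1.0 + random.gauss(0.05, 0.15)), 0.01) for w in wealth]

def tail_ratio(xs):
    """99th percentile relative to the median - a crude tail-heaviness gauge."""
    xs = sorted(xs)
    return xs[int(0.99 * len(xs))] / statistics.median(xs)

print("ability 99th/median:", round(tail_ratio(ability), 2))  # stays near 1
print("wealth  99th/median:", round(tail_ratio(wealth), 2))   # much larger
```

The Gaussian input’s 99th percentile is only a third or so above its median, while the compounded wealth distribution’s 99th percentile ends up an order of magnitude above its median – the signature of reinforcing feedback.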
Discounting has long been controversial in climate integrated assessment models (IAMs), with prevailing assumptions less than favorable to future generations.
The evidence in favor of aggressive discounting has generally been macro in nature – observed returns appear to be consistent with discounting of welfare, so that’s what we should do. To swallow this, you have to believe that markets faithfully reveal preferences and that only on-market returns count. Even then, there’s still the problem of confounding of time preference with inequality aversion. Given that this perspective is contradicted by micro behavior, i.e. actually asking people what they want, it’s hard to see a reason other than convenience for its upper hand in decision making. Ultimately, the situation is neatly self-fulfilling. We observe inflated returns consistent with myopia, so we set myopic hurdles for social decisions, yielding inflated short-term returns.
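The stakes are easy to see with generic exponential discounting (nothing specific to any particular IAM): the present value of a fixed future damage is exquisitely sensitive to the rate over century time scales.

```python
# Present value of $100 of climate damage incurred 100 years from now,
# under standard exponential discounting at a few illustrative rates.

def present_value(amount, rate, years):
    return amount / (1 + rate) ** years

for rate in (0.01, 0.03, 0.05):
    pv = present_value(100, rate, 100)
    print(f"rate {rate:.0%}: $100 of damage in 100 years is worth ${pv:.2f} today")
```

At 1%, a century-out dollar of damage still counts for about a third of its face value; at 5%, it counts for less than a penny on the dollar – which is why the discount rate, more than the damage function, often decides what an IAM recommends.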
It gets worse.
Back in 1997, I attended a talk on an early version of the RICE model, a regional version of DICE. In an optimization model with uniform utility functions, there’s an immediate drive to level incomes across all the regions. That’s obviously contrary to the observed global income distribution. A “solution” is to use Negishi weights, which apply weights to each region’s welfare in proportion to the inverse of the marginal utility of consumption there. That prevents income leveling, by explicitly assuming that the rich are rich because they deserve it.
This is a reasonable practical choice if you don’t think you can do anything about income distribution, and you’re not worried that it confounds equity with human capital differences. But when you use the same weights to identify an optimal emissions trajectory, you’re baking the inequity of the current market order into climate policy. In other words, people in developed countries are worth 10x more than people in developing countries.
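A toy illustration of the mechanism (my sketch with log utility, not RICE itself): with u(c) = ln(c), marginal utility is 1/c, so each region’s Negishi weight is proportional to its consumption. The weighted marginal utilities then come out identical across regions, so the optimizer has no incentive to level incomes – and a region with 10x the consumption gets 10x the weight.

```python
# Toy Negishi weighting with log utility: u(c) = ln(c), so u'(c) = 1/c.
# Weighting each region by the inverse of its marginal utility (i.e., by c)
# equalizes weighted marginal welfare, freezing the income distribution.
# Consumption figures are illustrative.

regions = {"rich": 40_000.0, "poor": 4_000.0}   # per-capita consumption, $/yr

total = sum(regions.values())
weights = {r: c / total for r, c in regions.items()}   # Negishi weight ~ c

for r, c in regions.items():
    # weighted marginal utility = (c/total) * (1/c) = 1/total for every region
    print(r, "weight:", round(weights[r], 3),
          "weighted marginal utility:", weights[r] * (1 / c))
```

Both regions’ weighted marginal utilities are exactly 1/total, so transferring a dollar from rich to poor no longer raises the objective – and the rich region’s welfare counts 10x as much per util, which is the inequity baked into the policy ranking.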
Way back when, I didn’t have the words at hand to gracefully ask why it was a good idea to model things this way, but I sure wish I’d had the courage to forge ahead anyway.
The silly thing is that there’s no need to make such inequitable assumptions to model this problem. Elizabeth Stanton analyzes Negishi weighting and suggests alternatives. Richard Tol explored alternative frameworks some time before. And there are still more options, I think.
In the intertemporal optimization framework, one could treat the situation as a game between self-interested regions (with Negishi weights) and an equitable regulator (with equal weights to welfare). In that setting, mitigation by the rich might look like a form of foreign aid that couldn’t be squandered by the elites of poor regions, and thus I would expect deep emissions cuts.
Better still, dump notions of equilibrium and explore the problem with behavioral models, reserving optimization for policy analysis with fair objectives.
Thanks to Ramon Bueno for passing along the Stanton article.