Should Systems Thinkers be on Social Media?

Using social media is a bit like dining out these days. You get some tasty gratification and social interaction, but you might catch something nasty, or worse, pass it along. I use Facebook and Twitter to get the word out on my work, and I learn interesting things, but these media are also the source of 90% of my exposure to fake news, filter bubbles, FOMO, AI bias, polarization, and rank stupidity.

If my goal is to make the world a better place by sharing insights about systems, is social media a net positive? Are there particular ways to engage that could make it a win? Since we can’t turn off the system, how do we coax it into working better for us? This causal loop diagram represents my preliminary thinking about some of the issues.

I think there are three key points.

First, social media is not really different from offline movements, or the internet as a whole; it’s just one manifestation of the same network dynamics. Like the others, it is naturally primed to grow by positive feedback. Networks confer benefits on members that increase with scale, and networks reinvest in things that make the network more attractive. This is benign and universal (at least until the network uses AI to weaponize users’ information against them). These loops are shown in blue.

Second, there are good reasons to participate. By sharing good content, I can assist the diffusion of knowledge about systems, which helps people to manage the world. In addition, I get personal rewards for doing so, which increases my ability to do more of the same in the future. (Green loop.) There are probably also some rewards to the broader systems thinking community from enhanced ability to share information and act coherently.

But the dark side is that the social media ecosystem is an excellent growth medium for bad ideas and bad people who profit from them. Social platforms have no direct interest in controlling this, because they make as much money from an ad placed by Russian bots as they do from a Nike ad. Worse, they may actively oppose measures to control information pollution by capturing regulators and legislators. (Red loops.)
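For concreteness, here’s a cartoon of those competing loops in code – my own sketch, not the CLD itself, with every stock name, gain, and exponent assumed purely for illustration:

```python
# Cartoon of the loop structure above (Python, not the CLD itself).
# All stocks, gains, and exponents are illustrative assumptions.
dt, horizon = 0.25, 40.0   # time step and horizon, arbitrary units

users = 1.0    # blue loops: network scale
good = 1.0     # green loop: useful systems content in circulation
bad = 1.0      # red loops: information pollution

g_network = 0.10   # fractional network growth per period
g_good = 0.08      # reinforcement of good-content sharing
g_bad = 0.12       # reinforcement of pollution (ad profits reinvested, no cleanup)

t = 0.0
while t < horizon:
    # every loop is reinforcing: each rate scales with its own stock,
    # and both content stocks also benefit from network scale
    users += dt * g_network * users
    good += dt * g_good * good * users ** 0.5
    bad += dt * g_bad * bad * users ** 0.5
    t += dt

print(f"good/bad content ratio after {horizon:.0f} periods: {good / bad:.3f}")
# Whichever loop has the higher gain eventually dominates -- and the diagram
# alone doesn't tell us which gain is higher.
```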

So far, I’m finding that the structure of the problem – a nest of good and evil positive feedback loops – makes it very hard to decide which effects will win out. Are we getting leverage from a system that helps share good ideas, or merely feeding a monster that will ultimately devour us? The obvious way to find out is to develop a more formal model, but that’s a rather time-consuming endeavor. So, what do you think? Retire from the fray? Find a better outlet? Put the technology to good use? Where’s the good work in this area?

Eugenics rebooted – what could go wrong?

Does DNA IQ testing create a meritocracy, or merely reinforce existing biases?

Technology Review covers new efforts to use associations between DNA and IQ.

… Intelligence is highly heritable and predicts important educational, occupational and health outcomes better than any other trait. Recent genome-wide association studies have successfully identified inherited genome sequence differences that account for 20% of the 50% heritability of intelligence. These findings open new avenues for research into the causes and consequences of intelligence using genome-wide polygenic scores that aggregate the effects of thousands of genetic variants.

The new genetics of intelligence

Robert Plomin and Sophie von Stumm

I have no doubt that there’s much to be learned here. However, research is not all they’re proposing:

IQ GPSs will be used to predict individuals’ genetic propensity to learn, reason and solve problems, not only in research but also in society, as direct-to-consumer genomic services provide GPS information that goes beyond single-gene and ancestry information. We predict that IQ GPSs will become routinely available from direct-to-consumer companies along with hundreds of other medical and psychological GPSs that can be extracted from genome-wide genotyping on SNP chips. The use of GPSs to predict individuals’ genetic propensities requires clear warnings about the probabilistic nature of these predictions and the limitations of their effect sizes (BOX 7).

Although simple curiosity will drive consumers’ interests, GPSs for intelligence are more than idle fortune telling. Because intelligence is one of the best predictors of educational and occupational outcomes, IQ GPSs will be used for prediction from early in life before intelligence or educational achievement can be assessed. In the school years, IQ GPSs could be used to assess discrepancies between GPSs and educational achievement (that is, GPS-based overachievement and underachievement). The reliability, stability and lack of bias of GPSs make them ideal for prediction, which is essential for the prevention of problems before they occur. A ‘precision education’ based on GPSs could be used to customize education, analogous to ‘precision medicine’

There are two ways “precision education” might be implemented. An egalitarian model would use information from DNA IQ measurements to customize resource allocations, so that all students could perform up to some common standard:

An efficiency model, by contrast, would use IQ measurements to set achievement expectations for each student, and customize resources to ensure that students who are underperforming relative to their DNA get a boost:

This latter approach is essentially a form of tracking, in which DNA is used to get an early read on who’s destined to flip bonds, and who’s destined to flip burgers.

One problem with this scheme is noise (as the authors note, seemingly contradicting their own abstract’s claim of reliability and stability). Consider the effect of a student receiving a spuriously low DNA IQ score. Under the egalitarian scheme, they receive more educational resources (enabling them to overperform), while under the efficiency scheme, resources would be lowered, leading to self-fulfillment of the predicted low performance. The authors seem to regard this as benign and self-correcting:

By contrast, GPSs are ‘less dangerous’ because they are intrinsically probabilistic, not hardwired and deterministic like single-gene disorders. It is important to recall here that although all complex traits are heritable, none is 100% heritable. A similar logic can be applied to IQ scores: although they have great predictive validity for key life outcomes, IQ is not deterministic but probabilistic. In short, an individual is always more than the sum of their genes or their IQ scores.

I think this might be true when you consider the local effects on the negative loops governing resource allocation. But I don’t think that remains true when you put it in context. Education is a nest of positive feedbacks. This creates path dependence that amplifies errors in resource allocation, whether they come from subjective teacher impressions or DNA measurements.
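To make that concrete, here’s a back-of-the-envelope sketch (my own construction, not the authors’ model; the gain, horizon, and initial scoring error are arbitrary). It contrasts the efficiency (tracking) rule with the egalitarian (compensating) rule described above, for two students of identical ability, one of whom draws a spuriously low score:

```python
# Toy sketch of the error-amplification argument (my construction, not
# Plomin & von Stumm's model; gain, horizon, and the initial scoring error
# are arbitrary). Two students have identical true ability; one gets a
# spuriously low DNA-IQ score, opening a small initial achievement gap.

def achievement_gap(polarity, initial_error=-0.1, gain=0.3, years=12):
    """polarity=+1: 'efficiency'/tracking rule -- resources follow measured
    performance, so the loop is reinforcing.
    polarity=-1: 'egalitarian'/compensating rule -- resources target the
    shortfall, so the same loop becomes balancing."""
    gap = initial_error
    for _ in range(years):
        gap += gain * polarity * gap   # resource response acts on the gap itself
    return gap

print("tracking rule    :", round(achievement_gap(+1), 3))  # error amplified ~23x
print("compensating rule:", round(achievement_gap(-1), 3))  # error nearly gone
```

In this cartoon the tracking rule multiplies the initial measurement error roughly twentyfold over a school career, while reversing the polarity turns the same loop into a balancing one that nearly erases it – a point I return to below.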

In a perfect world, DNA-IQ provides an independent measurement that’s free of those positive feedbacks. In that sense, it’s perfectly meritocratic:

But how do you decide what to measure? Are the measurements good, or just another way to institutionalize bias? This is hotly contested. Let’s suppose that problems of gender and race/ethnicity bias have been, or can be, solved. There are still questions about what measurements correlate with better individual or societal outcomes. At some point, implicit or explicit choices have to be made, and these are not value-free. They create reinforcing feedbacks:

I think it’s inevitable that, like any other instrument, DNA IQ scores are going to reflect the interests of dominant groups in society. (At a minimum, I’d be willing to bet that IQ tests don’t measure things that would result in low scores for IQ test designers.) If that means more Einsteins, Bachs and Gandhis, maybe it’s OK. But I don’t think that’s guaranteed to lead to a good outcome. First, there’s no guarantee that a society composed of apparently high-performing individuals is in itself high-performing. Second, the dominant group may be dominant, not by virtue of faster CPUs in their heads, but something less appetizing.

Nor is there any guarantee that DNA IQ won’t reflect attributes that are dysfunctional for society. We would hate to produce more Stalins and Mengeles by virtue of inadvertent correlations with high achievement of less virtuous origin. And certainly, like any instrument used for high-stakes decisions, the pressure to distort and manipulate results will increase with use.

Note that if education is really egalitarian, the link between Measured IQ and Educational Resources Allocated reverses polarity, becoming negative. Then the positive loops become negative loops, and a lot of these problems go away. But that’s not often a choice societies make, presumably because egalitarian education is in itself contrary to the interests of dominant groups.

I understand researchers’ optimism for this technology in the long run. But for now, I remain wary, due to the decided lack of systems thinking about the possible side effects. In similar circumstances, society has made poor choices about teacher value-added modeling, easily negating any benefits it might have had. I’m expecting a similar outcome here.

Limits to Big Data

I’m skeptical of the idea that machine learning and big data will automatically lead to some kind of technological nirvana, a Star Trek future in which machines quickly learn all the physics needed for us to live happily ever after.

First, every other human technology has been a mixed bag, with improvements in welfare coming along with some collateral damage. It just seems naive to think that this one will be different.


(Image caption: These are not the primary problem.)

Second, I think there are some good reasons to think that problems will get harder at the same rate that machines get smarter. The big successes I’ve seen are localized point prediction problems, not integrated systems with a lot of feedback. As soon as cause and effect are separated in time and space by complex mechanisms, you’re into sloppy systems territory, where data may constrain only a few parameters at a time. Making progress in such systems will increasingly require integration of multiple theories and data from multiple sources.
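Here’s a tiny illustration of that sloppiness, using logistic growth with synthetic data (all values assumed): observations from the early, quasi-exponential phase pin down the growth rate fairly tightly, while leaving the eventual capacity almost unconstrained.

```python
# Tiny illustration of 'sloppiness' (synthetic data; all values assumed):
# early logistic-growth observations pin down the growth rate r fairly well,
# but say almost nothing about the eventual capacity K.
import math

def logistic(t, r, K, y0=1.0):
    return K / (1.0 + (K / y0 - 1.0) * math.exp(-r * t))

# 'observations' from the early phase of a system with r=0.5, K=1000
data = [(t, logistic(t, 0.5, 1000.0)) for t in range(8)]

def sse(r, K):
    return sum((logistic(t, r, K) - y) ** 2 for t, y in data)

for K in (300.0, 1000.0, 3000.0):        # tenfold range in capacity
    print(f"K = {K:6.0f}, r = 0.50  ->  error {sse(0.50, K):8.2f}")
for r in (0.45, 0.50, 0.55):             # +/-10% in growth rate
    print(f"K =   1000, r = {r:.2f}  ->  error {sse(r, 1000.0):8.2f}")
# A tenfold change in K barely moves the fit; a 10% change in r ruins it.
# Short-term point prediction is easy; the long-run behavior is unconstrained.
```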

People in domains that have made heavy use of big data increasingly recognize this.

The Nordhaus Nobel

Congratulations to William Nordhaus for winning a Nobel in Economics for work on climate. However … I find that this award leaves me conflicted. I’m happy to see the field proclaim that it’s optimal to do something about climate change. But if this is the best economics has to offer, it’s also an indication of just how far divorced the field is from reality. (Or perhaps not; not all economists agree that we have reached a Neoclassical nirvana.)

Nordhaus was probably the first big name in economics to tackle the problem, and has continued to refine the work over more than two decades. At the same time, Nordhaus’ work has never recommended more than a modest effort to solve the climate problem. In the original DICE model, the optimal policy reduced emissions about 10%, with a tiny carbon tax of $10-15/tonC – a lot less than a buck a gallon on gasoline, for example. (Contrast this perspective with Stopping Climate Change Is Hopeless. Let’s Do It.)
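For scale, here’s the unit math behind that comparison (a quick sketch; the roughly 8.9 kg CO2 per gallon figure for gasoline is the standard round number):

```python
# Unit math behind the 'buck a gallon' comparison. The ~8.9 kg CO2 per
# gallon of gasoline emission factor is a standard round figure.
KG_CO2_PER_GALLON = 8.9
KG_C_PER_GALLON = KG_CO2_PER_GALLON * 12.0 / 44.0   # carbon fraction of CO2

for tax_per_tonC in (10.0, 15.0):
    cents_per_gallon = 100.0 * tax_per_tonC * KG_C_PER_GALLON / 1000.0
    print(f"${tax_per_tonC:.0f}/tonC  ~  {cents_per_gallon:.1f} cents per gallon")
# roughly 2 to 4 cents a gallon, i.e. nowhere near a dollar
```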

Nordhaus’ mild prescription for action emerges naturally from the model’s assumptions. Ask yourself if you agree with the following statements:

If you find yourself agreeing, congratulations – you’d make a successful economist! All of these and more were features of the original DICE and RICE models, and the ones that most influence the low optimal price of carbon survive to this day. That low price waters down real policies, like the US government’s social cost of carbon.

In any case, you’re not off the hook; even with these rosy assumptions Nordhaus finds that we still ought to have a real climate policy. Perhaps that is the greatest irony here – that even the most Neoclassical view of climate that economics has to offer still recommends action. The perspective that climate change doesn’t exist or doesn’t matter requires assumptions even more contorted than those above, in a mythical paradise where fairies and unicorns cavort with the invisible hand.

Dynamics of Dictatorship

I’m preparing for a talk on the dynamics of dictatorship or authoritarianism, which touches on many other topics, like polarization, conflict, terror and insurgency, and filter bubbles. I thought I’d share a few references, in the hope of attracting more. I’m primarily interested in mathematical models, or at least conceptual models that have clearly-articulated structure->behavior relationships.

Ad Experiment

In the near future I’ll be running an experiment with serving advertisements on this site, starting with Google AdSense.

This is motivated by a little bit of greed (to defray the costs of hosting) and a lot of curiosity.

  • What kind of ads will show up here?
  • Will it change my perception of this blog?
  • Will I feel any editorial pressure? (If so, the experiment ends.)

I’m generally wary of running society’s information system on a paid basis. (Recall the first deadly sin of complex system management.) On the other hand, there are certainly valid interests in sharing commercial information.

I plan to write about the outcome down the road, but first I’d like to get some firsthand experience.

What do you think?

Update: The experiment is over.

AI is killing us now

I’ve been watching the debate over AI with some amusement, as if it were some other planet at risk. The Musk-Zuckerberg kerfuffle is the latest installment. Ars Technica thinks they’re both wrong:

At this point, these debates are largely semantic.

I don’t see how anyone could live through the last few years and fail to notice that networking and automation have enabled an explosion of fake news, filter bubbles and other information pathologies. These are absolutely policy relevant, and smarter AI is poised to deliver more of what we need least. The problem is here now, not from some impending future singularity.

Ars gets one point sort of right:

Plus, computer scientists have demonstrated repeatedly that AI is no better than its datasets, and the datasets that humans produce are full of errors and biases. Whatever AI we produce will be as flawed and confused as humans are.

I don’t think the data itself is really the problem; it’s the assumptions the data is treated with, and the context in which that happens, that are problematic. In any case, automating flawed aspects of ourselves is not benign!

Here’s what I think is going on:

AI, and more generally computing and networks, are doing some good things. More data and computing power accelerate the discovery of truth. But truth is still elusive and expensive. On the other hand, AI is making bullsh!t really cheap (pardon the technical jargon). There are many mechanisms by which this occurs.

These amplifiers of disinformation serve increasingly concentrated wealth and power elites that are isolated from their negative consequences, and benefit from fueling the process. We wind up wallowing in a sea of information pollution (the deadliest among the sins of managing complex systems).

As BS becomes more prevalent, various reinforcing mechanisms start kicking in. Accepted falsehoods erode critical thinking abilities, and promote the rejection of ideas like empiricism that were the foundation of the Enlightenment. The proliferation of BS requires more debunking, taking time away from discovery. A general erosion of trust makes it harder to solve problems, opening the door for opportunistic rent-seeking non-solutions.
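Here’s a cartoon of that loop structure – every number invented, purely to illustrate how a falling cost of BS interacts with a fixed attention budget split between discovery and debunking:

```python
# Cartoon of the loop structure above; every number is invented.
# Truth is costly to produce; AI drives the unit cost of BS toward zero;
# debunking competes with discovery for the same fixed attention budget.
truth, bs = 1.0, 1.0
attention = 1.0
cost_truth, cost_bs = 1.0, 1.0

for year in range(20):
    cost_bs *= 0.8                              # BS gets 20% cheaper each year
    debunking = attention * bs / (bs + truth)   # more BS -> more time debunking
    discovery = attention - debunking           # ...leaving less for discovery
    truth += discovery / cost_truth
    bs += 1.0 / cost_bs - debunking             # cheap BS floods in, net of debunking

print(f"after 20 years: truth = {truth:.1f}, BS = {bs:.1f}, "
      f"BS share = {bs / (bs + truth):.0%}")
```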

I think it’s a matter of survival for us to do better at critical thinking, so we can shift the balance between truth and BS. That might be one area where AI could safely assist. We have other assets as well, like the explosion of online learning opportunities. But I think we also need some cultural solutions, like better management of trust and anonymity, brakes on concentration, sanctions for lying, rewards for prediction, and more time for reflection.

Privatizing Public Lands – Claim your 0.3 acres now!

BLM Public Lands Statistics show that the federal government holds about 643 million acres – about 2 acres for each person.

But what would you really get if these lands were transferred to the states and privatized by sale? Asset sales would distribute land roughly according to the existing distribution of wealth. Here’s how that would look:

The Forbes 400 has a net worth of $2.4 trillion, not quite 3% of US household net worth. If you’re one of those lucky few, your cut would be about 44,000 acres, or 69 square miles.

Bill Gates, Jeff Bezos, Warren Buffett, Mark Zuckerberg and Larry Ellison alone could split Yellowstone National Park (over 2 million acres).

The top 1% wealthiest Americans (35% of net worth) would average 70 acres each, and the next 19% (51% of net worth) would get a little over 5 acres.

The other 80% of America would split the remaining 14% of the land. That’s about a third of an acre each, which would be a good-sized suburban lot, if it weren’t in the middle of Nevada or Alaska.
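Here’s the arithmetic behind those figures (a rough check; the wealth shares are the round numbers used above, and ~325 million is an assumed US population):

```python
# Rough check of the numbers above; wealth shares are the round figures from
# the text, and ~325 million is an assumed US population.
TOTAL_ACRES = 643e6
POP = 325e6

def acres_per_person(wealth_share, people):
    return TOTAL_ACRES * wealth_share / people

print("per capita        :", round(TOTAL_ACRES / POP, 2), "acres")
print("Forbes 400 (2.7%) :", round(acres_per_person(0.027, 400)), "acres each")
print("top 1% (35%)      :", round(acres_per_person(0.35, 0.01 * POP)), "acres each")
print("next 19% (51%)    :", round(acres_per_person(0.51, 0.19 * POP), 1), "acres each")
print("bottom 80% (14%)  :", round(acres_per_person(0.14, 0.80 * POP), 2), "acres each")
```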

You can’t even see the average person’s share on a graph, unless you use a logarithmic scale:

[Figure: land per capita by wealth group, log scale]

Otherwise, the result just looks ridiculous, even if you ignore the outliers:

[Figure: land per capita by wealth group, linear scale]