Detecting the inconsistency of BS

DARPA put out a request for a BS detector for science. I responded with a strategy for combining the results of multiple models (using Mohammad Jalali’s multivariate meta-analysis with some supporting infrastructure like data archiving) to establish whether new findings are consistent with an existing body of knowledge.
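To make the strategy concrete, here's a minimal sketch of the kind of consistency test involved, in a univariate toy form (the actual proposal was multivariate, and every number below is invented): pool the existing studies, then ask whether a new finding is statistically compatible with the pooled estimate.

```python
import numpy as np

# Invented effect sizes and standard errors from an existing body of studies
effects = np.array([0.42, 0.35, 0.50, 0.38])
ses     = np.array([0.10, 0.08, 0.12, 0.09])

# Fixed-effect (inverse-variance) pooling
w = 1.0 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

# A hypothetical new finding to screen for consistency with the literature
new_effect, new_se = 0.95, 0.15
z = (new_effect - pooled) / np.hypot(new_se, pooled_se)

print(f"pooled = {pooled:.2f} ± {pooled_se:.2f}; z for the new finding = {z:.1f}")
# |z| well above 2 flags the new result as inconsistent with existing knowledge
```

A real application would be multivariate (handling several related outcomes and model structures at once) and would lean on archived models and data, which is where the supporting infrastructure comes in.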

DARPA didn’t bite. I have no idea why, but could speculate from the RFC that they had in mind something more like a big data approach that would use text analysis to evaluate claims. Hopefully not, because a text-only approach will have limited power. Here’s why.

Continue reading “Detecting the inconsistency of BS”

AI babble passes the Turing test

Here’s a nice example of how AI is killing us now. I won’t dignify this with a link, but I found it posted by a LinkedIn user.

I’d call this an example of artificial stupidity, not AI. The article starts off sounding plausible, but quickly degenerates into complete nonsense that’s either automatically generated or translated, with catastrophic results. But it was good enough to make it past someone’s cognitive filters.

For years, corporations have targeted on World Health Organization to indicate ads to and once to indicate the ads. AI permits marketers to, instead, specialize in what messages to indicate the audience, therefore, brands will produce powerful ads specific to the target market. With programmatic accounting for 67% of all international show ads in 2017, AI is required quite ever to make sure the inflated volume of ads doesn’t have an effect on the standard of ads.

One style of AI that’s showing important promise during this space is tongue process (NLP). informatics could be a psychological feature machine learning technology which will realize trends in behavior and traffic an equivalent method an individual’s brain will. mistreatment informatics during this method can match ads with people supported context, compared to only keywords within the past, thus considerably increasing click rates and conversions.

 

AI is killing us now

I’ve been watching the debate over AI with some amusement, as if it were some other planet at risk. The Musk-Zuckerberg kerfuffle is the latest installment. Ars Technica thinks they’re both wrong:

At this point, these debates are largely semantic.

I don’t see how anyone could live through the last few years and fail to notice that networking and automation have enabled an explosion of fake news, filter bubbles and other information pathologies. These are absolutely policy relevant, and smarter AI is poised to deliver more of what we need least. The problem is here now, not from some impending future singularity.

Ars gets one point sort of right:

Plus, computer scientists have demonstrated repeatedly that AI is no better than its datasets, and the datasets that humans produce are full of errors and biases. Whatever AI we produce will be as flawed and confused as humans are.

I don’t think the data is really the problem; it’s the assumptions the data is treated with, and the context in which that occurs, that are problematic. In any case, automating flawed aspects of ourselves is not benign!

Here’s what I think is going on:

AI, and more generally computing and networks, are doing some good things. More data and computing power accelerate the discovery of truth. But truth is still elusive and expensive. On the other hand, AI is making bullsh!t really cheap (pardon the technical jargon). There are many mechanisms by which this occurs.

These amplifiers of disinformation serve increasingly concentrated wealth and power elites that are isolated from their negative consequences, and benefit from fueling the process. We wind up wallowing in a sea of information pollution (the deadliest among the sins of managing complex systems).

As BS becomes more prevalent, various reinforcing mechanisms start kicking in. Accepted falsehoods erode critical thinking abilities, and promote the rejection of ideas like empiricism that were the foundation of the Enlightenment. The proliferation of BS requires more debunking, taking time away from discovery. A general erosion of trust makes it harder to solve problems, opening the door for opportunistic rent-seeking non-solutions.

I think it’s a matter of survival for us to do better at critical thinking, so we can shift the balance between truth and BS. That might be one area where AI could safely assist. We have other assets as well, like the explosion of online learning opportunities. But I think we also need some cultural solutions, like better management of trust and anonymity, brakes on concentration, sanctions for lying, rewards for prediction, and more time for reflection.

Data Science should be about more than data

There are lots of “top 10 skills” lists for data science and analytics. The ones I’ve seen are all missing something huge.

Here’s an example:

Business Broadway – Top 10 Skills in Data Science

Modeling barely appears here. Almost all the items concern the collection and analysis of data (no surprise there). Just imagine for a moment what it would be like if science consisted purely of observation, with no theorizing.

What are you doing with all those data points and the algorithms that sift through them? At some point, you have to understand whether the relationships that emerge from your data make any sense and answer relevant questions. For that, you need ways of thinking and talking about the structure of the phenomena you’re looking at and the problems you’re trying to solve.

I’d argue that one’s literacy in data science is greatly enhanced by knowledge of mathematical modeling and simulation. That could be system dynamics, control theory, physics, economics, discrete event simulation, agent based modeling, or something similar. The exact discipline probably doesn’t matter, so long as you learn to formalize operational thinking about a problem, and pick up some good habits (like balancing units) along the way.
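For a flavor of what that operational thinking looks like, here's a minimal stock-flow sketch in Python. The scenario and numbers are invented; the point is that writing down stocks, flows, and units forces you to say how the pieces of the system interact, rather than just what correlates with what.

```python
# Minimal stock-flow model: customers gained by word of mouth, lost to churn.
dt = 0.25                    # time step [years]
horizon = 10.0               # simulation horizon [years]

customers = 1_000.0          # stock [people]
market = 100_000.0           # potential customers [people]
contacts_per_customer = 2.0  # contact rate [1/year]
adoption_fraction = 0.5      # adoptions per contact [dimensionless]
churn_time = 5.0             # average customer lifetime [years]

t = 0.0
while t < horizon:
    # Flows [people/year]; every term has to balance dimensionally
    adoption = contacts_per_customer * adoption_fraction * customers * (1 - customers / market)
    churn = customers / churn_time
    # Euler integration: stock [people] += net flow [people/year] * dt [years]
    customers += (adoption - churn) * dt
    t += dt

print(f"customers after {horizon:.0f} years: {customers:,.0f}")
```

None of this is fancy, but it answers questions data alone can't: what happens out of sample, where the leverage points are, and whether the numbers even make dimensional sense.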

Hair of the dog that bit you climate policy

Roy Spencer on reducing emissions by increasing emissions:

COL: Let’s say tomorrow, evidence is found that proves to everyone that global warming as a result of human released emissions of CO2 and methane, is real. What would you suggest we do?

SPENCER: I would say we need to grow the economy as fast as possible, in order to afford the extra R&D necessary to develop new energy technologies. Current solar and wind technologies are too expensive, unreliable, and can only replace a small fraction of our energy needs. Since the economy runs on inexpensive energy, in order to grow the economy we will need to use fossil fuels to create that extra wealth. In other words, we will need to burn even more fossil fuels in order to find replacements for fossil fuels.

via Planet 3.0

On the face of it, this is absurd. Reverse a positive feedback loop by making it stronger? But it could work, given the right structure – a relative quit smoking by going into a closet to smoke until he couldn’t stand it anymore. Here’s what I can make of the mental model:

Spencer’s arguing that we need to run reinforcing loops R1 and R2 as hard as possible, because loop R3 is too weak to sustain the economy while renewables (or, more generally, non-emitting sources) remain too expensive. R1 and R2 provide the wealth to drive R&D, in a virtuous cycle R4 that activates R3 and shuts down the fossil sector via B2. There are a number of problems with this thinking.

  • Rapid growth around R1 rapidly grows environmental damage (B1) – not only climate, but also local air quality, etc. It also contributes to depletion (not shown), and with depletion comes increasing cost (weakening R1) and greater marginal damage from extraction technologies (not shown). It makes no sense to manage the economy as if R1 exists and B1 does not. R3 looks much more favorable today in light of this.
  • Spencer’s view discounts delays. But there are long delays in R&D and investment turnover, which will permit more environmental damage to accumulate while we wait for R&D.
  • In addition to the delay, R4 is weak. For example, if economic growth is 3%/year, and all technical progress in renewables comes from R&D following a 70% learning curve (costs fall to 70% of their previous level with each doubling of cumulative R&D), it’ll take 44 years to halve renewable costs (a back-of-envelope version of this calculation appears after this list).
  • A 70% learning curve for R&D is highly optimistic. Moreover, a fair share of renewable cost reduction comes from learning-by-doing and scale economies (not shown), which require R3 to be active, not R4. No current deployment, no progress.
  • Spencer’s argument ignores efficiency (not shown), which works regardless of the source of energy. Spurring investment in the fossil loop R1 sends the wrong signal for efficiency, by depressing current prices.
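Here’s the back-of-envelope arithmetic behind that 44-year figure, as a sketch. It assumes cumulative R&D grows at the same 3%/year as the economy, and that the 70% learning curve applies to doublings of cumulative R&D; with those assumptions and continuous compounding the answer lands at roughly 45 years, in the neighborhood of the figure above.

```python
import math

growth = 0.03          # economic (and assumed R&D) growth rate [1/year]
progress_ratio = 0.70  # cost multiplier per doubling of cumulative R&D

# Doublings of cumulative R&D needed to halve cost
doublings = math.log(0.5) / math.log(progress_ratio)   # ~1.94

# Cumulative R&D must grow by this factor...
multiple = 2.0 ** doublings                            # ~3.85x
# ...which, at 3%/year continuous growth, takes this long
years = math.log(multiple) / growth                    # ~45 years

print(f"{doublings:.2f} doublings -> {years:.0f} years to halve renewable costs")
```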

In truth, these feedbacks are already present in many energy models. Most of those are standard economic stuff – equilibrium, rational expectations, etc. – assumptions which favor growth. Yet among the subset that includes endogenous technology, I’m not aware of a single instance that finds a growth+R&D led policy to be optimal or even effective.

It’s time for the techno-optimists like Spencer and Breakthrough to put up or shut up. Either articulate the argument in a formal model that can be shared and tested, or admit that it’s a nice twinkle in the eye that regrettably lacks evidence.

Thorium Dreams

The NY Times nails it in “In Search of Energy Miracles”:

Yet not even the speedy Chinese are likely to get a sizable reactor built before the 2020s, and that is true for the other nuclear projects as well. So even if these technologies prove to work, it would not be surprising to see the timeline for widespread deployment slip to the 2030s or the 2040s. The scientists studying climate change tell us it would be folly to wait that long to start tackling the emissions problem.

Two approaches to the issue — spending money on the technologies we have now, or investing in future breakthroughs — are sometimes portrayed as conflicting with one another. In reality, that is a false dichotomy. The smartest experts say we have to pursue both tracks at once, and much more aggressively than we have been doing.

An ambitious national climate policy, anchored by a stiff price on carbon dioxide emissions, would serve both goals at once. In the short run, it would hasten a trend of supplanting coal-burning power plants with natural gas plants, which emit less carbon dioxide. It would drive some investment into low-carbon technologies like wind and solar power that, while not efficient enough, are steadily improving.

And it would also raise the economic rewards for developing new technologies that could disrupt and displace the ones of today. These might be new-age nuclear reactors, vastly improved solar cells, or something entirely unforeseen.

In effect, our national policy now is to sit on our hands hoping for energy miracles, without doing much to call them forth.

Yep.

h/t Travis Franck

What a real breakthrough might look like

It’s possible that a techno fix will stave off global limits indefinitely, in a Star Trek future scenario. I think it’s a bad idea to rely on it, because there’s no backup plan.

But it’s equally naive to think that we can return to some kind of low-tech golden age. There are too many people to feed and house, and those bygone eras look pretty ugly when you peer under the mask.

But this is a false dichotomy.

Some techno/growth enthusiasts talk about sustainability as if it consisted entirely of atavistic agrarian aspirations. But what a lot of sustainability advocates are after, myself included, is a high-tech future that operates within certain material limits (planetary boundaries, if you will) before those limits enforce themselves in nastier ways. That’s not really too hard to imagine; we already have a high tech economy that operates within limits like the laws of motion and gravity. Gravity takes care of itself, because it’s instantaneous. Stock pollutants and resources don’t, because consequences are remote in time and space from actions; hence the need for coordination.

Continue reading “What a real breakthrough might look like”

Is London a big whale?

Why do cities survive atom bombs, while companies routinely go belly up?

Geoffrey West on The Surprising Math of Cities and Corporations.

There’s another interesting video with West in the conversations at Edge.

West looks at the metabolism of cities, and observes scale-free behavior of good stuff (income, innovation, input efficiency) as well as bad stuff (crime, disease – products of entropy). The destiny of cities, like companies, is collapse, except to the extent that they can innovate at an accelerating rate. Better hope the Singularity is on schedule.
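The empirical core of West’s argument is a power law, Y ≈ Y0·N^β, with β around 1.15 for socioeconomic outputs (good and bad alike) and around 0.85 for infrastructure. A quick toy calculation shows what superlinear scaling implies:

```python
# Urban scaling: Y = Y0 * N**beta (exponents roughly per Bettencourt, West et al.)
beta_socio = 1.15   # income, innovation, crime, disease
beta_infra = 0.85   # roads, cable, gas stations

for scale in (2, 10):
    socio = scale ** beta_socio
    infra = scale ** beta_infra
    print(f"a city {scale}x larger: socioeconomic output x{socio:.2f} "
          f"({scale**(beta_socio-1):.2f}x per capita), "
          f"infrastructure x{infra:.2f} ({scale**(beta_infra-1):.2f}x per capita)")
```

Superlinear returns are what drive the super-exponential growth in West’s framework – growth that heads for a singularity, and hence collapse, unless innovation repeatedly resets the clock.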

Thanks to whoever it was at the SD conference who pointed this out!

The danger of path-dependent information flows on the web

Eli Pariser argues that “filter bubbles” are bad for us and bad for democracy:

As web companies strive to tailor their services (including news and search results) to our personal tastes, there’s a dangerous unintended consequence: We get trapped in a “filter bubble” and don’t get exposed to information that could challenge or broaden our worldview.

Filter bubbles are close cousins of confirmation bias, groupthink, polarization and other cognitive and social pathologies.

A key feedback is this reinforcing loop, from Sterman & Wittenberg’s model of path dependence in Kuhnian scientific revolutions:

(Diagram: confidence in the current theory delays recognition of anomalies)

As confidence in an idea grows, the delay in recognition (or frequency of outright rejection) of anomalous information grows larger. As a result, confidence in the idea – flat earth, 100mpg carburetor – can grow far beyond the level that would be considered reasonable, if contradictory information were recognized.
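Here’s a toy simulation of that loop (not the Wittenberg & Sterman model, just the flavor of the feedback, with invented parameters): confirming results build confidence, anomalies erode it, but the recognition of anomalies gets slower as confidence grows.

```python
def run(delay_fn, confirm_rate=0.10, anomaly_rate=0.30, c0=0.2, dt=0.1, T=60.0):
    """Euler-integrate confidence in an idea: confirmations push it up,
    recognized anomalies pull it down, and recognition may depend on confidence."""
    c = c0
    for _ in range(int(T / dt)):
        recognized = anomaly_rate / delay_fn(c)            # anomalies noticed per unit time
        c += (confirm_rate * (1 - c) - recognized * c) * dt
        c = min(1.0, max(0.0, c))
    return c

# With a fixed, short recognition delay, anomalies bite and confidence levels off
print(f"fixed delay:          confidence -> {run(lambda c: 2.0):.2f}")
# With the delay growing as confidence grows (the reinforcing loop), it climbs much higher
print(f"confidence-dependent: confidence -> {run(lambda c: 2.0 + 30.0 * c):.2f}")
```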

The dynamics resulting from this and other positive feedbacks play out in many spheres. Wittenberg & Sterman give an example:

The dynamics generated by the model resemble the life cycle of intellectual fads. Often a promising new idea rapidly becomes fashionable through excessive optimism, aggressive marketing, media hype, and popularization by gurus. Many times the rapid influx of poorly trained practitioners, or the lack of established protocols and methods, causes expectations to outrun achievements, leading to a backlash and disaffection. Such fads are commonplace, especially in (quack) medicine and most particularly in the world of business, where “new paradigms” are routinely touted in the pages of popular journals of management, only to be displaced in the next issue by what many business people have cynically come to call the next “flavor of the month.”

Typically, a guru proposes a new theory, tool, or process promising to address persistent problems facing businesses (that is, a new paradigm claiming to solve the anomalies that have undermined the old paradigm.) The early adopters of the guru’s method spread the word and initiate some projects. Even in cases where the ideas of the guru have little merit, the energy and enthusiasm a team can bring to bear on a problem, coupled with Hawthorne and placebo effects and the existence of “low hanging fruit” will often lead to some successes, both real and apparent. Proponents rapidly attribute these successes to the use of the guru’s ideas. Positive word of mouth then leads to additional adoption of the guru’s ideas. (Of course, failures are covered up and explained away; as in science there is the occasional fraud as well.) Media attention further spreads the word about the apparent successes, further boosting the credibility and prestige of the guru and stimulating additional adoption.

As people become increasingly convinced that the guru’s ideas work, they are less and less likely to seek or attend to disconfirming evidence. Management gurus and their followers, like many scientists, develop strong personal, professional, and financial stakes in the success of their theories, and are tempted to selectively present favorable and suppress unfavorable data, just as scientists grow increasingly unable to recognize anomalies as their familiarity with and confidence in their paradigm grows. Positive feedback processes dominate the dynamics, leading to rapid adoption of those new ideas lucky enough to gain a sufficient initial following. …

The wide range of positive feedbacks identified above can lead to the swift and broad diffusion of an idea with little intrinsic merit because the negative feedbacks that might reveal that the tools don’t work operate with very long delays compared to the positive loops generating the growth. …

For filter bubbles, I think the key positive loops are as follows:

(Diagram: filter bubble feedback loops)

Loops R1 are the user’s well-worn path. We preferentially visit sites presenting information (theory x or y) in which we have confidence. In doing so, we consider only a subset of all information, building our confidence in the visited theory. This is a built-in part of our psychology, and to some extent a necessary part of the process of winnowing the world’s information fire hose down to a usable stream.

Loops R2 involve the information providers. When we visit a site, advertisers and other observers (Nielsen) notice, and this provides the resources (ad revenue) and motivation to create more content supporting theory x or y. This has also been a part of the information marketplace for a long time.

R1 and R2 are stabilized by some balancing loops (not shown). Users get bored with an all-theory-y diet, and seek variety. Providers seek out controversy (real or imagined) and sensationalize x-vs-y battles. As Pariser points out, there’s less scope for the positive loops to play out in an environment with a few broad media outlets, like city newspapers. The front page of the Bozeman Daily Chronicle has to work for a wide variety of readers. If the paper let the positive loops run rampant, it would quickly lose half its readership. In the online world, with information customized at the individual level, there’s no such constraint.

Individual filtering introduces R3. The filter observes site visit patterns and preferentially serves up information consistent with past preferences. This introduces a third set of reinforcing feedback processes: as users see more of what they prefer, they also learn to prefer what they see. In addition, on Facebook and other social networking sites every person is essentially a site, and people include one another in networks preferentially. This is another mechanism implementing loop R1 – birds of a feather flock together and share information consistent with their mutual preferences, potentially following one another down conceptual rabbit holes.
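A toy version of loops R1/R3 (all parameters invented) makes the lock-in mechanism visible: the filter over-serves whatever the user already prefers, and preference drifts toward whatever gets served.

```python
def simulate(filter_gain, pref_x=0.55, adjust=0.05, steps=200):
    """filter_gain = 1 serves content in proportion to preference;
    filter_gain > 1 over-serves whichever theory the user already favors."""
    for _ in range(steps):
        # Share of served items supporting theory x, amplified by the filter
        served_x = pref_x ** filter_gain / (pref_x ** filter_gain + (1 - pref_x) ** filter_gain)
        # Users learn to prefer what they see
        pref_x += adjust * (served_x - pref_x)
    return pref_x

print(f"no personalization (gain=1):   preference for x -> {simulate(1.0):.2f}")
print(f"mild personalization (gain=2): preference for x -> {simulate(2.0):.2f}")
```

With the filter neutral, a 55/45 mix stays put; with even mild personalization, the reinforcing loop walks the user toward an all-x diet. The balancing loops above (boredom, sensationalized controversy) are what keep this from going all the way in practice.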

The result of the social web and algorithmic filtering is to upset the existing balance of positive and negative feedback. The question is, were things better before, or are they better now?

I’m not exactly sure how to tell. Presumably one could observe trends in political polarization and duration of fads for an indication of the direction of change, but that still leaves open the question of whether we have more or less than the “optimal” quantity of pet rocks, anti-vaccine campaigns and climate skepticism.

My suspicion is that we now have too much positive feedback. This is consistent with Wittenberg & Sterman’s insight from the modeling exercise, that the positive loops are fast, while the negative loops are weak or delayed. They offer a prescription for that,

The results of our model suggest that the long-term success of new theories can be enhanced by slowing the positive feedback processes, such as word of mouth, marketing, media hype, and extravagant claims of efficacy by which new theories can grow, and strengthening the processes of theory articulation and testing, which can enhance learning and puzzle-solving capability.

In the video, Pariser implores the content aggregators to carefully ponder the consequences of filtering. I think that also implies more negative feedback in algorithms. It’s not clear that providers have an incentive to do that though. The positive loops tend to reward individuals for successful filtering, while the risks (e.g., catastrophic groupthink) accrue partly to society. At the same time, it’s hard to imagine a regulatory approach that does not flirt with censorship.

Absent a global fix, I think it’s incumbent on individuals to practice good mental hygiene, by seeking diverse information that stands some chance of refuting their preconceptions once in a while. If enough individuals demand transparency in filtering, as Pariser suggests, it may even be possible to gain some local control over the positive loops we participate in.

I’m not sure that goes far enough though. We need tools that serve the social equivalent of “strengthening the processes of theory articulation and testing” to improve our ability to think and talk about complex systems. One such attempt is the “collective intelligence” behind Climate Colab. It’s not quite Facebook-scale yet, but it’s a start. Semantic web initiatives are starting to help by organizing detailed data, but we’re a long way from having a “behavioral dynamic web” that translates structure into predictions of behavior in a shareable way.

Update: From Tech Review, technology for breaking the bubble

The alien Hail Mary, and other climate policy plays

Cap & Trade is suspended in Europe and dead in the US, and the techno delusion may not be far behind. Some strange bedfellows have lined up behind the idea of R&D-driven climate policy. But now it appears that clean energy research is not a bipartisan no-brainer after all. Energy committee member Rand Paul’s bill would not only cut energy R&D funding by eliminating DOE altogether, it would cut our ability to even monitor the global environment by gutting NOAA and NASA. That only leaves one option:

13 In the otherwise dull year 2327, mankind successfully contacts aliens. Well, technically their answering machine, as the aliens themselves have gone to Alpha Centauri for the summer.

14 Desperate for help, humans leave increasingly stalker-y messages, turning off the aliens with how clingy our species is.

15 The aliens finally agree to equip Earth with a set of planet-saving carbon neutralizers, but work drags on as key parts must be ordered from a foreign supplier in the Small Magellanic Cloud.

16 The job comes in $3.7 quadrillion above estimate. Humanity thinks it is being taken advantage of but isn’t sure.

“20 things you didn’t know about the future,” in Discover

Seriously, where does that leave us? In terms of what we should do, I don’t think much has changed. As I wrote a while back, the climate policy table needs four legs:

  1. Prices
  2. Technology (the landscape of possibilities on which we make decisions)
  3. Institutional rules and procedures
  4. Preferences, operating within social networks

Preferences and technology are really the fundamentals among the four. Technology represents the set of options available to us for transforming energy and resources into life and play. Preferences guide how we choose among those options. Prices and rules are really just the information signals that allow us to coordinate those decisions.

However, neither preferences nor technology is as fundamental as it looks. Models generally take preferences as a given, but in fact they’re endogenous. What we want on a day-to-day basis is far removed from our most existential needs. Instead, we construct preferences on the basis of the technologies we know about, prices, rules, and the preferences and choices of others. That creates norms, fads, marketing, keeping up with the Joneses, and other positive feedback mechanisms. Similarly, technology is more than the discovery of principles and invention of devices. Those innovations don’t do anything until they’re woven into the fabric of society, guided by (you guessed it) prices, institutions, and preferences. That creates more positive feedbacks, like the chicken-and-egg problems of alternative fuel vehicle deployment.

If we could all get up in the morning and work out in our heads how to make Pareto-efficient decisions, we might not need prices and institutions, but we can’t, so we do. Prices matter because they’re a primary carrier of information through the economy. Not every decision is overtly economic, so we also have institutions, rules and routinized procedures to guide behavior. The key is that these signals should serve our values (the deeply held ones we’d articulate upon reflection, which might differ from the preferences revealed by transactions), not the other way around.

Preferences clearly can have a lot of direct leverage on behavior – if we all equated driving a big gas guzzler with breaking wind in a crowded elevator, we’d probably see different cars on the lot. However, most decisions are not so transparent. It’s already hard to choose “paper or plastic?” How about “desktop or server?” When you add multiple layers of supply chain and varied national origins to the picture, it becomes very hard to create a green information system paralleling the price system. It’s probably even harder to get individuals and firms to conform to such a system, when there are overwhelming evolutionary rewards to defection. Borrowing from Giraudoux, the secret to success is sustainability; once you can fake that you’ve got it made.

Similarly, the sheer complexity of society makes it hard to predict which technologies constitute a winning combination for creating low-carbon happiness. A technology-led strategy runs the risk of failing in the attempt to recreate a high-carbon lifestyle with low-carbon inputs.  I don’t think anyone has the foresight to select that portfolio. Even if we could do it, there’s no guarantee that, absent other signals, new technologies will be put to their intended uses, or that they will survive the “valley of death” between R&D and commercialization. It’s like airdropping a tyrannosaurus into an arctic ecosystem – sure, he’s got big teeth, but will he survive?

Complexity also militates against a rules-led approach. It’s simply too cumbersome to codify a rich set of tradeoffs in command-and-control regulations, which can become an impediment to innovation and are subject to regulatory capture. Also, systems like the CAFE standard create shadow prices of compliance, rather than explicit prices. This makes it hard to diagnose the effects of constraints and to coordinate them with other policies. There’s a niche for rules, but they shouldn’t be the big stick (on the other hand, eliminating the legacy of some past measures could be a win-win).

That’s why emissions pricing is really a keystone policy. Once you have prices aligned with the long term value of stable climate (and other resources), it’s easier to align the other legs of the table. Emissions prices create huge incentives for private R&D, leaving a smaller gap for government to fill – just the market failures in appropriation of benefits of technology. The points of pain where institutions are inadequate, or stand in the way of progress, will be more evident and easier to correct, and there will be less burden on policy making institutions, because they won’t have to coordinate many small programs to do the job of one big signal. Preferences will start evolving in a low-carbon direction, with rewards to those who (through luck or altruism) have already done so. Most importantly, emissions pricing gets some changes moving now, not after a decade or two of delay.

Concretely, I still think an upstream, revenue-neutral carbon tax is a practical implementation route. If there’s critical mass among trade partners, it could even evolve into a harmonized global system through the pressure of border carbon adjustments. The question is, how to get started?