Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution

This is a very interesting model, both because it tackles ‘soft’ dynamics of paradigm formation in ‘hard’ science, and because it is an aggregate approach to an agent problem. Unfortunately, until now, the model was only available in DYNAMO, which limited access severely. It turns out to be fairly easy to translate to Vensim using the dyn2ven utility, once you know how to map the DYNAMO array FOR loops to Vensim subscripts.

Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution

J. Wittenberg and J. D. Sterman, 1999

Abstract

What is the relative importance of structural versus contextual forces in the birth and death of scientific theories? We describe a dynamic model of the birth, evolution, and death of scientific paradigms based on Kuhn’s Structure of Scientific Revolutions. The model creates a simulated ecology of interacting paradigms in which the creation of new theories is stochastic and endogenous. The model captures the sociological dynamics of paradigms as they compete against one another for members. Puzzle solving and anomaly recognition are also endogenous. We specify various regression models to examine the role of intrinsic versus contextual factors in determining paradigm success. We find that situational factors attending the birth of a paradigm largely determine its probability of rising to dominance, while the intrinsic explanatory power of a paradigm is only weakly related to the likelihood of success. For those paradigms that do survive the emergence phase, greater explanatory power is significantly related to longevity. However, the relationship between a paradigm’s ‘strength’ and the duration of normal science is also contingent on the competitive environment during the emergence phase. Analysis of the model shows the dynamics of competition and succession among paradigms to be conditioned by many positive feedback loops. These self-reinforcing processes amplify intrinsically unobservable micro-level perturbations in the environment – the local conditions of science, society, and self faced by the creators of a new theory – until they reach macroscopic significance. Such dynamics are the hallmark of self-organizing evolutionary systems.

We consider the implications of these results for the rise and fall of new ideas in contexts outside the natural sciences such as management fads.

Cite as: J. Wittenberg and J. D. Sterman (1999) Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution. Organization Science, 10.

I believe that this version is faithful to the original, but it’s difficult to be sure because the model is stochastic, so the results differ due to differences in the random number streams. For the moment, this model should be regarded as a beta release.


Better Lies

Hoisted from the comments, Miles Parker has a nice reflection on modeling in this video, Why Model Reality.

It might be subtitled “Better Lies,” a reference to modeling as the pursuit of better stories about the world – stories that are never quite true (a variation on the famous Box quote, “All models are wrong but some are useful.”). A few nice points that I picked out along the way:

  • All thinking, even about the future, is retrospective.
  • Big Data is Big Dumb, because we’re collecting more and more detail about a limited subset of reality, and thus suffer from sampling and “if your only tool is a hammer …” bias.
  • A crucial component of a modeling approach is a “bullshit detector” – reality checks that identify problems at various levels on the ladder of inference.
  • Model design is more than software engineering.
  • Often the modeling process is a source of key insights, and you don’t even need to run the model.
  • Modeling is a social process.

Coming back to the comment,

I think one of the greatest values of a model is that it can bring you to the point where you say “There isn’t any way to build a model within this methodology that is not self-contradicting. Therefore everyone in this room is contradicting themselves before they even open their mouths.”

I think that’s close to what Dana Meadows was talking about when she placed paradigms and transcendence of paradigms on the list of places to intervene in systems.

It reminds me of Gödel’s incompleteness theorems. With that as a model, I’d argue that one can construct fairly trivial models that aren’t self-contradictory. They might contradict a lot of things we think we know about the world, but by virtue of their limited expressiveness remain at least true to themselves.

Going back to the elasticity example, if I assert that oilConsumption = oilPrice^epsilon, there’s no internal contradiction as long as I use the same value of epsilon for each proposition I consider. I’m not even sure what an internal contradiction would look like in such a simple framework. However, I could come up with a long list of external consistency problems with the model: dimensional inconsistency, lack of dynamics, omission of unobserved structure, failure to conform to data ….
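For concreteness, the toy demand function might be coded like this. This is a sketch: epsilon and the reference values are illustrative, and normalizing by a reference point is my patch for the dimensional problem, not part of the original assertion.

```python
def oil_consumption(oil_price, epsilon=-0.5, reference_price=100.0, reference_demand=1.0):
    """Constant-elasticity demand: consumption = reference * (price/reference)**epsilon.

    Normalizing by reference values at least patches the dimensional
    inconsistency of the bare power law; the other external-consistency
    criticisms (no dynamics, no unobserved structure, no data) stand.
    """
    return reference_demand * (oil_price / reference_price) ** epsilon

# Internal consistency here just means using the same epsilon in every proposition:
assert oil_consumption(100.0) == 1.0
# A doubling of price cuts consumption by a factor of 2**epsilon:
assert oil_consumption(200.0) == 2.0 ** -0.5
```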

In the same way, I would tend to argue that general equilibrium is an internally consistent modeling paradigm that just happens to have relatively little to do with reality, yet is sometimes useful. I suppose that Frank Ackerman might disagree with me, on the grounds that equilibria are not necessarily unique or stable, which could raise an internal contradiction by violating the premise of the modeling exercise (welfare maximization).

Once you step beyond models with algorithmically simple decision making (like CGE), the plot thickens. There’s Condorcet’s paradox and Arrow’s impossibility theorem, the indeterminacy of Arthur’s El Farol bar problem, and paradoxes of zero discount rates on welfare.
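Condorcet’s paradox, at least, is trivial to exhibit; a minimal sketch with three illustrative voters whose pairwise majorities form a cycle:

```python
from itertools import combinations

# Three voters' preference orders (most to least preferred); purely illustrative.
voters = [("A", "B", "C"),
          ("B", "C", "A"),
          ("C", "A", "B")]

def pairwise_winner(x, y):
    """Return whichever of x, y a majority of voters ranks higher."""
    votes_x = sum(1 for order in voters if order.index(x) < order.index(y))
    return x if votes_x > len(voters) / 2 else y

results = {(x, y): pairwise_winner(x, y) for x, y in combinations("ABC", 2)}
# Majorities prefer A to B, B to C, and C to A - a cycle with no stable winner,
# so "majority rule" is self-contradictory as a preference ordering here.
print(results)
```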

It’s not clear to me that all interesting models of phenomena that give rise to self-contradictions must be self-contradicting though. For example, I suspect that Sterman & Wittenberg’s model of Kuhnian scientific paradigm succession is internally consistent.

Maybe the challenge is that the universe is self-referential and full of paradoxes and irreconcilable paradigms. Therefore as soon as we attempt to formalize our understanding of such a mess, either with nontrivial models, or trivial models assisting complex arguments, we are dragged into the quagmire of self-contradiction.

Personally, I’m not looking for the cellular automaton that runs the universe. I’m just hoping for a little feedback control on things that might make life on earth a little better. Maybe that’s a paradoxical quest in itself.

Elasticity contradictions

If a global oil shock reduces supply 10%, the price of crude will rise to $20,000/barrel, with fuel expenditures consuming more than the entire GDP of importing nations.

At least that’s what you’d predict if you think the price elasticity of oil demand is about -0.02. I saw that number in a Breakthrough post, citing Kevin Drum, citing Early Warning, citing IMF. It’s puzzling that Breakthrough is plugging small price elasticities here, when their other arguments about the rebound effect require elasticities to have large magnitudes.
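The arithmetic behind the headline number is easy to check: inverting constant-elasticity demand, a 10% quantity cut at epsilon = -0.02 multiplies price by 0.9^(1/-0.02) ≈ 194. A sketch, assuming a roughly $100/barrel baseline:

```python
baseline_price = 100.0   # $/barrel, illustrative baseline
epsilon = -0.02          # claimed price elasticity of oil demand
quantity_ratio = 0.90    # a 10% supply shock

# Invert Q/Q0 = (P/P0)**epsilon to get the market-clearing price:
price_ratio = quantity_ratio ** (1.0 / epsilon)   # about 194x
new_price = baseline_price * price_ratio
print(round(new_price))  # on the order of $20,000/barrel
```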

The real constraint on nuclear power: war

A future where everything goes right for nuclear power, with advancing technology driving down costs, making reactors a safe and ubiquitous energy source, and providing a magic bullet for climate change, might bring other surprises.

For example, technology might also make supersonic cruise missiles cheap and ubiquitous.

[image: BrahMos supersonic cruise missile]

The Fukushima operators appear to be hanging in there. But imagine how they’d be coping if someone fired a missile at them once in a while.

Fortunately, reactors today are mostly in places where peace and rule of law prevail.

[image: world map of nuclear reactor locations]

But peace and good governance aren’t exactly the norm in places where emissions are rising rapidly, or the poor need energy.

[image: map of worldwide governance indicators]

Building lots of nuclear power plants is ultimately a commitment to peace, or at least acceptance of rather dreadful consequences of war (not necessarily war with nuclear weapons, but war with conventional weapons turning nuclear reactors into big dirty bombs).

One would hope that abundant, clean energy would reduce the motivation to blow things up, but how much are we willing to gamble on that?

Lakoff on “The Country We Believe In”

George Lakoff has an interesting take on the president’s April 13 budget speech,

Last week, on April 13, 2011, President Obama gave all Democrats and all progressives a remarkable gift. Most of them barely noticed. They looked at the President’s speech as if it were only about budgetary details. But the speech went well beyond the budget. It went to the heart of progressive thought and the nature of American democracy, and it gave all progressives a model of how to think and talk about every issue.

I’m definitely in the “barely noticed” category. The interesting thing, George argues, is that the speech is really about systems. Part concerns a system of values:

The policy topic happened to be the budget, but he called it “The Country We Believe In” for a reason. The real topic was how the progressive moral system defines the democratic ideals America was founded on, and how those ideals apply to specific issues.

More interesting to me, another key theme is systems in the “systems thinking” sense:

Systems Thinking

President Obama, in the same speech, laid the groundwork for another crucial national discussion: systems thinking, which has shown up in public discourse mainly in the form of “systemic risk” of the sort that led to the global economic meltdown. The president brought up systems thinking implicitly, at the center of his budget proposal. He observed repeatedly that budget deficits and “spending” do not occur in isolation. The choice of what to cut and what to keep is a matter of factors external to the budget per se. Long-term prosperity, economic recovery, and job creation, he argued, depend upon maintaining “investments” — investments in infrastructure (roads, bridges, long-distance rail), education, scientific research, renewable energy, and so on. The maintenance of American values, he argued, is outside of the budget in itself, but is at the heart of the argument about what to cut. The fact is that the rich have gotten rich because of the government — direct corporate subsidies, access to publicly-owned resources, access to government research, favorable trade agreements, roads and other means of transportation, education that provides educated workers, tax loopholes, and innumerable government resources are taken advantage of by the rich, but paid for by all of us. What is called a “tax break” for the rich is actually a redistribution of wealth from the poor and middle class—whose incomes have gone down—to those who have considerably more money than they need, money they have made because of tax investments by the rest of America.

The President provided a beautiful example of systems thinking. Under the Republican budget plan, the President would get a $200,000 a year tax break, which would be paid for by cutting programs for seniors, with the result that 33 seniors would be paying $6,000 more a year for health care to pay for his tax break. To see this, you have to look outside of the federal budget to the economic system at large, in which you can see what budget cuts will be balanced by increases in costs to others. A cut here in the budget is balanced by an increase outside the federal budget for real human beings.

When a system has causal effects, as in the above cases, we speak of “systemic causation.” “Systemic risks” are the risks created when there is systemic causation. Systemic causation contrasts with direct causation, as when, say, someone lifts something, or throws something, or shoots someone.

Linguists have discovered that every language studied has direct causation in its grammar, but no language has systemic causation in its grammar. Systemic causation is a harder concept and has to be learned either through socialization or education.

This got me interested in the original speech (transcript, video).

From our first days as a nation, we have put our faith in free markets and free enterprise as the engine of America’s wealth and prosperity. More than citizens of any other country, we are rugged individualists, a self-reliant people with a healthy skepticism of too much government.

But there has always been another thread running throughout our history – a belief that we are all connected; and that there are some things we can only do together, as a nation. We believe, in the words of our first Republican president, Abraham Lincoln, that through government, we should do together what we cannot do as well for ourselves.

There’s some feedback:

Ultimately, all this rising debt will cost us jobs and damage our economy. It will prevent us from making the investments we need to win the future. We won’t be able to afford good schools, new research, or the repair of roads and bridges – all the things that will create new jobs and businesses here in America. Businesses will be less likely to invest and open up shop in a country that seems unwilling or unable to balance its books. And if our creditors start worrying that we may be unable to pay back our debts, it could drive up interest rates for everyone who borrows money – making it harder for businesses to expand and hire, or families to take out a mortgage.

And recognition of systemic pressures for deficits:

But that starts by being honest about what’s causing our deficit. You see, most Americans tend to dislike government spending in the abstract, but they like the stuff it buys. Most of us, regardless of party affiliation, believe that we should have a strong military and a strong defense. Most Americans believe we should invest in education and medical research. Most Americans think we should protect commitments like Social Security and Medicare. And without even looking at a poll, my finely honed political skills tell me that almost no one believes they should be paying higher taxes.

Because all this spending is popular with both Republicans and Democrats alike, and because nobody wants to pay higher taxes, politicians are often eager to feed the impression that solving the problem is just a matter of eliminating waste and abuse – that tackling the deficit issue won’t require tough choices. Or they suggest that we can somehow close our entire deficit by eliminating things like foreign aid, even though foreign aid makes up about 1% of our entire budget.

There’s a bit of dynamics implicit in the discussion (e.g., the role of debt accumulation), but I think one thing is missing: straightforward grappling with worse-before-better behavior. The president proposes to go after waste (a favorite of all politicians) and tax breaks for the rich (far more sensible than the Ryan proposal), but doesn’t quite come to grips with the underlying question of how we can continue to feel prosperous and secure, when fundamentally we can’t (or at least shouldn’t) return to a previous pattern of unsustainable consumption in excess of our income funded by budget, trade and environmental deficits. What we really need, per yesterday’s post, is a reframing of what is now perceived as austerity as an opportunity to live with better health, relationships and security.

I part ways with Lakoff a bit on one topic:

Progressives tend to think more readily in terms of systems than conservatives. We see this in the answers to a question like, “What causes crime?” Progressives tend to give answers like economic hardship, or lack of education, or crime-ridden neighborhoods. Conservatives tend more to give an answer like “bad people — lock ‘em up, punish ‘em.” This is a consequence of a lifetime of thinking in terms of social connection (for progressives) and individual responsibility (for conservatives). Thus conservatives did not see the President’s plan, which relied on systemic causation, as a plan at all for directly addressing the deficit.

Differences in systemic thinking between progressives and conservatives can be seen in issues like global warming and financial reform. Conservatives have not recognized human causes of global warming, partly because they are systemic, not direct. When a huge snowstorm occurred in Washington DC recently, many conservatives saw it as disproving the existence of global warming — “How could warming cause snow?” Similarly, conservatives, thinking in terms of individual responsibility and direct causation, blamed homeowners for foreclosures on their homes, while progressives looked to systemic explanations, seeking reform in the financial system.

Certainly it is true that self-interested denial of feedback (or externalities, as an economist might describe some feedbacks) has found its home in the conservative and libertarian movements. But that doesn’t mean all conservative thought is devoid of systems thinking, and one can easily look back at history and find progressive or liberal policies that have also ignored systemic effects. Indeed, the conservative critique of progressive policies addressing crime and poverty issues has often been evolutionary arguments about the effects of incentives – a very systemic view. The problem is, words don’t provide enough formalism or connection to data to determine whose favorite feedback loops might dominate, so philosophical arguments about the merits of turn-the-other-cheek or an-eye-for-an-eye can go on forever. Models can assist with resolving these philosophical debates. However, at present public discourse is almost devoid of thinking, and often anti-intellectual, which makes it tough to contemplate sophisticated solutions to our problems.

Thanks to James McFarland for the tip.

Tim Jackson on the horns of the growth dilemma

I just ran across a nice talk by Tim Jackson, author of Prosperity Without Growth, on BigIdeas. It’s hard to summarize such a wide-ranging talk, but I’d call it a synthesis of the physical (planetary boundaries and exponential growth) and the behavioral (what is the economy for, how does it influence our choices, and how can we change it?). The horns of the dilemma are that growth can’t go on forever, yet we don’t know how to run an economy that doesn’t grow. (This of course begs the question, “growth of what?” – where the what is a mix of material and non-material things – a distinction that lies at the heart of many communication failures around the Limits to Growth debate.)

There’s an article covering the talk at ABC.au, but it’s really worth a listen at http://mpegmedia.abc.net.au/rn/podcast/2010/07/bia_20100704_1705.mp3

Positive Feedback Pricing

Hat tip to John Sterman & Travis Franck for passing along this cool example of positive feedback, discovered on Amazon by evolutionary biologist Michael Eisen. Two sellers apparently used algorithmic pricing that led to exponential growth of the price of a book:

[image: book price over time]

This reminds me of a phenomenon that’s puzzled me for some time: “new economy” firms have at least as many opportunities for systemic problems as any others, yet modeling remains somewhat “old economy” focused on physical products and supply chains and more traditional services like health care. Perhaps this is just my own observational sampling bias; I’d be curious to know whether others see things the same way.
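The mechanics are easy to reproduce. As Eisen reconstructed it, one seller repeatedly priced just under the other’s price, while the other priced well above (betting on filling orders with the competitor’s copy). The multipliers below are roughly the ones he reported, but treat the whole thing as an illustrative sketch:

```python
undercut_factor = 0.9983    # seller A: price just below seller B's
markup_factor = 1.270589    # seller B: price well above seller A's

price_a, price_b = 50.0, 60.0  # illustrative starting prices
for day in range(30):          # one pricing-bot update per day
    price_a = undercut_factor * price_b
    price_b = markup_factor * price_a

# Each cycle multiplies both prices by ~0.9983 * 1.270589 = 1.268, so the
# coupled rules form a positive loop: prices grow ~27% per update cycle.
print(round(price_a, 2), round(price_b, 2))
```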

Vital lessons

SEED asked eleven researchers to share the single most vital lesson from their life’s work. Every answer is about systems. Two samples:

“You can make sense of anything that changes smoothly in space or time, no matter how wild and complicated it may appear, by reimagining it as an infinite series of infinitesimal changes, each proceeding at a constant (and hence much simpler) rate, and then adding all those simple little changes back together to reconstitute the original whole.”
—Steven Strogatz is a mathematician at Cornell University.

“Many social and natural phenomena—societies, economies, ecosystems, climate systems—are complex evolving webs of interdependent parts whose collective behavior cannot be reduced to a sum of parts; small, gradual changes in any component can trigger catastrophic and potentially irreversible changes in the entire system that can propagate, in domino fashion, even across traditional disciplinary boundaries.”
—George Sugihara is a theoretical biologist at the Scripps Institution of Oceanography.

The rest @ SEED.

Modeling the Ryan proposal

Thanks Pete for pointing out that there is modeling behind the Ryan proposal after all. Macroeconomic Advisers has the kind of in-depth scrutiny of the model results that I love, in The Economic Effects of the Ryan Plan: Assuming the Answer?.

You really should read it, but here are some of the juicier excerpts:

Peek-a-boo

There were actually two sets of results. The first showed real GDP immediately rising by $33.7 billion in 2012 (or 0.2%) relative to the baseline, with total employment rising 831 thousand (or 0.6%) and the civilian unemployment rate falling a stunning 2 percentage points, a decline that persisted for a decade. (This path for the unemployment rate is labeled “First Result” in the table.) The decline in the unemployment rate was greeted — quite correctly, in our view — with widespread incredulity. Shortly thereafter, the initial results were withdrawn and replaced with a second set of results that made no mention of the unemployment rate, but not before we printed a hardcopy! (This is labeled “Second Result” in the table.)

Multiplier Mischief

The simulation shows real federal non-defense purchases down by $37.4 billion in 2012, but real GDP up by $33.7 billion, so the short-run “fiscal multiplier” is negative.[11] As noted above, that analysis was prepared using the GI model of the US economy. We are not intimately familiar with this model but have the impression it is a structural macro model in which near-term movements in GDP are governed by aggregate demand while long-term trends in output are determined by the labor force, the capital stock, and total factor productivity. Obviously we can’t object to this paradigm, since we rely on it, too.

However, precisely because we are so familiar with the characteristics of such systems, we doubt that the GI model, used as intended, shows a negative short-run fiscal multiplier. Indeed, GI’s own discussion of its model makes clear the system does, in fact, have a positive short-run fiscal multiplier.[12] This made us wonder how and on what grounds analysts at Heritage manipulated the system to produce the results reported.

Crowding Out Credibility

So, as we parsed the simulation results, we couldn’t see what was stimulating aggregate demand at unchanged interest rates and in the face of large cuts in government consumption and transfer payments…until we read this:

“Economic studies repeatedly find that government debt crowds out private investment, although the degree to which it does so can be debated. The structure of the model does not allow for this direct feedback between government spending and private investment variables. Therefore, the add factors on private investment variables were also adjusted to reflect percentage changes in publicly held debt (MA italics).”

In sum, we have never seen an investment equation specified this way and, in our judgment, adjusting up investment demand in this manner is tantamount to assuming the answer. If Heritage wanted to show more crowding in, it should have argued for a bigger drop in interest rates or more interest-sensitive investment, responses over which there is legitimate empirical debate. These kinds of adjustments would not have reversed the sign of the short-run fiscal multiplier in the manner that simply adjusting up investment spending did.

Hilarious Housing?

In the simulation, the component of GDP that initially increases most, both in absolute and in percentage terms, is residential investment. This is really hard to fathom. There’s no change in pre-tax interest rates to speak of, hence the after-tax mortgage rate presumably rises with the decline in marginal tax rates even as the proposed tax reform curtails some or all of the mortgage interest deduction. …

The list of problems goes on and on. Macroeconomic Advisers’ bottom line:

In our opinion, however, the macroeconomic analysis released in conjunction with the House Budget Resolution is not relevant to the coming discussion. We believe that the main result — that aggressive deficit reduction immediately raises GDP at unchanged interest rates — was generated by manipulating a model that would not otherwise produce this result, and that the basis for this manipulation is not supported either theoretically or empirically. Other features of the results — while perhaps unintended — seem highly problematic to us and seriously undermine the credibility of the overall conclusions.

This is really unfortunate, both for the policy debate and the modeling profession. Using models as arguments from authority, while manipulating them to produce propagandistic output, poisons the well for all rational inputs to policy debates. Unfortunately, there’s a long history of such practice, particularly in economic forecasting:

Not surprisingly, the forecasts produced by econometric models often don’t square with the modeler’s intuition. When they feel the model output is wrong, many modelers, including those at the “big three” econometric forecasting firms – Chase Econometrics, Wharton Econometric Forecasting Associates, and Data Resources – simply adjust their forecasts. This fudging, or add factoring as they call it, is routine and extensive. The late Otto Eckstein of Data Resources admitted that their forecasts were 60 percent model and 40 percent judgment (“Forecasters Overhaul Models of Economy in Wake of 1982 Errors,” Wall Street Journal, 17 February 1983). Business Week (“Where Big Econometric Models Go Wrong,” 30 March 1981) quotes an economist who points out that there is no way of knowing where the Wharton model ends and the model’s developer, Larry Klein, takes over. Of course, the adjustments made by add factoring are strongly colored by the personalities and political philosophies of the modelers. In the article cited above, the Wall Street Journal quotes Otto Eckstein as conceding that his forecasts sometimes reflect an optimistic view: “Data Resources is the most influential forecasting firm in the country… If it were in the hands of a doom-and-gloomer, it would be bad for the country.”

—John Sterman, A Skeptic’s Guide to Computer Models

As a historical note, GI – Global Insight, maker of the model used by Heritage CDA for the Ryan analysis – is the product of a Wharton/DRI merger, though it appears that the use of the GI model may have been outside their purview in this case.

What’s the cure? I’m not sure there is one as long as people are cherry-picking plausible sounding arguments to back up their preconceived notions or narrow self-interest. But assuming that some people do want intelligent discourse, it’s fairly easy to get it by having high standards for model transparency and quality. This means more than peer review, which often entails only weak checks of face validity of output. It means actual interaction with models, supported by software that makes it easy to identify causal relationships and perform tests in extreme conditions. It also means archiving of models and results for long-term replication and quality improvement. It requires that modelers invest more in testing the limits of their own insights, communicating their learnings and tools, and fostering understanding of principles that help raise the average level of debate.

The delusional revenue side of the Ryan budget proposal

I think the many chapters of health care changes in the Ryan proposal are actually a distraction from the primary change. It’s this:

  • Provides individual income tax payers a choice of how to pay their taxes – through existing law, or through a highly simplified code …
  • Simplifies tax rates to 10 percent on income up to $100,000 for joint filers, and $50,000 for single filers; and 25 percent on taxable income above these amounts. … [A minor quibble: it’s stupid to have a stepwise tax rate, especially with a huge jump from 10 to 25%. Why can’t Congress get a grip on simple ideas like piecewise linearity?]
  • Eliminates the alternative minimum tax [AMT].
  • Promotes saving by eliminating taxes on interest, capital gains, and dividends; also eliminates the death tax.
  • Replaces the corporate income tax – currently the second highest in the industrialized world – with a border-adjustable business consumption tax of 8.5 percent. …

This ostensibly results in a revenue trajectory that rises to a little less than 19% of GDP, roughly the postwar average. The CBO didn’t analyze this; it used a trajectory from Ryan’s staff. The numbers appear to me to be delusional.

For sub-$50k returns in the new 10% bracket, this does not appear to be a break. Of those returns, currently over 2/3 pay less than a 5% average tax rate. It’s not clear what the distribution of income is within this bracket, but an individual would only have to make about $25k to be worse off than the median earner. The same appears to be true in the $100k-200k bracket. A $150k return with a $39k exemption for a family of four would pay 18.5% on average, while the current median is 10-15%. This is certainly not a benefit to wage earners, though the net effect is ambiguous (to me at least) because of the change in treatment of asset income.

The elimination of tax on interest, dividends and capital gains is really the big story here. For returns over $200k, wages are less than 42% of AGI. Interest, dividends and gains are over 35%. The termination of asset taxes means that taxes fall by about a third on high income returns (the elimination of the mortgage interest deduction does little to change that). The flat 25% marginal rate can’t possibly make up for this, because it’s not different enough from the ~20% median effective tax rate in that bracket. For the top 400 returns in the US, exemption of asset income would reduce the income basis by 70%, and reduce the marginal tax rate from the ballpark of 35% to 25%.

It seems utterly delusional to imagine that this somehow returns to something resembling the postwar average tax burden, unless setting taxes on assets to zero is accompanied by a net increase in other taxes (i.e. wages, which constitute about 70% of total income). That in turn implies a tax increase for the lower brackets, a substantial cut on returns over $200k, and a ginormous cut for the very highest earners.

This is all exacerbated by the simultaneous elimination of corporate taxes, which are already historically low and presumably have roughly the same incidence as individual asset income, making the cut another gift to the top decile. Rates fall from 35% at the margin to 8.5% on “consumption” (a misnomer – the title calls it a “business consumption tax,” but the language actually taxes “gross profits,” which is in turn a misnomer because investment is treated as a current-year expense). The repeal of the estate tax, of which 80% is currently collected on estates over $5 million (essentially 0% below $2 million), has a similar distributional effect.

I think it’s reasonable to discuss cutting corporate taxes, which do appear to be cross sectionally high. But if you’re going to do that, you need to somehow maintain the distributional characteristics of the tax system, or come up with a rational reason not to, in the face of increasing inequity of wealth.

I can’t help wondering whether there’s any analysis behind these numbers, or if they were just pulled from a hat by lawyers and lobbyists. This simply isn’t a serious proposal, except for people who are serious about top-bracket tax cuts and drowning the government in a bathtub.

Given that the IRS knows the distribution of individual income in exquisite detail, and that much of the aggregate data needed to analyze proposals like those above is readily available on the web, it’s hard to fathom why anyone would even entertain the idea of discussing a complex revenue proposal like Ryan’s without some serious analytic support and visualization. This isn’t rocket science, or even bathtub dynamics. It’s just basic accounting – perfect stuff for a spreadsheet. So why are we reviewing this proposal with 19th century tools – an overwhelming legal text surrounded by a stew of bogus rhetoric?