Tariff dumbnamics in the penguin island autarky

The formula behind the recent tariffs has spawned a lot of analysis, presumably because there’s plenty of foolishness to go around. Stand Up Maths has a good one:

I want to offer a slightly different take.

What the tariff formula is proposing is really a feedback control system. The goal of the control loop is to extinguish trade deficits (which the administration mischaracterized as “tariffs other countries charge us”). The key loop is the purple one, which establishes the tariff required to achieve balance:

The Tariff here is a stock, because it’s part of the state of the system – a posted price that changes in response to an adjustment process. That process isn’t instantaneous, though lately the implementation time seems to be extremely short.

In this simple model, delta tariff required is the now-famous formula. There’s an old saw that economics is the search for an objective function that makes revealed behavior optimal. Something similar is at work here: discovering the demand function for imports m that makes the delta tariff required correct. There is one:

Imports m = Initial Imports × (1 − Tariff × Pass Through × Price Elasticity)

With that, the loop works exactly as intended: the tariff rises to the level predicted by the initial delta, and the trade imbalance is extinguished:
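
Since the loop is simple, it’s easy to simulate. Here’s a minimal sketch in Python, closed with the demand function above. The trade volumes and the adjustment time are made-up illustrative numbers; the elasticity (4) and pass-through (0.25) are the values used in the calculation note.

```python
# Minimal sketch of the balancing loop, assuming the linear demand
# function above. Trade volumes and adjustment time are illustrative
# assumptions; elasticity and pass-through are the published values.
initial_imports = 100.0  # $B/yr, hypothetical
exports = 60.0           # $B/yr, held fixed, as in the original model
elasticity = 4.0         # price elasticity of import demand
pass_through = 0.25      # tariff pass-through to import prices
adjust_time = 4.0        # periods to implement a tariff change (assumed)
dt = 1.0

tariff = 0.0             # the stock
for _ in range(60):
    # demand function that makes the formula self-consistent
    imports = initial_imports * (1 - tariff * pass_through * elasticity)
    # delta tariff required: deficit / (elasticity * pass-through * imports)
    delta_required = (imports - exports) / (elasticity * pass_through * imports)
    # the stock closes the gap over the adjustment time, not instantly
    tariff += delta_required * dt / adjust_time

print(f"tariff = {tariff:.3f}, imports = {imports:.1f}")  # tariff -> 0.400, imports -> 60.0
```

Note that with this demand function the equilibrium tariff equals the initial delta for any elasticity and pass-through; that’s precisely the sense in which the demand function makes the formula correct.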

So in this case, the model produces the desired behavior, given the assumptions. It just so happens that the assumptions are really dumb.

You could quibble with the functional form and choice of parameters (which the calculation note sources from papers that don’t say what they’re purported to say). But I think the primary problem is omitted structure.

First, in the original model, exports are invariant. That’s obviously not the case, because (a) the tariff increases domestic costs, and therefore export prices, and (b) it’s naive to expect that other countries won’t retaliate. The escalation with China is a good example of the latter.

Second, the prevailing mindset seems to be that trade imbalances can adjust quickly. That’s bonkers. The roots of imbalances are structural, baked into the capacity of industries to produce and transport particular mixes of goods (pink below). Turning over the capital stocks behind those capacities takes years. Changing the technology and human capital involved might take even longer. A whole industrial ecosystem can’t just spring up overnight.

Third, in theory exchange rates are already supposed to be equilibrating trade imbalances, but they’re not. I think this is because they’re tied up with monetary and physical capital flows that aren’t in the kind of simple Ricardian barter model the administration is assuming. Those are potentially big issues, for which there isn’t good agreement about the structure.

I think the problem definition, boundary and goal of the system also need to be questioned. If we succeed in balancing trade, other countries won’t be accumulating dollars and plowing them back into treasuries to finance our debt. What will the bond vigilantes do then? Perhaps we should be looking to get our own fiscal house in order first.

Lately I’ve been arguing for a degree of predictability in some systems. However, I’ve also been arguing that one should attempt to measure the potential predictability of the system. In this case, I think the uncertainties are pretty profound, the proposed model has zero credibility, and better models are elusive, so the tariff formula is not predictive in any useful way. We should be treading carefully, not swinging wildly at a piñata where the candy is primarily trading opportunities for insiders.

Hitching our wagon to the ‘Sick Man of Europe’

Tsar Nicholas I reportedly coined the term “Sick Man of Europe” to refer to the declining Ottoman Empire. Ironically, now it’s Russia that seems sick.

The chart above shows GDP per capita (constant PPP, World Bank). Here’s how this works. GDP growth comes from basically two things: capital accumulation and technology. There are diminishing returns to capital, so in the long run technology is essentially the whole game. Countries that innovate grow, and those that don’t stagnate.

However, when you’re the technology leader, innovation is hard. There’s a tension between making big technical leaps and falling off a cliff because you picked the wrong one. Since the industrial revolution, this has created a glass ceiling of growth at 1.5-2%/year for the leading countries. That rate has prevailed despite huge waves of technology from railroads to microchips, and at low and high tax rates. If you’re below the ceiling, you can grow faster, because you can imitate rather than innovate, and that entails much less risk of failure.

If you’re below the glass ceiling, you can also fail to grow, because growth is essentially a function of MIN( rule of law, functioning markets, etc. ) that enable innovation or imitation. If you don’t have some basic functioning human and institutional capital, you can’t grow. Unfortunately, human rights don’t seem to be a necessary part of the equation, as long as there’s some basic economic infrastructure. On the other hand, there’s some evidence that equity matters, probably because innovation is a bottom-up evolutionary process.
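
Just to make that concrete, here’s a toy version of the MIN() hypothesis in Python. Every parameter here is an arbitrary assumption for illustration, not an estimate of anything:

```python
# Toy growth function: the weakest enabling institution gates growth,
# and distance below the frontier adds an imitation bonus. All numbers
# are invented for illustration.
def growth_rate(rule_of_law, markets, human_capital, gdp_vs_leader):
    ceiling = 0.02                            # frontier "glass ceiling", ~2%/yr
    catch_up = 0.06 * (1.0 - gdp_vs_leader)   # imitation bonus below the frontier
    return min(rule_of_law, markets, human_capital) * (ceiling + catch_up)

# institutions scored 0-1; GDP/cap as a fraction of the leader's
print(growth_rate(0.9, 0.9, 0.9, 1.0))  # frontier country: ~1.8%/yr
print(growth_rate(0.8, 0.8, 0.7, 0.3))  # poor but functional: rapid catch-up (~4.3%/yr)
print(growth_rate(0.2, 0.6, 0.6, 0.4))  # weak rule of law: stagnation (~1.1%/yr)
```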

In the chart above, the US is doing pretty well lately at 1.7%/yr. China has also done very well at 5.8%, which is actually off its highest growth rates, though much of that performance is due to catch-up effects. South Korea, at about 60% of US GDP/cap, grows a little faster at 2.2% (a couple decades back, they were growing much faster, but have now largely caught up).

So why is Russia, poorer than S. Korea, doing poorly at only 0.96%/year? The answer clearly isn’t resource endowment, because Russia is massively rich in that sense. I think the answer is rampant corruption and endless war, where the powerful steal the resources that could otherwise be used for innovation, and the state squanders the rest on conflict.

At current rates, China could surpass Russia in GDP per capita in just 12 years (though it’s likely that growth will slow). At that point, given its far larger population, it would be a vastly larger economy. So why are we throwing our lot in with a decrepit, underperforming nation that finds it easier to steal crypto from Americans than to borrow ideas? I think the US was already at risk from overconcentration of firms and equity effects that destroy education and community, but we’re now turning down a road that leads to a moribund oligarchy.
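
The 12-year figure is just the standard catch-up arithmetic, t = ln(ratio) / (growth difference). A quick check, using rough constant-PPP GDP per capita magnitudes (assumed here for illustration) and the growth rates above:

```python
from math import log

# Catch-up time at constant growth rates:
# t = ln(y_russia / y_china) / (g_china - g_russia)
y_russia, y_china = 36_000, 20_000   # GDP/cap, constant PPP $, rough assumed levels
g_russia, g_china = 0.0096, 0.058    # growth rates quoted above

t = log(y_russia / y_china) / (g_china - g_russia)
print(f"China passes Russia in about {t:.0f} years")  # ~12
```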

Modeling at the speed of BS

I’ve been involved in some debates over legislation here in Montana recently. I think there’s a solid rational case against a couple of bills, but it turns out not to matter, because the supporters have one tool the opponents lack: they’re willing to make stuff up. No amount of reason can overwhelm a carefully-crafted fabrication handed to a motivated listener.

One aspect of this is a consumer problem. If I worked with someone who told me a nice little story about their public doings, and then I was presented with court documents explicitly contradicting their account, they would quickly feel the sole of my boot on their backside. But it seems that some legislators simply don’t care, so long as the story aligns with their predispositions.

But I think the bigger problem is that BS is simply faster and cheaper than the truth. Getting to truth with models is slow, expensive and sometimes elusive. Making stuff up is comparatively easy, and you never have to worry about being wrong (because you don’t care). AI doesn’t help, because it accelerates fabrication and hallucination more than it helps real modeling.

At the moment, there are at least 3 areas where I have considerable experience, working models, and a desire to see policy improve. But I’m finding it hard to contribute, because debates over these issues are exclusively tribal. I fear this is the onset of a new dark age, where oligarchy and the one-party state overwhelm the emerging tools we enjoy.

Was James Madison wrong?

In Federalist 47, Madison considers the adequacy of protections against the consolidation of power, and defends the framework as adequate. Maybe not.

… One of the principal objections inculcated by the more respectable adversaries to the Constitution, is its supposed violation of the political maxim, that the legislative, executive, and judiciary departments ought to be separate and distinct. In the structure of the federal government, no regard, it is said, seems to have been paid to this essential precaution in favor of liberty. The several departments of power are distributed and blended in such a manner as at once to destroy all symmetry and beauty of form, and to expose some of the essential parts of the edifice to the danger of being crushed by the disproportionate weight of other parts. No political truth is certainly of greater intrinsic value, or is stamped with the authority of more enlightened patrons of liberty, than that on which the objection is founded.

The accumulation of all powers, legislative, executive, and judiciary, in the same hands, whether of one, a few, or many, and whether hereditary, self appointed, or elective, may justly be pronounced the very definition of tyranny. Were the federal Constitution, therefore, really chargeable with the accumulation of power, or with a mixture of powers, having a dangerous tendency to such an accumulation, no further arguments would be necessary to inspire a universal reprobation of the system. …


The scoreboard is not looking good.

Executive: GOP. “He who saves his country does not violate any law.” – DJT, quoting Napoleon

Legislative: GOP (House 218/215, Senate 53/45). “Of course, the branches have to respect our constitutional order. But there’s a lot of game yet to be played … I agree wholeheartedly with my friend JD Vance …” – Johnson

Judiciary: GOP appointees 6/3. “Judges aren’t allowed to control the executive’s legitimate power.” – Vance. “The courts should take a step back and allow these processes to play out.” – Johnson. “Held: Under our constitutional structure of separated powers, the nature of Presidential power entitles a former President to absolute immunity.” – Trump v. US (2024)

4th Estate: ? X/Musk/DOGE; WaPo, Fox … Tech bros kiss the ring.

States*: Threatened. “I — well, we are the federal law,” Trump said. “You’d better do it. You’d better do it, because you’re not going to get any federal funding at all if you don’t.” (“L’état, c’est moi.” – Louis XIV)

There have been other times in history when the legislative and executive branches fell under one party’s control. I’m not aware of one that led members to declare that they were not subject to separation of powers. I think what Madison didn’t bank on is the combined power of party and polarization. I think our prevailing winner-take-all electoral systems have led us to this point.

*Updated 2/22

Destroying agency competence

Normally, and maybe ideally, provision of government services is managed by a political process that balances enthusiasm for services received against the cost of the taxes required to provide those services (green loops). There are lots of ways this can go wrong, but at present 3 pathologies seem especially prevalent. The driving force behind these is wealth inequality, because it unbalances the benefits of services and the costs. The benefits generally accrue broadly, whereas costs (taxes) fall where the money is (at least in a flat or progressive system). This means that, if you’re wealthy, it’s cheaper to use FedEx than to fund the USPS, and cheaper to move to a place with clean air than to clean up your refinery. This process is shown with heavy lines below.

The oldest pathology this triggers is outright corruption (red loop), by hijacking agency resources for private gain rather than public benefit. I’m thinking of the mysterious award of a $300m contract to restore Puerto Rico’s electric power to a company with 2 employees, coincidentally acquaintances of Interior Secretary Zinke.

While there may not be anything new under the sun, the other two pathologies seem to be ascendant lately. These rely on the fact that you don’t have to steal an agency’s money if your goal is to quit paying for it. If you can’t defeat it politically in an open contest, because a constituency enjoys its services, you can undermine that support by destroying those services (orange loop). This reminds me of destruction of mail sorting machinery and the general degradation of USPS service that has happened under DeJoy’s tenure.

If you can’t destroy the reality of the agency, you can destroy the perception of the agency by attacking its measurement systems. If, for example, the EPA can’t measure air and water quality, or climate, it not only undermines the ability to operate standards and enforcement, it destroys the ability to even perceive the need for these measurements. This is often easy to do, because measurements don’t have direct constituencies, unlike roads or education. This is the first deadly sin of complex system management, and will leave us effectively flying an airplane with a clown car cockpit. Even worse, it makes it easier for the leaders of these misguided efforts to believe their own BS, and get away with it – at least in the short run.

Limits and Markets

Almost fifty years ago, economists claimed that markets would save us from Limits to Growth. Here’s William Nordhaus, writing about World Dynamics in Measurement without Data (1973):

How’s that working out? I would argue, not well.

Certainly there are functional markets for commodities like oil and gas, but even then a substantial share of the resources is allocated by myopic regulators captive to industry interests.

But for practically everything else, the markets that would in theory allocate across resources, time and space simply don’t exist, even today.

Water markets haven’t prevented the decline of Lake Mead, and they’re resisted widely, including here in Bozeman:

The WSJ explained, quoting Joseph Stiglitz:

A similar pattern could unfold again. But economic forces alone may not be able to fix the problems this time around. Societies as different as the U.S. and China face stiff political resistance to boosting water prices to encourage efficient use, particularly from farmers. …

This troubles some economists who used to be skeptical of the premise of “The Limits to Growth.” As a young economist 30 years ago, Joseph Stiglitz said flatly: “There is not a persuasive case to be made that we face a problem from the exhaustion of our resources in the short or medium run.”

Today, the Nobel laureate is concerned that oil is underpriced relative to the cost of carbon emissions, and that key resources such as water are often provided free. “In the absence of market signals, there’s no way the market will solve these problems,” he says. “How do we make people who have gotten something for free start paying for it? That’s really hard. If our patterns of living, our patterns of consumption are imitated, as others are striving to do, the world probably is not viable.”

What is the price of declining rainforests, reefs or insects? What would markets quote for killing a bird with neonicotinoids, or a wind turbine, or for your Italian songbird pan-fry? What do gravel pits pay for dust and noise emissions, and what will autonomous EVs pay for increased congestion? The answer is almost universally zero. Even things that have received much attention, like emissions of greenhouse gases and criteria air pollutants, are free in most places.

These public goods aren’t free because they’re abundant or unimportant. They’re free because there are no property rights for them, and people resist creating the market mechanisms needed. Everyone loves the free market, until it applies to them. This might be OK if other negative feedback mechanisms picked up the slack, but those clearly aren’t functioning sufficiently either.

Lake Mead and incentives

Since I wrote about Lake Mead ten years ago (1 2 3), things have not improved. It’s down to 1068 feet, holding fairly steady after a brief boost in the wet year 2011-12. The Reclamation outlook has it losing another 60 feet in the next two years.

The stabilization has a lot to do with successful conservation. In Phoenix, for example, water use is down even though population is up. Some of this is technology and habits, and some of it is banishment of “useless grass” and other wasteful practices. MJ describes water cops in Las Vegas:

Investigator Perry Kaye jammed the brakes of his government-issued vehicle to survey the offense. “Uh oh this doesn’t look too good. Let’s take a peek,” he said, exiting the car to handle what has become one of the most existential violations in drought-stricken Las Vegas—a faulty sprinkler.

“These sprinklers haven’t popped up properly, they are just oozing everywhere,” muttered Kaye. He has been policing water waste for the past 16 years, issuing countless fines in that time. “I had hoped I would’ve worked myself out of a job by now. But it looks like I will retire first.”

Enforcement undoubtedly helps, but it strikes me as a band-aid where a tourniquet is needed. While the city is out checking sprinklers, people are free to waste water in a hundred less-conspicuous ways. That’s because standards say “conserve” but the market says “consume” – water is still cheap. As long as that’s true, technology improvements are offset by rebound effects.

Often, cheap water is justified as an equity issue: the poor need low-cost water. But there’s nothing equitable about water rates. The symptom is in the behavior of the top users:

Total and per-capita water use in Southern Nevada has declined over the last decade, even as the region’s population has increased by 14%. But water use among the biggest water users — some of the valley’s wealthiest, most prominent residents — has held steady.

The top 100 residential water users serviced by the Las Vegas Valley Water District used more than 284 million gallons of water in 2018 — over 11 million gallons more than the top 100 users of 2008 consumed at the time, records show. …

Properties that made the top 100 “lists” — which the Henderson and Las Vegas water districts do not regularly track, but compiled in response to records requests — consumed between 1.39 million gallons and 12.4 million gallons. By comparison, the median annual water consumption for a Las Vegas water district household was 100,920 gallons in 2018.
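
A quick sanity check on those figures, using only the numbers quoted:

```python
# Ratios implied by the quoted figures: top-100 users vs. the median
# Las Vegas water district household (100,920 gallons in 2018).
median = 100_920
avg_top100 = 284_000_000 / 100       # 284M gallons across 100 users
print(avg_top100 / median)           # average top-100 user: ~28x the median
print(1_390_000 / median)            # smallest top-100 user: ~14x
print(12_400_000 / median)           # largest: ~123x
```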

In part, I’m sure the top 100 users consume 10 to 100x as much water as the median user because they have 10 to 100x as much money (or more). But this behavior is also baked into the rate structure. At first glance, it’s nicely progressive, like the price tiers for a 5/8″ meter:

A top user (>20k gallons a month) pays almost 4x as much as a first-tier user (up to 5k gallons a month). But … not so fast. There’s a huge loophole. High users can buy down the rate by installing a bigger meter. That means the real rate structure looks like this:

A high user can consume 20x as much water with a 2″ meter before hitting the top rate tier. There’s really no economic justification for this – transaction costs and economies of scale are surely tiny compared to these discounts. The seller (the water district) certainly isn’t trying to push more sales to high-volume users to make a profit.
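
Here’s a stylized version of the loophole in Python. The rates and breakpoints are hypothetical placeholders, and the ~20x threshold scaling for a 2″ meter comes from the comparison above; the point is the shape, not the numbers:

```python
# Stylized tiered water bill where tier breakpoints scale with meter
# size. Rates ($/kgal) and breakpoints (gallons/month) are hypothetical.
TIERS = [(5_000, 1.0), (10_000, 2.0), (20_000, 3.0), (float("inf"), 4.0)]

def monthly_bill(gallons, meter_scale=1.0):
    bill, prev_cap = 0.0, 0.0
    for cap, rate in TIERS:
        cap = cap * meter_scale              # bigger meter -> bigger cheap tiers
        used = max(0.0, min(gallons, cap) - prev_cap)
        bill += used / 1000 * rate
        prev_cap = cap
    return bill

heavy_use = 100_000  # gallons/month
print(monthly_bill(heavy_use, meter_scale=1))   # 5/8" meter: $365, mostly top-tier water
print(monthly_bill(heavy_use, meter_scale=20))  # 2" meter: $100, all at the bottom rate
```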

To me, this looks a lot like CAFE, which allocates more fuel consumption rights to vehicles with larger footprints, and Energy Star, which sets a lower bar for larger refrigerators. It’s no wonder that these policies have achieved only modest gains over multiple decades, while equity has worsened. Until we’re willing to align economic incentives with standards, financing and other measures, I fear that we’re just not serious enough to solve water or energy problems. Meanwhile, exhorting virtue is just a way to exhaust altruism.

Election Fraud and Benford’s Law

Statistical tests only make sense when the assumed distribution matches the data-generating process.

There are several analyses going around that purport to prove election fraud in PA, because the first digits of vote counts don’t conform to Benford’s Law. Here’s the problem: first digits of vote counts aren’t expected to conform to Benford’s Law. So, you might just as well say that election fraud is proved by Newton’s 3rd Law or Godwin’s Law.

Example of bogus conclusions from naive application of Benford’s Law.

Benford’s Law describes the distribution of first digits when the set of numbers evaluated derives from a scale-free or Power Law distribution spanning multiple orders of magnitude: the leading digit d appears with probability log10(1 + 1/d), so about 30% of such numbers start with a 1. Lots of processes generate numbers like this, including Fibonacci numbers and things that grow exponentially. Social networks and evolutionary processes generate Zipf’s Law, which is Benford-conformant.

The trouble is that vote counts may not have this property. Voting district sizes tend to be similar and truncated above (a jurisdiction gets divided into roughly equal chunks), and vote proportions tend to be similar due to gerrymandering and other feedback processes. That violates the assumptions behind Benford’s Law, especially for the first digit.

This doesn’t mean the analysis can’t be salvaged. As a check, look at other elections for the same region. Check the confidence bounds on the test, rather than simply plotting the sample against expectations. Examine the 2nd or 3rd digits to minimize truncation bias. Best of all, throw out Benford and directly simulate a distribution of digits based on assumptions that apply to the specific situation. If what you’re reading hasn’t done these things, it’s probably rubbish.
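
Here’s a minimal example of that last approach: simulate fraud-free counts with realistic structure and see what digits come out. The district sizes and vote shares are arbitrary assumptions, not estimates for any real election:

```python
import numpy as np

# Simulate fraud-free precinct counts with realistic structure:
# similar district sizes, similar vote shares.
rng = np.random.default_rng(0)
size = rng.normal(1200, 300, 10_000).clip(min=100)       # similar district sizes
share = rng.normal(0.55, 0.08, 10_000).clip(0.05, 0.95)  # similar vote shares
counts = (size * share).astype(int)

first_digits = counts.astype(str).astype('U1').astype(int)
observed = np.bincount(first_digits, minlength=10)[1:] / len(counts)
benford = np.log10(1 + 1 / np.arange(1, 10))             # P(d) = log10(1 + 1/d)

for d in range(1, 10):
    print(f"{d}: observed {observed[d-1]:.3f}  benford {benford[d-1]:.3f}")
# Counts cluster around ~650, so middle digits are heavily
# overrepresented and 1s are scarce - with zero fraud in the data.
```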

This is really no different from any other data analysis problem. A statistical test is meaningless unless the assumptions of the test match the phenomena to be tested. You can’t look at lightning strikes the same way you look at coin tosses. You can’t use ANOVA when the samples are non-Normal or have unequal variances, because it assumes Normality and homogeneity of variance. You can’t make a linear fit to a curve, and you can’t ignore dynamics. (Well, you can actually do whatever you want, but don’t propose that the results mean anything.)

CAFE and Policy Resistance

In 2011, the White House announced big increases in CAFE fuel economy standards.

The result has been counterintuitive. But before looking at the outcome, let me correct a misconception. The chart above refers to the “fleetwide average” – but this is the new vehicle fleetwide average, not the average of vehicles on the road. Of course it is the latter that matters for CO2 emissions and other outcomes. The on-the-road average lags the standards by a long time, because the fleet turns over slowly, due to the long lifetime of vehicles. It’s worse than that, because actual performance lags the standards due to loopholes and measurement issues. The EPA puts the 2017 model year here:
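
To see how long that lag can be, here’s a first-order sketch with assumed numbers: a ~15-year average vehicle lifetime, and a stylized standard rising 1 mpg per year.

```python
# The fleet is a stock: only ~1/lifetime of it turns over each year,
# so the on-road average chases the new-vehicle standard slowly.
# Lifetime, initial values, and the standard are all assumptions.
lifetime = 15.0   # years, assumed average vehicle life
on_road = 25.0    # mpg, assumed initial on-road fleet average
for year in range(2012, 2026):
    new_vehicle = 27.0 + (year - 2012)   # stylized rising standard
    on_road += (new_vehicle - on_road) / lifetime
    print(year, f"new = {new_vehicle:.0f} mpg, on-road = {on_road:.1f} mpg")
# By the end the standard is up 13 mpg, but the on-road average has
# gained only about 6 mpg - before loopholes and mix shifts.
```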

But wait … it’s still worse than that. Notice that the future fleetwide average is closer to the car standard than to the truck standard:

That implies that the market share of cars is more than 50%. But look what’s been happening:

The market share of cars is collapsing. (If you look at a longer series, it appears to be the continuation of a long slide.) Presumably this is because, faced with consumer appetites guided by cheap gas and a standards gap between cars and trucks, automakers are doing the rational thing: they’re dumping their car fleets and switching to trucks and SUVs. In other words, they’re moving from the upper curve to the less-constrained lower curve:
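
The arithmetic of the mix shift is straightforward. Fuel use adds in gallons, not mpg, so the combined average is a sales-weighted harmonic mean; the standards below are assumed round numbers, not the actual rule:

```python
# Effect of the car/truck mix on the combined fleet average, computed
# as a sales-weighted harmonic mean of mpg. Standards are assumptions.
def fleet_average_mpg(car_mpg, truck_mpg, car_share):
    return 1.0 / (car_share / car_mpg + (1.0 - car_share) / truck_mpg)

car_std, truck_std = 50.0, 35.0
for share in (0.6, 0.5, 0.4, 0.3):
    avg = fleet_average_mpg(car_std, truck_std, share)
    print(f"car share {share:.0%}: combined average {avg:.1f} mpg")
# 60% cars -> 42.7 mpg; 30% cars -> 38.5 mpg. Same standards, less
# performance, purely from the shift to trucks.
```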

It’s actually worse than that, because within each vehicle class, EPA uses a footprint methodology that essentially assigns greater emissions property rights to larger vehicles.

So, while the CAFE standards seemingly require higher performance, they simultaneously incentivize behavioral responses that offset much of the improvement. The NRC actually wondered if this would happen when it evaluated CAFE about 5 years ago.

Three outcomes related to the size of vehicles in the fleet are possible due to the regulations: Manufacturers could change the size of individual vehicles, they could change the mix of vehicle sizes in their portfolio (i.e., more large cars relative to small cars), or they could change the mix of cars and light trucks.

I think it’s safe to say that yes, we’re seeing exactly these effects in the US fleet. That makes aggregate progress on emissions rather glacial. Transportation emissions have been rising, interrupted only by the financial crisis. That’s because we’re not working all the needed leverage points in the system. We have one rule (CAFE) and one technology (EVs), but we’re not doing anything about prices (a carbon tax) or preferences (e.g., walkable cities). We need a more comprehensive approach if we’re going to beat the unintended consequences.