Was James Madison wrong?

The accumulation of all powers, legislative, executive, and judiciary, in the same hands, whether of one, a few, or many, and whether hereditary, self appointed, or elective, may justly be pronounced the very definition of tyranny.

In Federalist 47, Madison considers whether the Constitution adequately guards against consolidation of power, and defends its framework as sufficient. Maybe not.

… One of the principal objections inculcated by the more respectable adversaries to the Constitution, is its supposed violation of the political maxim, that the legislative, executive, and judiciary departments ought to be separate and distinct. In the structure of the federal government, no regard, it is said, seems to have been paid to this essential precaution in favor of liberty. The several departments of power are distributed and blended in such a manner as at once to destroy all symmetry and beauty of form, and to expose some of the essential parts of the edifice to the danger of being crushed by the disproportionate weight of other parts. No political truth is certainly of greater intrinsic value, or is stamped with the authority of more enlightened patrons of liberty, than that on which the objection is founded.

The accumulation of all powers, legislative, executive, and judiciary, in the same hands, whether of one, a few, or many, and whether hereditary, self appointed, or elective, may justly be pronounced the very definition of tyranny. Were the federal Constitution, therefore, really chargeable with the accumulation of power, or with a mixture of powers, having a dangerous tendency to such an accumulation, no further arguments would be necessary to inspire a universal reprobation of the system. …

The scoreboard is not looking good.

• Executive – GOP: “He who saves his country does not violate any law.” – DJT & Napoleon
• Legislative – GOP (House 218/215, Senate 53/45): “Of course, the branches have to respect our constitutional order. But there’s a lot of game yet to be played … I agree wholeheartedly with my friend JD Vance … ” – Johnson
• Judiciary – GOP 6/3: “Judges aren’t allowed to control the executive’s legitimate power.” – Vance
  “The courts should take a step back and allow these processes to play out.” – Johnson
  “Held: Under our constitutional structure of separated powers, the nature of Presidential power entitles a former President to absolute immunity.” – Trump v US 2024
• 4th Estate – ?: X/Musk/DOGE; WaPo, Fox … Tech bros kiss the ring.

There have been other times in history when the legislative and executive branches fell under one party’s control, but I’m not aware of one that led members to declare that they were not subject to separation of powers. What Madison didn’t bank on, I think, is the combined power of party and polarization, amplified by our prevailing winner-take-all electoral systems.

AI & Copyright

The US Copyright Office has issued its latest opinion on AI and copyright:

https://natlawreview.com/article/copyright-offices-latest-guidance-ai-and-copyrightability

The U.S. Copyright Office’s January 2025 report on AI and copyrightability reaffirms the longstanding principle that copyright protection is reserved for works of human authorship. Outputs created entirely by generative artificial intelligence (AI), with no human creative input, are not eligible for copyright protection. The Office offers a framework for assessing human authorship for works involving AI, outlining three scenarios: (1) using AI as an assistive tool rather than a replacement for human creativity, (2) incorporating human-created elements into AI-generated output, and (3) creatively arranging or modifying AI-generated elements.

The Office’s approach to the use of models seems fairly reasonable to me.

I’m not so enthusiastic about the de facto policy on ingestion of copyrighted material for training models, which library advocates argue is fair use under existing precedent:

https://www.arl.org/blog/training-generative-ai-models-on-copyrighted-works-is-fair-use/

On the question of whether ingesting copyrighted works to train LLMs is fair use, LCA points to the history of courts applying the US Copyright Act to AI. For instance, under the precedent established in Authors Guild v. HathiTrust and upheld in Authors Guild v. Google, the US Court of Appeals for the Second Circuit held that mass digitization of a large volume of in-copyright books in order to distill and reveal new information about the books was a fair use. While these cases did not concern generative AI, they did involve machine learning. The courts now hearing the pending challenges to ingestion for training generative AI models are perfectly capable of applying these precedents to the cases before them.

I get that there are benefits to inclusive training data for LLMs:

Why are scholars and librarians so invested in protecting the precedent that training AI LLMs on copyright-protected works is a transformative fair use? Rachael G. Samberg, Timothy Vollmer, and Samantha Teremi (of UC Berkeley Library) recently wrote that maintaining the continued treatment of training AI models as fair use is “essential to protecting research,” including non-generative, nonprofit educational research methodologies like text and data mining (TDM). …

What bothers me is that allegedly “generative” AI is only accidentally so. I think a better term in many cases might be “regurgitative.” An LLM is really just a big function with a zillion parameters, trained to minimize prediction error on sentence tokens. It may learn some underlying, even unobserved, patterns in the training corpus, but for any unique feature it may essentially be compressing information rather than transforming it. That’s still useful – after all, there are only so many ways to write a Python script to suck tab-delimited text into a dataframe – but it doesn’t seem like such a model deserves much IP protection.

Perhaps the solution is laissez faire – DeepSeek “steals” the corpus the AI corps “transformed” from everyone else, commencing a race to the bottom in which the key tech winds up being cheap and hard to monopolize. That doesn’t seem like a very satisfying policy outcome though.

Free the Waters

San Joaquin Valley water managers were surprised and baffled by water releases initiated by executive order, and the president’s bizarre claims about them.

https://sjvwater.org/trumps-emergency-water-order-responsible-for-water-dump-from-tulare-county-lakes/

It was no game on Thursday when area water managers were given about an hour’s notice that the Army Corps planned to release water up to “channel capacity,” the top amount rivers can handle, immediately.

This policy is dumb in several ways, so it’s hard to know where to start, but I think two pictures tell a pretty good story.

The first key point is that the reservoirs involved are in a different watershed and almost 200 miles from LA, and therefore unlikely to contribute to LA’s water situation. The only connection between the two regions is a massive pumping station that’s expensive to run. Even if it had the capacity, you can’t simply take water from one basin to another, because every drop is spoken for. These water rights are private property, not a policy plaything.

Even if you could magically transport this water to LA, it wouldn’t prevent fires. That’s because fires occur due to fuel and weather conditions. There’s simply no way for imported water to run uphill into Pacific Palisades, moisten the soil, and humidify the air.

In short, no one with even the crudest understanding of SoCal water thinks this is a good idea.

“A decision to take summer water from local farmers and dump it out of these reservoirs shows a complete lack of understanding of how the system works and sets a very dangerous precedent,” said Dan Vink, a longtime Tulare County water manager and principal partner at Six-33 Solutions, a water and natural resource firm in Visalia.

“This decision was clearly made by someone with no understanding of the system or the impacts that come from knee-jerk political actions.”

Health Payer-Provider Escalation and its Side Effects

I was at the doctor’s office for a routine checkup recently – except that it’s not a checkup anymore, it’s a “wellness visit”. Just to underscore that, there’s a sign on the front door, stating that you can’t talk to the doctor about any illnesses. So basically you can’t talk to your doctor if you’re sick. Let the insanity of that sink in. Why? Probably because an actual illness would take more time than your insurance would pay for. It’s not just my doc – I’ve seen this kind of thing in several offices.

To my eyes, US health care is getting worse, and its pathologies are much more noticeable. I think consolidation is a big part of the problem. In many places, the effective number of competing firms is 1 or 2 on the provider side and 2 or 3 on the payer side. I roughly sketched out the effects of this as follows:

The central problem is that the few payers and providers left are in a war of escalation with each other. I suspect it started with the payers’ systematic denial of, or underpayment for, procedure claims. The providers respond by increasing the complexity of their billing, so they can extract every penny. This creates loops R1 and R2.

Then there’s a side effect: as billing complexity and arbitrary denials increase, small providers lack the economies of scale to fight the payer bureaucracy; eventually they have to fold and accept a buyout from a hospital network (R3). The payers have now shot themselves in the foot, because they’re up against a bigger adversary.

But the payers also benefit from this complexity, because transparency becomes so bad that consumers can no longer tell what they’re paying for, and are more likely to overpay for insurance and copays (R7). This too comes with a cost for the payers though, because consumers no longer contribute to cost control (R6).

As this goes on, consumers no longer have choices, because self-insuring becomes infeasible (R5) – there’s nowhere to shop around (R4) and providers (more concentrated due to R3) overcharge the unlucky to compensate for their losses on covered care. The fixed costs of fighting the system are too high for an individual.
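To see how loops R1 and R2 feed each other, here’s a toy difference-equation sketch. The coefficients and functional forms are pure assumptions for illustration, not a calibrated version of the CLD:

```python
# Toy sketch of loops R1 and R2: payers deny more claims, providers
# respond with more elaborate billing, which triggers more denials.
# All coefficients are illustrative assumptions.

def simulate(steps=20, dt=1.0):
    denial_rate = 0.05   # fraction of claims denied (payer tactic)
    complexity = 1.0     # index of provider billing complexity
    history = []
    for _ in range(steps):
        # R1: providers escalate billing complexity in response to denials
        d_complexity = 0.5 * denial_rate * complexity
        # R2: payers escalate denials in response to billing complexity
        d_denial = 0.02 * (complexity - 1.0)
        complexity += d_complexity * dt
        denial_rate = min(1.0, denial_rate + d_denial * dt)
        history.append((denial_rate, complexity))
    return history

hist = simulate()
print(hist[0], hist[-1])  # both variables ratchet upward together
```

Even in this crude form, the coupled reinforcing loops produce the signature behavior: both denial effort and billing complexity ratchet up together, with no equilibrium short of saturation.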

I don’t endorse what Luigi did – it was immoral and ineffective – but I absolutely understand why this system produces rage, violence and the worst bang-for-the-buck in the world.

The insidious thing about the escalating complexity of this system (R1 and R2 again) is that it makes it unfixable. Neither the payers nor the providers can unilaterally arrest this behavior. Nor does tinkering with the rules (as in the ACA) lead to a solution. I don’t know what the right prescription is, but it will have to be something fairly radical: adversaries will have to come together and figure out ways to deliver some real value to consumers, which flies in the face of boardroom pressures for results next quarter.

I will be surprised if the incoming administration dares to wade into this mess, much less with a coherent plan. De-escalating conflict to the benefit of the public is not their forte, nor is grappling with complex systems. Therefore I expect the healthcare oligarchy to get worse. I’m not sure what to prescribe other than to proactively do everything you can to stay out of this dysfunctional system.

Meta thought: I think the CLD above captures some of the issues, but like most CLDs, it’s underspecified (and probably misspecified), and it can’t be formally tested. You’d need a formal model to straighten things out, but that’s tricky too: you don’t want a model that includes all the complexity of the current system, because that’s electronic concrete. You need a modestly complex model of the current system and alternatives, so you can use it to figure out how to rewrite the system.

AI in Climate Sci

RealClimate has a nice article on emerging uses of AI in climate modeling:

To summarise, most of the near-term results using ML will be in areas where the ML allows us to tackle big data type problems more efficiently than we could do before. This will lead to more skillful models, and perhaps better predictions, and allow us to increase resolution and detail faster than expected. Real progress will not be as fast as some of the more breathless commentaries have suggested, but progress will be real.

I think a key point is that AI/ML is not a silver bullet:

Climate is not weather

This is all very impressive, but it should be made clear that all of these efforts are tackling an initial value problem (IVP) – i.e. given the situation at a specific time, they track the evolution of that state over a number of days. This class of problem is appropriate for weather forecasts and seasonal-to-subseasonal (S2S) predictions, but isn’t a good fit for climate projections – which are mostly boundary value problems (BVPs). The ‘boundary values’ important for climate are just the levels of greenhouse gases, solar irradiance, the Earth’s orbit, aerosol and reactive gas emissions etc. Model systems that don’t track any of these climate drivers are simply not going to be able to predict the effect of changes in those drivers. To be specific, none of the systems mentioned so far have a climate sensitivity (of any type).

But why can’t we learn climate predictions in the same way? The problem with this idea is that we simply don’t have the appropriate training data set. …

I think the same reasoning applies to many problems that we tackle with SD: the behavior of interest is way out of sample, and thus not subject to learning from data alone.
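A toy example of the out-of-sample problem (my own, not from the RealClimate post): a nearest-neighbor “emulator” trained only on a narrow historical forcing range has nothing useful to say about a forcing it has never seen – it just returns the edge of its training data.

```python
# Hypothetical linear climate response: 0.8 K per W/m^2 of forcing.
def true_response(forcing):
    return 0.8 * forcing

# Training data covers only a narrow "historical" range: 0..1 W/m^2.
train = [(f / 10.0, true_response(f / 10.0)) for f in range(11)]

def nn_predict(forcing):
    # 1-nearest-neighbor: interpolates fine, extrapolates not at all.
    return min(train, key=lambda p: abs(p[0] - forcing))[1]

# In sample: decent.
assert abs(nn_predict(0.55) - true_response(0.55)) < 0.1
# Out of sample (e.g. a large GHG forcing): badly wrong.
print(nn_predict(4.0), "vs true", true_response(4.0))  # prints: 0.8 vs true 3.2
```

A physical model encodes the response to the boundary condition; a pure learner can only replay the range of conditions it has observed.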

Better Documentation

There’s a recent talk by Stefan Rahmstorf that gives a good overview of the tipping point in the AMOC, which has huge implications.

I thought it would be neat to add the Stommel box model to my library, because it’s a nice low-order example of a tipping point. I turned to a recent update of the model by Wei & Zhang in GRL.

It’s an interesting paper, but its documentation falls short of the standards we like to see in SD, making it a pain to replicate. The good part is that the equations are provided:

The bad news is that the explanation of these terms is brief to the point of absurdity:

This paragraph requires you to maintain a mental stack of no fewer than 12 items to match the symbols to their explanations. You also have to read carefully to discover that ′ means “anomaly” rather than “derivative”.

The supplemental material does at least include a table of parameters – but it’s incomplete. To find the delay taus, for example, you have to consult the text and figure captions, because they vary. Initial conditions are also not conveniently specified.

I like the terse mathematical description of a system because you can readily take in the entirety of a state variable or even the whole system at a glance. But it’s not enough to check the “we have Greek letters” box. You also need to check the “serious person could reproduce these results in a reasonable amount of time” box.

Code would be a nice complement to the equations, though that comes with its own problems: tower-of-Babel language choices and extraneous cruft in the code. In this case, I’d be happy with just a more complete high-level description – at least:

  • A complete table of parameters and units, with values used in various experiments.
  • Inclusion of initial conditions for each state variable.
  • Separation of terms in the RhoH-RhoL equation.

A lot of these issues are things you wouldn’t even know are there until you attempt replication. Unfortunately, that is something reviewers seldom do. But electrons are cheap, so there’s really no reason not to do a more comprehensive documentation job.
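For comparison, here is roughly what a minimally documented, runnable box model can look like. This is a generic Stommel-style two-box sketch with parameter values of my own choosing – NOT a replication of Wei & Zhang’s delayed variant:

```python
# Stommel-style two-box THC sketch. States: dT, dS = pole-equator
# temperature and salinity differences; overturning q is proportional
# to the density difference. All parameter values are illustrative
# assumptions, chosen to exhibit bistability.

def stommel(dT0, dS0, dt=0.01, t_end=50.0,
            tau_T=0.25,   # thermal relaxation time (assumed, fast)
            F_S=0.2,      # freshwater/salinity forcing (assumed)
            k=1.0, alpha=1.0, beta=1.0):
    dT, dS = dT0, dS0
    for _ in range(int(t_end / dt)):   # explicit Euler integration
        q = k * (alpha * dT - beta * dS)          # density-driven flow
        dT += ((1.0 - dT) / tau_T - abs(q) * dT) * dt
        dS += (F_S - abs(q) * dS) * dt
    return dT, dS

# Two starts, two stable modes - the tipping behavior in a nutshell:
print(stommel(1.0, 0.0))  # thermally driven mode, q > 0
print(stommel(1.0, 2.0))  # salinity driven mode, q < 0
```

Twenty lines, with every parameter and initial condition visible – that’s the level of specification that makes replication a five-minute job instead of an archaeology project.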


Destroying agency competence

Normally, and maybe ideally, provision of government services is managed by a political process that balances enthusiasm for services received against the cost of the taxes required to provide those services (green loops). There are lots of ways this can go wrong, but at present 3 pathologies seem especially prevalent. The driving force behind these is wealth inequality, because it unbalances the benefits of services and the costs. The benefits generally accrue broadly, whereas costs (taxes) fall where the money is (at least in a flat or progressive system). This means that, if you’re wealthy, it’s cheaper to use FedEx than to fund the USPS, and cheaper to move to a place with clean air than to clean up your refinery. This process is shown with heavy lines below.

The oldest pathology this triggers is outright corruption (red loop), by hijacking agency resources for private gain rather than public benefit. I’m thinking of the mysterious award of a $300m contract to restore Puerto Rico’s electric power to a company with 2 employees, coincidentally acquaintances of Interior Secretary Zinke.

While there may not be anything new under the sun, the other two pathologies seem to be ascendant lately. These rely on the fact that you don’t have to steal an agency’s money if your goal is to quit paying for it. If you can’t defeat it politically in an open contest, because a constituency enjoys its services, you can undermine that support by destroying those services (orange loop). This reminds me of destruction of mail sorting machinery and the general degradation of USPS service that has happened under DeJoy’s tenure.

If you can’t destroy the reality of the agency, you can destroy the perception of the agency by attacking its measurement systems. If, for example, the EPA can’t measure air and water quality, or climate, it not only undermines the ability to operate standards and enforcement, it destroys the ability to even perceive the need for these measurements. This is often easy to do, because measurements don’t have direct constituencies, unlike roads or education. This is the first deadly sin of complex system management, and will leave us effectively flying an airplane with a clown car cockpit. Even worse, it makes it easier for the leaders of these misguided efforts to believe their own BS, and get away with it – at least in the short run.

A case for strict unit testing

Over on the Vensim forum, Jean-Jacques Laublé points out an interesting bug in the World3 population sector. His forum post includes the model, with a revealing extreme conditions test and a correction. I think it’s important enough to copy my take here:

This is a very interesting discovery. The equations in question are:

maturation 14 to 15 =
 ( ( Population 0 To 14 ) )
 * ( 1
 - mortality 0 to 14 )
 / 15
 Units: Person/year
 The fractional rate at which people aged 0-14 mature into the
 next age cohort (MAT1#5).

**************************************************************
 mortality 0 to 14=
 IF THEN ELSE(Time = 2020 * one year, 1 / one year, mortality 0 to 14 table
 ( life expectancy/one year ) )
 Units: 1/year
 The fractional mortality rate for people aged 0-14 (M1#4).

**************************************************************

(The second is the one modified for the pulse mortality test.)

In the ‘maturation 14 to 15’ equation, the obvious issue is that ‘15’ is a hidden dimensioned parameter. One might argue that this instance is ‘safe’ because 15 years is definitionally the residence time of people in the 0 to 14 cohort – but I would still avoid this usage, and make the 15 yrs a named parameter, like “child cohort duration”, with a corresponding name change to the stock. If nothing else, this would make the structure easier to reuse.

The sneaky bit here, revealed by JJ’s test, is that the ‘1’ in the term (1 – mortality 0 to 14) is not a benign dimensionless number, as we often assume in constructions like 1/(1+a*x). This 1 actually represents the maximum feasible stock outflow rate, in fraction/year, implying that a mortality rate of 1/yr, as in the test input, would consume the entire outflow, leaving no children to mature into the next cohort. This is incorrect, because the maximum feasible outflow rate is 1/TIME STEP, and TIME STEP = 0.5, so that 1 should really be 2 ~ frac/year. This is why maturation wrongly goes to 0 in JJ’s experiment, even though some children remain who should age into the next cohort.

In addition, this construction means that the units in the equation are incorrect – the ‘15’ has to be assumed to be dimensionless for this to work. If we assign correct units to the inputs, we have a problem:

maturation 14 to 15 = ~ people/year/year
 ( ( Population 0 To 14 ) ) ~ people
 * ( 1 - mortality 0 to 14 ) ~ fraction/year
 / 15 ~ 1/year

Obviously the left side of this equation, maturation, cannot be people/year/year.

JJ’s correction is:

maturation 14 to 15=
 ( ( Population 0 To 14 ) )
 * ( 1 - (mortality 0 to 14 * TIME STEP))
 / size of the 0 to 14 population

In this case, the ‘1’ represents the maximum fraction of the population that can flow out in a time step, so it really is dimensionless. (mortality 0 to 14 * TIME STEP) represents the fractional outflow from mortality within the time step, so it too is properly dimensionless (1/year * year). You could also write this term as:

( 1/TIME STEP - mortality 0 to 14 ) / (1/TIME STEP)

In this case you can see that the term is reducing maturation by the fraction of cohort residents who don’t make it to the next age group. 1/TIME STEP represents the maximum feasible outflow, i.e. 2/year if TIME STEP = 0.5 year. In this form, it’s easy to see that this term approaches 1 (no effect) in the continuous time limit as TIME STEP approaches 0.
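A quick numerical check makes the difference concrete. The cohort size of 1000 people is an arbitrary illustration, and I’ve used the 15-year cohort duration directly in place of JJ’s named parameter:

```python
# Numerical check of the pulse test: TIME STEP = 0.5 yr, a mortality
# pulse of 1/yr, and an illustrative cohort of 1000 children.

TIME_STEP = 0.5   # years
pop = 1000.0      # Population 0 To 14, people
mortality = 1.0   # 1/year, the pulse test input

# Survivors after one step: deaths = pop * mortality * TIME_STEP.
survivors = pop - pop * mortality * TIME_STEP
print(survivors)  # 500.0 people - half the cohort is still alive

# Original form: the '1' is implicitly the max outflow rate (1/yr),
# so the 1/yr mortality pulse wrongly zeroes out maturation.
maturation_buggy = pop * (1 - mortality) / 15
print(maturation_buggy)  # 0.0 people/year

# Corrected form: the '1' is the max fraction that can flow out in one
# TIME STEP, so mortality is converted to a per-step fraction first.
maturation_fixed = pop * (1 - mortality * TIME_STEP) / 15
print(maturation_fixed)  # ~33.3 people/year - survivors still mature
```

With the pulse applied, half the cohort survives the step, yet the original formulation reports zero maturation – exactly the inconsistency JJ’s extreme conditions test exposes.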

I should add that these issues probably have only a tiny influence on the kind of experiments performed in Limits to Growth, and certainly wouldn’t change the qualitative conclusions. However, I think there’s still a strong argument for careful attention to units: a model that’s right for the wrong reasons is a danger to future users (including yourself), who might use it in unanticipated ways that challenge its robustness in extremes.

AI, population and limits

Elon says we’re in danger of a population crash.

Interestingly, he invokes Little’s Law: “UN projections are utter nonsense. Just multiply last year’s births by life expectancy.” Doing his math, 135 million births/year * 71 years life expectancy = 9.6 billion people in equilibrium. Hardly a crash. And, of course, life expectancy is going up (US excepted).
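The arithmetic is easy to verify – Little’s Law says equilibrium population = arrival rate × residence time:

```python
# Sanity check of the back-of-envelope estimate, using the figures
# quoted above (Little's Law: L = lambda * W).

births_per_year = 135e6   # births/year, as quoted
life_expectancy = 71      # years, as quoted

equilibrium_population = births_per_year * life_expectancy
print(equilibrium_population)  # 9585000000.0 - about 9.6 billion
```
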

But Elon also says AI is going to do all the work.

So what exactly do we need all those people for? A lower population, with no work to do and more resources per capita, sounds pretty good to me. But apparently, they’re not for here. “If there aren’t enough people for Earth, then there definitely won’t be enough for Mars.”

Surely he knows that the physics of moving a significant chunk of Earth’s population to Mars is sketchy, and that it will likely be a homegrown effort, unconstrained by the availability of Earthlings?


Morons Controlling Weather

For the last 30 years, I’ve been hearing from climate skeptics that man can’t possibly affect the climate. Now MTG says it’s all a lie!

Hilarious that this reverses the usual conflation of weather and climate. I’d say this is so dumb it beggars the imagination, but I’ve heard so much dumb climate denial, this is barely top-10.

Still waiting for that new Maunder Minimum, by the way.