Cheese is Murder

Needlessly provocative title notwithstanding, the dairy industry has to be one of the most spectacular illustrations of the battle for control of system leverage points. In yesterday’s NYT:

Domino’s Pizza was hurting early last year. Domestic sales had fallen, and a survey of big pizza chain customers left the company tied for the worst tasting pies.

Then help arrived from an organization called Dairy Management. It teamed up with Domino’s to develop a new line of pizzas with 40 percent more cheese, and proceeded to devise and pay for a $12 million marketing campaign.

Consumers devoured the cheesier pizza, and sales soared by double digits. “This partnership is clearly working,” Brandon Solano, the Domino’s vice president for brand innovation, said in a statement to The New York Times.

But as healthy as this pizza has been for Domino’s, one slice contains as much as two-thirds of a day’s maximum recommended amount of saturated fat, which has been linked to heart disease and is high in calories.

And Dairy Management, which has made cheese its cause, is not a private business consultant. It is a marketing creation of the United States Department of Agriculture — the same agency at the center of a federal anti-obesity drive that discourages over-consumption of some of the very foods Dairy Management is vigorously promoting.

Urged on by government warnings about saturated fat, Americans have been moving toward low-fat milk for decades, leaving a surplus of whole milk and milk fat. Yet the government, through Dairy Management, is engaged in an effort to find ways to get dairy back into Americans’ diets, primarily through cheese.

Now recall Donella Meadows’ list of system leverage points:

Leverage points to intervene in a system (in increasing order of effectiveness)
12. Constants, parameters, numbers (such as subsidies, taxes, standards)
11. The size of buffers and other stabilizing stocks, relative to their flows
10. The structure of material stocks and flows (such as transport network, population age structures)
9. The length of delays, relative to the rate of system changes
8. The strength of negative feedback loops, relative to the effect they are trying to correct against
7. The gain around driving positive feedback loops
6. The structure of information flow (who does and does not have access to what kinds of information)
5. The rules of the system (such as incentives, punishment, constraints)
4. The power to add, change, evolve, or self-organize system structure
3. The goal of the system
2. The mindset or paradigm that the system – its goals, structure, rules, delays, parameters – arises out of
1. The power to transcend paradigms

The dairy industry has become a master at exercising these points, in particular using #4 and #5 to influence #6, resulting in interesting conflicts about #3.

Specifically, Dairy Management is funded by a “checkoff” (effectively a tax) on dairy output. That money basically goes to marketing of dairy products. A fair amount of that is done in stealth mode, through programs and information that appear to be generic nutrition advice, but happen to be funded by the NDC, CNFI, or other arms of Dairy Management. For example, there’s http://www.nutritionexplorations.org/ – for kids, they serve up pizza:

nutritionexplorations

That slice of “combination food” doesn’t look very nutritious to me, especially if it’s from the new Domino’s line DM helped create. Notice that it’s cheese pizza, devoid of toppings. And what’s the gratuitous bowl of mac & cheese doing there? Elsewhere, their graphics reweight the food pyramid (already a grotesque product of lobbying science) to give all components equal visual weight. This systematic slanting of nutrition information is a nice example of my first deadly sin of complex system management.

A conspicuous target of dubious dairy information is school nutrition programs. Consider this, from GotMilk:

Flavored milk contributes only small amounts of added sugars to children’s diets. Sodas and fruit drinks are the number one source of added sugars in the diets of U.S. children and adolescents, while flavored milk provides only a small fraction (< 2%) of the total added sugars consumed.

It’s tough to fact-check this, because the citation doesn’t match the journal. But the claim that flavored milk provides only a small fraction of sugars looks like a red herring: the fraction is small because flavored milk is a small share of intake, not because each serving contributes little sugar. Much of the rest of the information provided is a similar riot of conflated correlation and causation, backed by dairy-sponsored research. I have to wonder whether innovations like flavored milk are helpful, because they displace sugary soda, or just one more trip around a big eroding goals loop that results in kids who won’t eat anything without sugar in it.
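A little arithmetic shows how the red herring works. The numbers below are invented for illustration (they are not from the cited study): even when flavored milk carries substantial sugar per serving, its small share of beverage intake makes its fraction of total added sugars look tiny.

```python
# Illustrative numbers only, not from the cited study: grams of added sugar
# per serving, and hypothetical weekly servings for a sweet-toothed kid.
sugar_per_serving = {"soda": 39.0, "fruit drink": 30.0, "flavored milk": 12.0}
servings_per_week = {"soda": 12.0, "fruit drink": 5.0, "flavored milk": 1.0}

total = sum(sugar_per_serving[k] * servings_per_week[k] for k in sugar_per_serving)
for k in sugar_per_serving:
    share = sugar_per_serving[k] * servings_per_week[k] / total
    print(f"{k}: {share:5.1%} of weekly added sugars")
```

With these made-up numbers, flavored milk comes out under 2% of added sugars, not because a serving is sugar-free (12 g is real), but because it’s one serving out of eighteen.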

Elsewhere in the dairy system, there are price supports for producers at one end of the supply chain. At the consumer end, there are price ceilings, meant to preserve the affordability of dairy products. It’s unclear what this bizarre system of incentives at cross-purposes really delivers, other than confusion.

The fundamental problem, I think, is that there’s no transparency: no immediate feedback from eating patterns to health outcomes, and little visibility of the convoluted system of rules and subsidies. That leaves marketers and politicians free to push whatever they want.

So, how to close the loop? Unfortunately, many eaters appear to be uninterested in closing the loop themselves by actively seeking unbiased information, or even actively resist information contrary to their current patterns as the product of some kind of conspiracy. That leaves only natural selection to close the loop. Not wanting to experience that personally, I implemented my own negative feedback loop. I bought a cholesterol meter and modified my diet until I routinely tested OK. Sadly, that meant no more dairy.

Election Reflection

Jay Forrester’s 1971 Counterintuitive Behavior of Social Systems sums up this election pretty well for me.

… social systems are inherently insensitive to most policy changes that people choose in an effort to alter the behavior of systems. In fact, social systems draw attention to the very points at which an attempt to intervene will fail. Human intuition develops from exposure to simple systems. In simple systems, the cause of a trouble is close in both time and space to symptoms of the trouble. If one touches a hot stove, the burn occurs here and now; the cause is obvious. However, in complex dynamic systems, causes are often far removed in both time and space from the symptoms. True causes may lie far back in time and arise from an entirely different part of the system from when and where the symptoms occur. However, the complex system can mislead in devious ways by presenting an apparent cause that meets the expectations derived from simple systems. A person will observe what appear to be causes that lie close to the symptoms in both time and space—shortly before in time and close to the symptoms. However, the apparent causes are usually coincident occurrences that, like the trouble symptom itself, are being produced by the feedback-loop dynamics of a larger system.

Translation: economy collapses under a Republican administration. Democrats fail to fix it, partly for lack of knowledge of correct action but primarily because it’s unfixable on a two-year time scale. Voters who elected the Dems by a large margin forget the origins of the problem, become dissatisfied and throw the bums out, but replace them with more clueless bums.

… social systems seem to have a few sensitive influence points through which behavior can be changed. These high-influence points are not where most people expect. Furthermore, when a high-influence policy is identified, the chances are great that a person guided by intuition and judgment will alter the system in the wrong direction.

Translation: everyone suddenly becomes a deficit hawk at the worst possible time, even though they don’t know whether Obama is a Keynesian.

The root of the problem:

Mental models are fuzzy, incomplete, and imprecisely stated. Furthermore, within a single individual, mental models change with time, even during the flow of a single conversation. The human mind assembles a few relationships to fit the context of a discussion. As debate shifts, so do the mental models. Even when only a single topic is being discussed, each participant in a conversation employs a different mental model to interpret the subject. Fundamental assumptions differ but are never brought into the open. Goals are different but left unstated.

It is little wonder that compromise takes so long. And even when consensus is reached, the underlying assumptions may be fallacies that lead to laws and programs that fail.

Still,

… there is hope. It is now possible to gain a better understanding of dynamic behavior in social systems. Progress will be slow. There are many cross-currents in the social sciences which will cause confusion and delay. … If we proceed expeditiously but thoughtfully, there is a basis for optimism.

Modelers: you're not competing

Well, maybe a little, but it doesn’t help.

From time to time we at Ventana encounter consulting engagements where the problem space is already occupied by other models. Typically, these are big, detailed models from academic or national lab teams who’ve been working on them for a long time. For example, in an aerospace project we ran into detailed point-to-point trip generation models and airspace management simulations with every known airport and aircraft in them. They were good, but cumbersome and expensive to run. Our job was to take a top-down look at the big picture, integrating the knowledge from the big but narrow models. At first there was a lot of resistance to our intrusion, because we consumed some of the budget, until it became evident that the existence of the top-down model added value to the bottom-up models by placing them in context, making their results more relevant. The benefit was mutual, because the bottom-up models provided grounding for our model that otherwise would have been very difficult to establish. I can’t quite say that we became one big happy family, but we certainly developed a productive working relationship.

I think situations involving complementary models are more common than head-to-head competition among models that serve the same purpose. Even where head-to-head competition does exist, it’s healthy to have multiple models, especially if they embody different methods. (The trouble with global climate policy is that we have many models that mostly embody the same general equilibrium assumptions, and thus differ only in detail.) Rather than getting into methodological pissing matches, modelers should be seeking the synergy among their efforts and making it known to decision makers. That helps to grow the pie for all modeling efforts, and produces better decisions.

Certainly there are exceptions. I once ran across a competing vendor doing marketing science for a big consumer products company. We were baffled by the high R^2 values they were reporting (.92 to .98), so we reverse engineered their model from the data and some slides (easy, because it was a linear regression). It turned out that the great fits were due to the use of 52 independent parameters to capture seasonal variation on a weekly basis. Since there were only 3 years of data (i.e. 3 points per parameter), we dubbed that the “variance eraser.” Replacing the 52 parameters with a few targeted at holidays and broad variations resulted in more realistic fits, and also revealed problems with inverted signs (presumably due to collinearity) and other typical pathologies. That model deserved to be displaced. Still, we learned something from it: when we looked cross-sectionally at several variants for different products, we discovered that coefficients describing the sales response to advertising were dependent on the scale of the product line, consistent with our prior assertion that effects of marketing and other activities were multiplicative, not additive.
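The variance eraser is easy to reproduce with synthetic data. The sketch below is my own reconstruction, not the vendor’s actual model: it fits three years of weekly data once with 52 week-of-year dummies and once with a parsimonious seasonal term. The dummy model’s R² is inflated mostly by fitting noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n_weeks = 156                      # three years of weekly data
week = np.arange(n_weeks) % 52     # week of year

# Synthetic sales: modest true seasonality buried in a lot of noise
sales = 100 + 5.0 * np.sin(2 * np.pi * week / 52) + rng.normal(0, 10, n_weeks)

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# "Variance eraser": one dummy per week of year -> 52 parameters, 3 obs each
X_eraser = np.zeros((n_weeks, 52))
X_eraser[np.arange(n_weeks), week] = 1.0

# Parsimonious alternative: intercept plus one annual sine/cosine pair
X_lean = np.column_stack([np.ones(n_weeks),
                          np.sin(2 * np.pi * week / 52),
                          np.cos(2 * np.pi * week / 52)])

r2_eraser = r_squared(X_eraser, sales)
r2_lean = r_squared(X_lean, sales)
print(f"52-dummy R^2: {r2_eraser:.3f}   3-parameter R^2: {r2_lean:.3f}")
```

Both designs capture the same true seasonality, but the 52 dummies also absorb a third of the pure noise (3 observations per parameter), so the fit statistic flatters the model without improving its predictions.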

The reality is that the need for models is almost unlimited.  The physical sciences are fairly well formalized, but models span a discouragingly small fraction of the scope of human behavior and institutions. We need to get the cost of providing insight down, not restrict the supply through infighting. The real enemy is seldom other models, but rather superstition, guesswork and propaganda.

Ben Franklin, systems thinker

I find that many great thinkers are systems thinkers, even if they don’t use the lingo of feedback. Here’s a great example, in which Ben Franklin anticipates the American revolution, describing forces that could bring it about:

TO THE COMMITTEE OF CORRESPONDENCE IN MASSACHUSETTS

London, May 15, 1771.

GENTLEMEN,

I have received your favour of the 27th of February, with the journal of the House of Representatives, and copies of the late oppressive prosecutions in the Admiralty Court, which I shall, as you direct, communicate to Mr. Bollan, and consult with him on the most advantageous use to be made of them for the interest of the province.

I think one may clearly see, in the system of customs [import taxes] to be exacted in America by act of Parliament, the seeds sown of a total disunion of the two countries, though, as yet, that event may be at a considerable distance. The course and natural progress seems to be, first, the appointment of needy men as officers, for others do not care to leave England; then, their necessities make them rapacious, their office makes them proud and insolent, their insolence and rapacity make them odious, and, being conscious that they are hated, they become malicious; their malice urges them to a continual abuse of the inhabitants in their letters to administration, representing them as disaffected and rebellious, and (to encourage the use of severity) as weak, divided, timid, and cowardly. Government believes all; thinks it necessary to support and countenance its officers; their quarrelling with the people is deemed a mark and consequence of their fidelity; they are therefore more highly rewarded, and this makes their conduct still more insolent and provoking.

The resentment of the people will, at times and on particular incidents, burst into outrages and violence upon such officers, and this naturally draws down severity and acts of further oppression from hence. The more the people are dissatisfied, the more rigor will be thought necessary; severe punishments will be inflicted to terrify; rights and privileges will be abolished; greater force will then be required to secure execution and submission; the expense will become enormous; it will then be thought proper, by fresh exactions, to make the people defray it; thence, the British nation and government will become odious, the subjection to it will be deemed no longer tolerable; war ensues, and the bloody struggle will end in absolute slavery to America, or ruin to Britain by the loss of her colonies; the latter most probable, from America’s growing strength and magnitude.

….

I do not pretend to the gift of prophecy. History shows, that, by these steps, great empires have crumbled heretofore; and the late transactions we have so much cause to complain of show, that we are in the same train, and that, without a greater share of prudence and wisdom, than we have seen both sides to be possessed of, we shall probably come to the same conclusion….

With great esteem and respect, I have the honour to be, &c.

B. FRANKLIN.

This translates readily into a rich causal loop diagram (click the image to enlarge):

Franklin anticipates the revolution

My CLD here is basically a direct translation of the letter. That makes it sound a little more like a cycle of events, and less like interaction of quantities that can vary, than I would like. I think it could be refined somewhat by aggregating related concepts and rearranging a few links. For example, war is really just an escalation of violence, so one could simplify by treating the level of violence more generically.

The interesting thing about this diagram is that it’s all positive loops. Presumably the “prudence and wisdom” that Franklin noted would have created negative loops that would have stabilized the situation. What were they?

I bet a lot of the same dynamics are in the DOD Afghanistan counterinsurgency diagram.

Thanks to Dan Proctor for the original letter & idea.

The Vensim CLD is here if you want to play: franklin.mdl

Football physics & perception

SEED has a nice story on perception of curving shots in football (soccer).

The physics of the curving trajectory is interesting. In short, a light spinning ball can transition from a circular trajectory to a tighter spiral, surprising the goalkeeper.

What I find really interesting, though, is that goalkeepers don’t anticipate this.

But goalkeepers see hundreds of free kicks in practice on a daily basis. Surely they’d eventually adapt to bending shots, wouldn’t they?

… Elite professionals from some of the top soccer clubs in the world were shown simulations of straight and bending free kicks, which disappeared from view 10 to 12.5 meters from the goal. They then had to predict the path of the ball. The players were accurate for straight kicks, but they made systematic errors on bending shots. Instead of taking the curve into account, players tended to assume the ball would continue straight along the path it was following when it disappeared. Even more surprisingly, goalkeepers were no better at predicting the path of bending balls than other players. …

I think the interesting question is, could they be trained to anticipate this? It’s fairly easy for the goalie to observe the early trajectory of a ball, but due to the nonlinear transition to a new curvature, that’s not helpful. To guess whether the ball might suddenly take a wicked turn, one would have to judge its spin, which has to be much harder. My guess is that prediction is difficult, so the only option is to take robust action. In the case of the famous Carlos shot, one might guess that the goalie should have moved to cover the post, even if he judged that the ball would be wide. (But who am I to say? I’m a lousy soccer player – I find 9-year-olds to be stiff competition.)
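For intuition about why the late break is so surprising, here’s a rough top-down (2D) simulation of a spinning free kick. The ball properties are standard, but the drag coefficient and the linear lift law Cl = 1.2·S are illustrative guesses, not fitted to real shots. The mechanism is the point: drag slows the ball, the spin parameter ωR/v rises, so the Magnus force grows relative to v² and the turn radius tightens late in flight.

```python
import numpy as np

# Top-down (2D) sketch of a spinning free kick, SI units. Drag coefficient
# and the lift law Cl = 1.2*S are rough illustrative guesses.
dt, m, R = 0.001, 0.43, 0.11            # step (s), ball mass (kg), radius (m)
rho, A = 1.2, np.pi * R**2              # air density (kg/m^3), cross-section (m^2)
Cd, omega = 0.25, 50.0                  # drag coefficient, spin rate (rad/s)

pos, vel = np.zeros(2), np.array([30.0, 0.0])   # ~108 km/h strike
radii = []
for _ in range(1000):                   # ~1 s of flight
    v = np.linalg.norm(vel)
    S = omega * R / v                   # spin parameter: grows as drag slows the ball
    Cl = 1.2 * S                        # hypothetical linear lift regime
    drag = -0.5 * rho * A * Cd * v * vel
    magnus = 0.5 * rho * A * Cl * v * np.array([-vel[1], vel[0]])  # perpendicular to vel
    vel += (drag + magnus) / m * dt
    pos += vel * dt
    radii.append(v**2 / (np.linalg.norm(magnus) / m))  # instantaneous turn radius
print(f"turn radius: {radii[0]:.0f} m early -> {radii[-1]:.0f} m late in flight")
```

With lift roughly linear in spin parameter, lateral acceleration scales with v rather than v², so the turn radius shrinks in proportion to speed: a ball that starts on a gentle arc finishes on a markedly tighter one, exactly the transition a keeper extrapolating the early path will miss.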

SEED has another example:

I wrote about a similar problem on my blog earlier this year: How baseball fielders track fly balls. Researchers found that even when the ball is not spinning, outfielders don’t follow the optimum path to the ball—instead they constantly update their position in response to the ball’s motion.

At first this sounds like a classic lion-gazelle pursuit problem. But there’s one key difference: in pursuit problems I’ve seen, the opponent’s location is known, so the questions are all about physics and (maybe) strategic behavior. In soccer and baseball, at least part of the ball’s state (spin, for example) is at best poorly observed by the receiver. Therefore trajectories that appear to be suboptimal might actually be robust responses to imperfect measurement.

The problems faced by goalies and outfielders are in some ways much like those facing managers: what do you do, given imperfect information about a surprisingly nonlinear world?

SD & ABM: Don't throw stones; build bridges

There’s an old joke:

Q: Why are the debates so bitter in academia?

A: Because the stakes are so low.

The stakes are actually very high when models intersect with policy, I think, but sometimes academic debates come across as needlessly petty. That joke came to mind when a colleague shared this presentation abstract:

Pathologies of System Dynamics Models or “Why I am Not a System Dynamicist”

by Dr. Robert Axtell

So-called system dynamics (SD) models are typically interpreted as a summary or aggregate representation of a dynamical system composed of a large number of interacting entities. The high dimensional microscopic system is abstracted (notionally if not mathematically) into a ‘compressed’ form, yielding the SD model. In order to be useful, the reduced form representation must have some fidelity to the original dynamical system that describes the phenomena under study. In this talk I demonstrate formally that even so-called perfectly aggregated SD models will in general display a host of pathologies that are a direct consequence of the aggregation process. Specifically, an SD model can exhibit spurious equilibria, false stability properties, modified sensitivity structure, corrupted bifurcation behavior, and anomalous statistical features, all with respect to the underlying microscopic system. Furthermore, perfect aggregation of a microscopic system into an SD representation will generally be either not possible or not unique.

Finally, imperfectly aggregated SD models (surely the norm) can possess still other troublesome features. From these purely mathematical results I conclude that there is a definite sense in which even the best SD models are at least potentially problematical, if not outright mischaracterizations of the systems they purport to describe. Such models may have little practical value in decision support environments, and their use in formulating policy may even be harmful if their inadequacies are insufficiently understood.

In a technical sense, I agree with everything Axtell says.

However, I could equally well give a talk titled, “pathologies of agent models.” The pathologies might include ungrounded representation of agents, overuse of discrete logic and discrete time, failure to nail down alternative hypotheses about agent behavior, and insufficient exploration of sensitivity and robustness. Notice that these are common problems in practice, rather than problems in principle, because in principle one would always prefer a disaggregate representation. The problem is that we don’t build models in principle; we build them in practice. In reality resources – including data, time, computing, statistical methods, and decision maker attention – are limited. If you want more disaggregation, you’ve got to have less of something else.

Clearly there are times when an aggregate approach could be misleading. To leap from the fact that one can demonstrate pathological special cases to the idea that aggregate models are dangerous strikes me as a gross overstatement. Is the danger of aggregating agents really any greater than the danger of omitting feedback by reducing scope in order to enable modeling disaggregate agents? Hopefully this talk will illuminate some of the ways that one might think about whether a situation is dangerous or not, and therefore make informed choices of method and tradeoffs between scope and detail.

Also, models seldom inform policy directly; their influence occurs through improvement of mental models. Agent models could have a lot to offer there, but I haven’t seen many instances where authors developed the lingo to communicate insights to decision makers at their level. (Examples appreciated – any links?) That relegates many agent models to the same role as other black-box models: propaganda.

It’s strange that Axtell is picking on SD. Why not tackle economics? Most economic models have the same aggregation issues, plus they assume equilibrium and rationality from the start, so any representational problems with SD are greatly amplified. Plus the economic models are far more numerous and influential on policy. It’s like Axtell is bullying the wimpy kid in the class, because he’s scared to take on the big one who smokes at recess and shaves in 5th grade.

The sad thing about this confrontational framing is that SD and agent based modeling are a match made in heaven. At some level disaggregate models still need aggregate representations of agents; modelers could learn a lot from SD about good representation of behavior and dynamics, not to mention good habits like units checking that are seldom followed. At the same time, SD modelers could learn a lot about emergent phenomena and the limitations of aggregate representations. A good example of a non-confrontational approach, recognizing shades of gray:

Heterogeneity and Network Structure in the Dynamics of Diffusion: Comparing Agent-Based and Differential Equation Models

Hazhir Rahmandad, John Sterman

When is it better to use agent-based (AB) models, and when should differential equation (DE) models be used? Whereas DE models assume homogeneity and perfect mixing within compartments, AB models can capture heterogeneity across individuals and in the network of interactions among them. AB models relax aggregation assumptions, but entail computational and cognitive costs that may limit sensitivity analysis and model scope. Because resources are limited, the costs and benefits of such disaggregation should guide the choice of models for policy analysis. Using contagious disease as an example, we contrast the dynamics of a stochastic AB model with those of the analogous deterministic compartment DE model. We examine the impact of individual heterogeneity and different network topologies, including fully connected, random, Watts-Strogatz small world, scale-free, and lattice networks. Obviously, deterministic models yield a single trajectory for each parameter set, while stochastic models yield a distribution of outcomes. More interestingly, the DE and mean AB dynamics differ for several metrics relevant to public health, including diffusion speed, peak load on health services infrastructure, and total disease burden. The response of the models to policies can also differ even when their base case behavior is similar. In some conditions, however, these differences in means are small compared to variability caused by stochastic events, parameter uncertainty, and model boundary. We discuss implications for the choice among model types, focusing on policy design. The results apply beyond epidemiology: from innovation adoption to financial panics, many important social phenomena involve analogous processes of diffusion and social contagion. (Paywall; full text of a working version here)
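The core of the comparison is easy to sketch. Below is a minimal stand-in (not the authors’ code) contrasting a deterministic SIR compartment model with a stochastic individual-level version under identical parameters and perfect mixing. The mean agent-based final size falls short of the DE prediction because some stochastic runs go extinct early.

```python
import numpy as np

# Minimal stand-in for the paper's comparison: deterministic SIR compartment
# (DE) model vs. a stochastic individual-level version, same parameters,
# perfectly mixed. (The paper also varies network structure and individual
# heterogeneity, which this sketch omits.)
N, beta, gamma, days = 500, 0.3, 0.1, 200   # population, contact rate, recovery rate

def sir_de():
    """Deterministic compartment model, Euler integration, dt = 1 day."""
    S, I, R = N - 1.0, 1.0, 0.0
    peak = 0.0
    for _ in range(days):
        new_inf = beta * S * I / N
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        peak = max(peak, I)
    return peak, R

def sir_ab(rng):
    """Stochastic individual-level version: binomial draws each day."""
    S, I, R = N - 1, 1, 0
    peak = 0
    for _ in range(days):
        new_inf = rng.binomial(S, 1 - np.exp(-beta * I / N))  # per-susceptible risk
        new_rec = rng.binomial(I, 1 - np.exp(-gamma))
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        peak = max(peak, I)
    return peak, R

rng = np.random.default_rng(1)
runs = [sir_ab(rng) for _ in range(200)]
peak_de, final_de = sir_de()
extinct = sum(final < 10 for _, final in runs)
print(f"DE: peak {peak_de:.0f}, final size {final_de:.0f}")
print(f"AB: mean peak {np.mean([p for p, _ in runs]):.0f}, "
      f"mean final size {np.mean([f for _, f in runs]):.0f}, "
      f"extinct in {extinct}/200 runs")
```

Even with identical parameters, the AB mean differs from the DE trajectory because early stochastic extinction is possible, which is one of the paper’s “differences in means” in miniature.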

Details, in case anyone reading can attend – report back here!

Thursday, October 21 at 6:00 – 8:00 PM ** New Time **

Networking 6:00 – 6:45 PM (light refreshments)

Presentation 6:45 – 8:00 PM

Free and open to the public

** NEW Location **

Booz Allen Hamilton – Ballston-Virginia Square

3811 N. Fairfax Drive, Suite 600

Arlington, VA 22203

(703) 816-5200

Between Virginia Square and Ballston Metro stations, between Pollard St. and Nelson St.

On-street parking is available, especially on 10th Street near the Arlington Library.

There will be a Booz Allen representative at the front of the building until 7:00 to greet and escort guests, or call 703-627-5268 to be let in.

RSVP by e-mail to Nicholas Nahas, nahas_nicholas@bah.com, in order to have a rough count of attendees prior to the meeting. Come anyway even if you do not RSVP.

By METRO:

Take the Orange Line to the Ballston station. Exit Metro Station, walk towards the IHOP (right on N. Fairfax) continue for approximately 2-3 blocks. Booz Allen Hamilton (3811 N. Fairfax Dr. Suite 600) is on the left between Pollard St. and Nelson St.

OR Take the Orange Line to the Virginia Square station. Exit Metro Station and go left and walk approximately 2-3 blocks. Booz Allen Hamilton (3811 N. Fairfax Dr. Suite 600) is on the right between Pollard St. and Nelson St.

Brian Eno, meet Stafford Beer

Brian Eno reflects on feedback and self-organization in musical composition, influenced by the organization of complex systems in Stafford Beer’s The Brain of the Firm.

Stafford Beer was a member of the cybernetics thread of systems thought (if that sounds baffling, read George Richardson’s excellent book on the evolution of thinking about systems).

Interactive diagrams – obesity dynamics

Food-nutrition-health-exercise-energy interactions are an amazing nest of positive feedbacks, with many win-win opportunities, but more on that another time.

Instead, I’m hoisting an interesting influence diagram about obesity from the comments. At first glance, it’s just another plate of spaghetti.

ForesightObesity

But when you follow the link (do it now), there’s an interesting innovation: the diagram is interactive. You can zoom, scroll, and highlight particular sectors and dynamics. There’s some narrative here and here. (Update: the interactive link seems to be down, but the diagram is still here: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/295153/07-1177-obesity-system-atlas.pdf)

It took me a while to decide whether I’d call this a causal loop diagram or not. I think the primary distinction between a CLD and other kinds of mindmaps or process diagrams is the use of variables. On a CLD, each label represents a quantity that can vary, with a definite direction – TV Watching, Stress, Use of Medicines. Items on other kinds of diagrams might represent events or fuzzier constellations of concepts. This diagram doesn’t have link polarities (too bad) or loop polarities (which would be pretty incomprehensible anyway), but many other CLDs also avoid such labels for simplicity.

I think there’s a lot of potential for further exploration of this idea. There’s a lot you could do to relate structure to behavior, or at least to explain the rationale for structure (both shortcomings of the diagram). Each link, for example, could have its tale revealed when clicked, and key loops could be animated individually, with stories told. Drill-down could be extended to provide links between top-level subsystem relationships and more microscopic views.

I think huge diagrams like the one above are always going to be overwhelming to a layperson. Also, it’s hard to make even a small CLD good, so making a big one really accurate is tough. Therefore, I’d rather see advanced CLD presentations used to improve the communication of simpler stories, with a few loops. However, big or small, there might be many common technological benefits from dedicated diagramming software.

Gallatin County's Zoning Enforcement Trap

I’m playing a big role in a local effort to get the regulations of our zoning district enforced in the case of an egregious violation. Our planning and zoning commission’s habit, and apparent preference in this case, is not to enforce. Instead, it is proposed to enable the violation through a PUD amendment, and issue a trivial fine ($200, or 0.2% of the stated value of the structure).

Unfortunately, this proposal is illegal, because it contradicts existing covenants and a variety of goals and specific provisions of our General Plan and Zoning Regulation. This action might make sense if it were a naked political ploy to undermine the zoning through administrative rather than legislative means, which I hope is not the case. I think it is more likely an effort to “play nice” with violators and to avoid costly enforcement action.

If so, the resulting weak enforcement posture is a short-sighted avoidance of conflict that encourages far more problems in the long run. As the diagram below illustrates, backing down on the case at hand solves the immediate problem, but has terrible consequences.

Enforcement Dynamics

  • The precedent for non-enforcement and amendments to legalize violations erodes the legal basis for future enforcement actions.
  • Accommodation creates an expectation of forgiveness, encouraging owners and builders to violate in the future.
  • Exceptions created to accommodate violations make planning documents and title histories more complex, creating more opportunities for errors.

These side effects of lax enforcement accumulate. As violations mount, time that could be spent on productive activity (ensuring a thorough permitting process, or revising zoning regulations to clarify standards and streamline processes) gets squeezed out by time wasted on enforcement.

These reinforcing feedbacks create a deadly trap, into which the unsuspecting can easily step. Once triggered, the vicious cycle creates more pressure to relax enforcement standards, capturing the county in an undesirable equilibrium with many violations and no meaningful enforcement. Ultimately, the citizens (who initiated the zoning district) suffer from the side effects of density granted to violators but unavailable to those who comply with the law.

Fortunately, with a little fortitude, the process can be reversed. A single forceful enforcement action has a salutary effect on expectations, stemming the tide of violations and freeing up time for the improvement of regulations. There’s still the hangover of side effects of past accommodation to contend with, but surely the withdrawal is better than the addiction to accommodation.
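The loop structure in the diagram is simple enough to simulate. Here’s a toy stock-and-flow sketch, with all parameters invented for illustration: violations accumulate as a stock, and the expectation of forgiveness, which adjusts toward the observed non-enforcement fraction, drives the violation rate.

```python
# Toy stock-and-flow sketch of the enforcement trap; all parameters are
# invented for illustration. The reinforcing loop: weak enforcement raises
# the expectation of forgiveness, which raises the violation rate, which
# swamps enforcement capacity further.
def simulate(enforce_fraction, years=30, dt=0.25):
    violations = 5.0        # outstanding violations (stock)
    expectation = 0.2       # perceived odds of getting away with it (0-1)
    history = []
    for _ in range(int(years / dt)):
        new_violations = 2.0 * (1 + 4 * expectation)   # violations/year
        resolved = enforce_fraction * violations       # violations/year
        # expectation drifts toward the fraction of violations left
        # unresolved, over a ~2-year adjustment time
        expectation += ((1 - enforce_fraction) - expectation) * dt / 2.0
        violations += (new_violations - resolved) * dt
        history.append(violations)
    return history

weak = simulate(enforce_fraction=0.1)   # resolve 10% of open violations/year
firm = simulate(enforce_fraction=1.0)   # resolve essentially all of them
print(f"after 30 years: weak enforcement -> {weak[-1]:.0f} open violations; "
      f"firm -> {firm[-1]:.0f}")
```

With these made-up numbers, the weak-enforcement county converges toward a high-violation equilibrium an order of magnitude worse than the firm one: the trap in the diagram, and also the reason a single forceful action that resets expectations pays off out of proportion to its immediate cost.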