The Cronk Capacitor

Without a validated housing capacity model, land development code reforms might not add up to what we think they will.

[Image: A presentation slide from Fregonese Associates highlighting their model’s approach to defining potential housing development]

Austin’s City Manager, Spencer Cronk, recently re-started the seemingly eternal process of reforming the city’s land development code by sending the Austin City Council a memo that delivered some realtalk and asked for some clear decisions on a variety of foundational policy topics affecting the code.

The code largely helps determine our community’s approach to meeting regional demand for housing and commercial space in Austin, and therefore impacts almost all of the most pressing local policy issues: how much housing costs, what transportation options we can use to move around, flooding risks, and whether or not we can have economically- and racially-integrated neighborhoods.

As the CodeNEXT process starts up again, the City Manager and policymakers should address a critical leftover problem: the City does not possess a validated model to assess how much housing we can reasonably expect as a result of code changes.

To put it in simpler terms, Council is going to try to finally land the CodeNEXT plane without reliable instruments. In a dense fog. With crosswinds.

To understand how poor our instrumentation is, we need to talk about the concept of “zoning capacity”.

During the previous phases of CodeNEXT, a firm called Fregonese Associates (“Frego” in City Hall parlance) utilized permutations of their “Envision Tomorrow” suite (ET) to determine “zoning capacity”.  

Now, when I (and many other housing professionals and zoning enthusiasts) use the term “zoning capacity”, it refers to the maximum development that could be built on a land parcel – regardless of economics – without requesting special permission from City Council.

ET does not calculate that version of zoning capacity.

The ET variants we’ve seen so far during CodeNEXT allocate a pre-computed level of regional economic activity – based on countless inputs, mostly derived from past economic performance – across parcels. The ET model calculates how much could be built on each parcel in the city given the economics generated by model inputs, some of which are coarse-grained estimates of real estate and construction costs. Frego’s consultants (and their ET product site) refer to this as “painting”. Here’s a slide from one of their presentations that explains what is at the core of the method they used to assess development feasibility for a parcel:

[Image: A Fregonese presentation slide outlining the development economics at the core of the CodeNEXT model]

The Frego team referred to their financial feasibility estimates as “capacity” and repeatedly indicated that they were not an actual forecast. If there were a recession or a change in consumer preferences, the economics and inputs that drove their “painting” would change, and what would be feasible for each parcel would change along with it. Their model was not predictive because it didn’t account for the business cycle or consumer trends. It wasn’t saying what would happen.
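To make the feasibility idea a bit more concrete: the kind of parcel-level check the slide above describes boils down to asking whether what a project could sell or rent for covers its costs plus a developer’s return. The snippet below is a toy sketch of that pro-forma logic under made-up numbers – it is emphatically not the ET model, and every name and figure in it is hypothetical.

```python
# Toy pro-forma feasibility check -- an illustration of the concept only,
# not Envision Tomorrow's actual calculations. All names and figures are hypothetical.

def is_feasible(units, value_per_unit, land_cost, hard_cost_per_unit,
                soft_cost_share=0.25, target_margin=0.15):
    """Does projected value cover land, construction, soft costs, and a return?"""
    value = units * value_per_unit
    hard_costs = units * hard_cost_per_unit
    soft_costs = hard_costs * soft_cost_share       # fees, financing, design, etc.
    total_cost = land_cost + hard_costs + soft_costs
    return value >= total_cost * (1 + target_margin)

# Example: a parcel "painted" with a small apartment prototype.
# With these made-up numbers the project doesn't pencil, so a model like this
# would not allocate the prototype's units to the parcel.
print(is_feasible(units=12, value_per_unit=300_000,
                  land_cost=900_000, hard_cost_per_unit=180_000))   # False
```

ET’s actual machinery is far more elaborate than this, but the pass/fail economics at each parcel is the core idea the slide is describing.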

The infographic below, which was presented by Fregonese and linked to in Cronk’s let’s-stop-messing-around memo, explains the difference between the conventional view of zoning capacity (the cream-colored bar on the far left) and Frego’s situational, economics-aware version produced by ET.

[Image: The Envision Tomorrow infographic contrasting conventional zoning capacity with ET’s economics-aware version]

So, what’s the problem? Well, let’s title it the “scatter plot problem”.

A straightforward and layperson-friendly way to check whether a model is good at estimating what it is supposed to estimate is to line up its outputs with what actually ended up happening. In the case of the ET model, that would mean comparing the expected development on each parcel against the actual development. Each parcel would be an individual point in a scatter plot. If the estimates track the observed results, the points would cluster along the 45-degree line where estimated equals observed; the further the cloud strays from that line, the worse the model.

[Image: illustrative scatter plots]

ET’s scatter plot problem is that it appears no one has actually done the scatter plot!
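Doing it wouldn’t even be hard. Here’s a minimal sketch of what the check could look like, assuming a hypothetical table of parcels with one column for the model’s estimate and one for what was actually built (the file and column names are invented for illustration):

```python
# Minimal sketch of the "scatter plot check": model estimates vs. observed outcomes.
# The file and column names below are hypothetical -- the point is the method.
import pandas as pd
import matplotlib.pyplot as plt

parcels = pd.read_csv("parcel_outcomes.csv")    # one row per parcel
estimated = parcels["et_estimated_units"]        # what the model said could be built
observed = parcels["units_actually_built"]       # what actually got built

plt.scatter(estimated, observed, alpha=0.3, s=10)
lim = max(estimated.max(), observed.max())
plt.plot([0, lim], [0, lim], color="red")        # the 45-degree "perfect model" line
plt.xlabel("Estimated units (model)")
plt.ylabel("Observed units (built)")
plt.title("Does the model line up with reality?")

# A simple summary of fit: correlation between estimates and outcomes
print(parcels[["et_estimated_units", "units_actually_built"]].corr())
plt.show()
```

If the points hug the diagonal, the model is earning its keep; if they form a shapeless cloud, it isn’t.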

While specific worksheets and functions within the Excel version of ET are based on academic research, the whole, massive thing doesn’t appear to have been validated. And during the previous iterations of CodeNEXT, the ET model seemed to be repeatedly adjusted as a result of lobbying, making it depart even further from its research-based core.

Look again at the bottom part of the housing capacity presentation slide I showed above. “Capacity = 2x Forecast (Or More),” it reads.

The “(Or More)” should grab your attention.

Just going from “2x” to “2.5x” means going from needing 270,000 units of new development capacity to 337,500 in order to meet the Strategic Housing Blueprint’s goal of 135,000 new units. At “3x” it’s 405,000 units. If you followed the consultant and policymaker conversations about zoning capacity as they were happening, it became clear that we hadn’t invested the staff and consultant time necessary to develop a convincing case for a specific multiplier based on real-world data.
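The arithmetic behind those numbers is trivial – it’s the multiplier that is doing all the work. Here is the whole calculation, using the 135,000-unit Blueprint goal implied by the 2x → 270,000 figure:

```python
# How the capacity target moves with the assumed multiplier.
BLUEPRINT_GOAL = 135_000   # the Strategic Housing Blueprint's new-unit goal

for multiplier in (2.0, 2.5, 3.0, 4.0):
    needed_capacity = int(BLUEPRINT_GOAL * multiplier)
    print(f"{multiplier}x forecast -> {needed_capacity:,} units of zoning capacity")

# 2.0x -> 270,000    2.5x -> 337,500    3.0x -> 405,000    4.0x -> 540,000
```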

This bewildering fuzziness about the capacity multiplier shows up in the Cronk memo. “While there is no definite rule regarding how much housing capacity should exceed the desired number of new units, providing for little excess capacity is generally regarded as insufficient,” it reads.

As a way of checking the reasonability of the capacity figures being thrown around during the now-deceased CodeNEXT, I adjusted the Strategic Housing Blueprint’s goal upwards towards a more realistic number and looked at the capacity multipliers adopted by Seattle (3x) and Los Angeles (4x) during their own affordability crunches. It became clear that the needed zoning capacity (in the conventional, non-ET version of the term) could easily exceed 1 million new units of capacity. Right now, the ET-flavored zoning capacity number that anchors City Council’s expectations is closer to 260,000 new units of capacity.
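For the skeptical, here is the back-of-the-envelope version of that claim, using only numbers already in this post. It solves for how large the housing goal would have to be for a Seattle- or LA-style multiplier to imply more than 1 million units, and sizes the gap against the roughly 260,000-unit anchor:

```python
# Back-of-the-envelope check on the "easily exceed 1 million units" claim.
TARGET_CAPACITY = 1_000_000
CURRENT_ANCHOR = 260_000   # roughly where Council's expectations are anchored today

for city, multiplier in (("Seattle", 3), ("Los Angeles", 4)):
    implied_goal = TARGET_CAPACITY / multiplier
    print(f"At {city}'s {multiplier}x, a goal above {implied_goal:,.0f} units "
          f"implies more than {TARGET_CAPACITY:,} units of needed capacity")

print(f"That is {TARGET_CAPACITY - CURRENT_ANCHOR:,} units beyond the current anchor.")
```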

It’s quite possible that Council Members are aware of the potentially faulty analytical tools they have, but are content knowing that they are at the very least directionally correct. Enabling more units is good enough, even if they are insufficient to meet the Blueprint’s conservative goal.

But it would be tragic for the community to have invested all of this time, endured so many political skirmishes, and navigated “tough votes” only to fail because of a lack of attention to models and benchmarks. Hopefully, pro-housing and/or empiricist Council Members will request the City Manager provide a validated housing capacity model, as well as afford him the space to dramatically depart from the existing anchoring around a 2x multiplier, if that’s what the evidence indicates is necessary.
