Austin’s policy deliberations often feature counterproductive uses of data. This practice – which I refer to as “data theatre” – aims to establish a veneer of credibility and objectivity for its purveyors by wrapping policy recommendations in charts, calculations, and allusions to best practices. But in reality, it’s just an exercise in storytelling where facts are incidental rather than the foundation for recommendations.
Data theatre is a problem: local government is less effective because resources are not deployed in an optimal way and policy is not designed to effectively address meaningful problems. It is practiced by civil servants, advocates in civil society, and journalists. We all do it and we all should help each other stop.
Here are some illustrative examples.
- During the debate over whether to build the WTP4 treatment plant, significant amounts of documentation and data were thrown around, but, with one notable exception, it was noise used by partisans in the growth vs. no-growth debate to justify existing positions. There was never any analysis that laid out ‘this is the menu of water capacity-generating options and how much they cost per unit of capacity’.
- I’ve previously discussed how our urban rail decision-making does not feature rigorous fact-based benchmarks or comparisons.
- A recent report prepared by the Police Executive Research Forum determined our required police staffing levels not by calculating the marginal public safety value of police expenditures but instead by relying on whether existing officers deem themselves busy.
It’s not just official reports; it’s also our local journalism. Even when our local press corps does a great job of explaining broad trends, quantitative context is rarely established and proposed solutions are often left vague.
My aspiration is that Austin develops a data-driven culture by creating some shared standards around empiricist policy deliberation. This might sound strange at first but it’s actually something we do in many other areas of human endeavor: a stock’s price-earnings ratio, a quarterback’s completion percentage or passer rating, a snack’s calorie count within its nutritional information. Below are some preliminary recommendations for standards.
1. Open data, open algorithms, and easily accessible tools.
The overwhelming majority of reports and other Austin policy research I’ve consumed do not provide source data in an accessible way that allows replication or evaluation. Worse, the key algorithms that crunch granular-level data to arrive at some projection or optimization are often hidden, proprietary, or poorly explained. Reports and their accompanying data tables are routinely doled out solely as PDFs, which are quite labor-intensive (and error-prone!) to convert into structured data for analysis.
This is not how people who want to engage or persuade should act. I am disappointed but not surprised when a neighborhood group or advocacy organization does it. Their goal is not the broad common good. We should still call them out on twisting the data (and applaud them when they go open), but I get that shaping the data to make their case is aligned with their mission.
But it is completely unjustifiable when public agencies fail to do everything feasible to open up data and algorithms. It’s also very unfortunate that procurement is rarely sensitive to this requirement and often fails to request that more complicated analytical work be done with free, open-source technologies. It’s hard for the average citizen to hold staff or consultant work products accountable when doing so would require expensive software licenses, or when the product features a proprietary black box whose internals can’t be shared.
If a report doesn’t provide source data and explain its methodology, then it should not be taken seriously. It’s probably just data theatre.
2. Clearly define units. They should be relevant to public value. Place them in context. Prefer ratios.
Too many discussions announce desired outcomes (“make growth pay for itself!”) without defining measurable units to determine the impact of a specific policy on the target outcome. For example, what precisely is meant by “improving housing affordability”? Is it reducing the count of Austinites paying more than a third of their income on housing? Decreasing the median rent or purchase cost per square foot to a certain benchmark?
Units should be relevant to public value. For example, focusing on a benchmark of how many patrol officers we need per capita is not directly related to the actual public value we are interested in determining: decreased property and violent crime per additional dollar invested. The underlying rationale for a benchmark is that communities are comparable and there is collective wisdom in matching the benchmark. But that is tangential to determining the diminishing returns from additional public safety investment.
Units should be contextualized. For example, Austin’s housing journalists will cover this or that project or development and its unit count. But it is rare that an article will contextualize those units. It’s not that hard to include a basic calculation pointing out that in a city with a population of 800,000, growing at 2% per year and averaging 2 people per unit, it would take roughly 670 new units per month just to keep up with growth. Similarly, for the upcoming housing bond, the total of approximately 3,600 housing units created or refurbished is rarely broken out over the six years that the program will be in operation. Doing so would create the more accurate impression that only about 600 units per year are being financed, a good chunk of which are refurbishments. And I have yet to see an accounting of the subsidy per unit over its lifetime.
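For the curious, here is a minimal sketch of that back-of-the-envelope calculation. The population, growth rate, and household-size figures are the illustrative assumptions from the paragraph above, not official statistics:

```python
# Back-of-the-envelope check of the "units needed to keep up with growth" figure.
# All inputs are illustrative assumptions, not official statistics.

population = 800_000          # current city population
annual_growth_rate = 0.02     # 2% population growth per year
people_per_unit = 2           # average residents per housing unit

new_residents_per_year = population * annual_growth_rate          # 16,000 people
units_needed_per_year = new_residents_per_year / people_per_unit  # 8,000 units
units_needed_per_month = units_needed_per_year / 12               # ~667 units

print(f"New units needed per month just to keep up: {units_needed_per_month:.0f}")
```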
The most useful units are often ratios, because they reflect a higher order of analysis such as optimization. They also prevent politically convenient platitudes. For example, many civic activists will indicate that they support “more density” as long as it is “reasonable”. Some high-profile neighborhood leaders will argue that they support “additional multi-family development” where “it makes sense”. But these statements should not be taken seriously unless accompanied by actual ratios describing the share of housing they believe should be in that category. If some group wants additional “affordable” two-bedroom units in their neighborhood, what exactly do they want the ratio to be? What housing mix do they envision would allow that? Without these specifics, it is quite easy for policies that are essentially stridently NIMBY and zero-growth to masquerade as reasonable compromises.
3. Avoid monocausality in favor of multiple key drivers.
During the WTP4 debate, water demand was modeled simply as a coefficient to population growth. It is intuitive, but obviously reduces everything to “will there be more people tomorrow than today? Well, if so, I guess we need more treatment capacity!” But what about water pricing? The mix of housing? Changing industry? Regulations around lawn watering? Climate change? A more serious effort to develop real projections would have been more careful to consider the multiple drivers that impact water demand.
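To make the contrast concrete, here is a toy sketch of the difference between a monocausal projection and one that weighs several drivers. The driver names and every coefficient are invented for illustration and are not taken from any actual WTP4 model:

```python
# Toy contrast between a single-coefficient demand projection and a
# multi-driver one. Every number here is invented for illustration only.

def demand_monocausal(population, gallons_per_person_per_day=140.0):
    """Water demand as nothing more than a coefficient times population."""
    return population * gallons_per_person_per_day

def demand_multidriver(population, price_per_kgal, multifamily_share,
                       watering_days_per_week):
    """Hypothetical model where price, housing mix, and watering rules matter."""
    base = 140.0                                    # gallons/person/day baseline
    price_effect = -4.0 * price_per_kgal            # demand falls as price rises
    housing_effect = -30.0 * multifamily_share      # less outdoor use in multifamily
    watering_effect = 5.0 * watering_days_per_week  # lawn rules shift demand
    per_capita = base + price_effect + housing_effect + watering_effect
    return population * per_capita

# Same population, very different answers once the other drivers move.
print(demand_monocausal(900_000))
print(demand_multidriver(900_000, price_per_kgal=5.0,
                         multifamily_share=0.45,
                         watering_days_per_week=1))
```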
Reports or white papers that try to tag some individual factor (e.g. single-family homes with five or six students living in them) as the sole or dominant driver of an outcome (e.g. perceptions of neighborhood ‘change’) without weighting the impact of other relevant drivers warrant deep skepticism. While one factor may contribute to an outcome, without examining other drivers it is hard to assess whether the one receiving the spotlight is even a meaningful driver. The failure to consider other drivers and alternative explanations likely indicates that the research is theatre intended to advance a pre-determined agenda.
4. Make the best case for alternative scenarios that embrace variance.
A ‘good’ policy is not necessarily the ‘best’ policy or an optimal allocation of resources for achieving a set of outcomes. For example, building urban rail from Mueller to the Central Business District is viewed by some as a ‘good enough’ urban rail route. It certainly has virtues. But is it the optimal initial urban rail sequence? And what if the assumptions behind the models supporting it as the best initial sequence shift or change?
Incorporating alternative scenarios that consider likely variations in key drivers is fundamental to data-driven decision-making. Gathering the strongest evidence for a pre-determined outcome while excluding other options is data theatre. Excluding alternatives helps advocates of a specific outcome label them as uncertain or risky simply because no analysis of them exists, even though it is often relatively easy to calculate alternatives once the initial model is set up. Further, failing to embrace the volatility inherent in projections based on samples and statistical methods can make highly speculative estimates seem as reliable as facts that are fairly certain.
For example, it is much harder to predict the future population of transit origins and destinations than it is to determine the current ridership of specific transit services or the existing population of census tracts. Yet this information is routinely presented in local documents and discussions without appropriate confidence intervals. That oversight makes it appear that our future projections are just as accurate as our existing data. This is not the case. The device that counts how many people swipe a card to get on a bus is likely to be a lot more reliable than some estimate of how many people will live in a census tract two or three decades from now.
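As a toy illustration of that difference (every figure below is invented for the example), a measured count can be reported as a point value, while a decades-out projection deserves an explicit range:

```python
# Toy illustration: a measured count vs. a long-range projection with an
# explicit uncertainty range. Every figure here is invented for the example.

measured_boardings_today = 12_400        # fare-box count: essentially exact

projected_tract_pop_2045 = 38_000        # point estimate from some model
margin_of_error = 9_000                  # plausible uncertainty decades out

low = projected_tract_pop_2045 - margin_of_error
high = projected_tract_pop_2045 + margin_of_error

print(f"Boardings counted today: {measured_boardings_today:,}")
print(f"Projected 2045 tract population: {projected_tract_pop_2045:,} "
      f"(roughly {low:,} to {high:,})")
```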
This is not an exhaustive list of potential standards; hopefully, others will identify blind spots and share their ideas.
Institutionalizing standards will take some time, but there is a clear path to do so. In civil society, we can consistently request that reports and work products follow these standards and publicly ‘grade’ others and ourselves on these standards. Within government, policy makers and top-level civil servants can request that staff-prepared documents and RFPs for external vendors follow standards.
I am sure others would have even better institutionalization ideas. So, the means of institutionalizing better data-driven decision-making are not hard to figure out. What is needed is the will to do so.
I think one of the keys here is: why do people do the theater? One thing that’s clear is that theater gets a lot of uncritical media coverage. There’s been a real revolution in science reporting in the last decade or so, where science/medicine reporters try to clearly answer some questions like: What does it mean? What comes next? Who funded it? Where can I read more? I think a clear set of guidelines for reporters reporting on studies is in order here.
A few of us on twitter hounded some reporters for a copy of the EGRSO report on urban rail, only to find out it had not been publicly released, only the PDF of the presentation. A simple sentence included in every article that says whether and where you can read the full report would be very helpful: “A copy of the report is available from the City of Austin website *here*” or “The full report has not been released.”
Another guideline: always do your best to place a study in context. When the EGRSO report on urban rail comes out and reports a $30B impact, place it up against the other studies claiming a $5B impact. Ask what assumptions the reports differ on, etc.
One of my many unwritten blog postings kind of goes along these lines – but also chides you impending unworthy tiara-thieves as well. Basically, we’re pushing for a data-driven process in regards to urban rail, but most of y’all are willing to swallow the “data theatre” in regards to Rapid Bus (not you in particular, Julio, as your article on it was quite well done).
If we’re not willing to accept “trust us, it’ll be great” when it comes to rail to Mueller and we want to see how many people will be boarding under various scenarios, for instance, then we shouldn’t accept “trust us, it’ll be great” as a reply to those (like me) who point out that Rapid Bus isn’t any faster or more frequent than existing service, right?