Tuesday, January 30, 2018

Who Got It Right?


Photo Credit: The Wrap (www.thewrap.com)

At the end of each year (or the start of a new year), pundits and practitioners in various fields offer forecasts for the following 12 months. A search on Google for “Predictions for 2018” produces over 18,600,000 results, with the first page containing forecasts on economics, financial markets, science, technology, politics, sports, national security and much more.

How accurate are these predictions? How often do various commentators get things right (or, more often, wrong)?

The recently passed GOP tax plan will “revitalize” the US economy, according to the Heritage Foundation. The Center for Economic and Policy Research disagrees, warning that the plan is “not reform, and does not benefit the middle class.” Who will be proven correct in the end?

Today, Elon Musk warns of the risks posed by artificial intelligence, while Mark Zuckerberg and Ray Kurzweil are quite optimistic. The verdict will be issued in the coming decades.

Nate Silver, one of the most accurate American election forecasters of the modern era, gave Donald Trump just a 28% chance of winning the 2016 election. Silver was not as far off as some other analysts, many of whom thought Jeb Bush would be the eventual Republican nominee.

In 2005, then-Federal Reserve chair Alan Greenspan argued that derivatives, the financial instruments which played a starring role in the 2008 financial meltdown, were “key factors underlying the remarkable resilience of the banking system.” Even earlier, in 2002, Joseph Stiglitz, a prominent Columbia University economist who had recently won the Nobel Prize, declared that, based on “millions of potential scenarios,” the risk of collapse at government-sponsored mortgage giants Fannie Mae and Freddie Mac was minuscule. Six years later, both institutions were on the brink of failure and were placed in conservatorship.

Yet, not everyone got it wrong. Other observers foresaw the events of 2008, including New York University economist Nouriel Roubini; Dean Baker, an economist with the Center for Economic and Policy Research; Nassim Taleb, an academic and investor; and hedge fund managers John Paulson, Michael Burry and Steve Eisman.

In 2003, Thomas Friedman, a New York Times columnist and prominent Middle East analyst, argued in his column that the proposed US invasion of Iraq was appropriate, anticipating few of the consequences that would later follow. Around the same time, Brent Scowcroft, who served as National Security Advisor to Presidents Ford and George H.W. Bush, warned against the war in the Wall Street Journal. Almost 15 years later, it’s clear Scowcroft got far more right.

None of this is meant to confer blanket praise (or unfair condemnation) on any individual named above. Each offers unique intellectual and analytical merits, as well as shortcomings. The same is true of most who make predictions, in any field.

The world is a complex, interconnected, random place. Looking ahead, and correctly seeing what will happen, is inherently challenging. Over the long run, each person’s crystal ball becomes at least a little murky, no matter how talented one might be.

Yet, when considering what someone sees in store for the future, it is crucial to keep track, over a long period of time, of who gets what right, and why. Why bother?

First, such a track record allows us to assess whether a particular commentator or observer is worth listening to when it comes to his or her forecasts of future events. Someone who writes compellingly, but is consistently wrong when looking into the future, might be interesting to read, but shouldn’t be taken seriously as a forecaster. Style must be accompanied by measurable substance.

It is also worth noting what, specifically, particular observers get right (or wrong). Let’s say your favorite newspaper’s opinion writer hits the bull’s eye when analyzing how tax code changes affect economic growth, or how health care reforms will affect medical costs and access. Now, suppose this same commentator offers predictions on events in Syria, or the future of US-China trade relations, and misses by a mile.

Should we categorically say this individual is a poor analyst of the future, and not worth listening to? Not at all. Rather, he or she simply has a stronger grasp of domestic and economic matters than of foreign affairs. Perhaps we should keep reading this writer’s pieces on American public policy, but skip to the cartoons when he or she muses on the likely fate of Nicolas Maduro’s government in Venezuela. Domain expertise is real.

Lastly, it is also useful to consider how each observer reaches conclusions. I’ve written about the merits of probabilistic thinking, including Bayesian statistical analysis, whose most prominent champion is Nate Silver. Others have questioned whether Bayesian analysis is quite as effective as Silver asserts.
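As a rough illustration of what Bayesian analysis involves, here is a minimal sketch in Python, with entirely made-up numbers: start with a prior belief about an event, then revise it as new evidence (say, a fresh poll) arrives. The function and figures below are hypothetical, not drawn from Silver’s actual models.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical example: a candidate starts with a 30% chance of winning,
# and a favorable poll is twice as likely if the candidate is truly ahead.
posterior = bayes_update(prior=0.30, p_evidence_if_true=0.6, p_evidence_if_false=0.3)
print(round(posterior, 2))  # 0.46 -- the forecast shifts, but not to certainty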

Some analysts argue Marxist theory is a strong method of predicting future economic trends, while many economists advocate regression analysis. Others employ a range of methods, including trend analysis, environmental scanning, simulations, and more. Some thinkers, most notably Nassim Taleb, have warned that most predictive models are flawed, because they fail to account for “black swan” events, i.e. those with a low probability but an outsized impact.

It would be instructive to track various forecasts, over the short and long term, and classify them (when possible) according to the methods of prediction used. In many instances, forecasters don’t tell us exactly how they reached their conclusions. In other cases, however, they explicitly tell us why certain events are likely to occur. Over time, we might start to see patterns in when, and with which methods, forecasters tend to get things right, or not.


For these reasons, I suggest creating a comprehensive, searchable database. Let’s call it Prediction Scorecard. Prediction Scorecard will be a part-time project, staffed by data scientists who categorize and analyze large volumes of data, and by veteran journalists who cover the fields where forecasts are reviewed and can put matters into context.

Funding could come from one of the many foundations and projects dedicated to investigative journalism, or it could be a joint effort of several newspapers or media entities. Prediction Scorecard keeps track of the predictions of commentators who have made written (or video) forecasts in a variety of fields, including, but not limited to: domestic and international politics, science, technology, economics, and financial markets. For each observer, forecasts are recorded in a declarative manner, with a link to the applicable articles, books, or public pronouncements.
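To make the idea concrete, here is one possible sketch, in Python, of the kind of record Prediction Scorecard might store for each forecast. The field names are my own invention, purely illustrative, not a finished design.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PredictionRecord:
    commentator: str              # who made the forecast
    domain: str                   # e.g. "financial markets", "foreign affairs"
    statement: str                # the prediction, stated declaratively
    reasons: list[str]            # the forecaster's stated rationale, if given
    method: Optional[str]         # e.g. "extrapolation", "Bayesian", if disclosed
    source_url: str               # link to the article, book, or public pronouncement
    date_made: date               # when the prediction was published
    resolves_by: date             # when the prediction can be judged
    outcome: Optional[str] = None # filled in later: "correct", "partly correct", "wrong"
    reviewer_notes: str = ""      # journalists' context on any judgment calls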

Let’s take the example of John Simpson, a (fictional) financial writer for the Financial Times. His entry might read: “Simpson believes the US banking system is managing risk effectively, so few major banks will suffer large losses within the next 5 years,” along with the reasons for this belief. A reason could be something like: “Simpson thinks banks have made lasting changes since the 2008 financial crisis, and are engaged in more conservative lending practices. Specifically, they have increased borrower credit and income requirements.”

If available, Prediction Scorecard will also record how John reached his conclusions, something like: “John extrapolates from previous periods in history when banks maintained strong credit and income standards, and avoided financial losses.” Of course, the date of John’s prediction must also be noted; in this case, let’s say, January 4, 2018.
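Using the hypothetical PredictionRecord structure sketched earlier, John’s entry might look something like this (the link is a placeholder, and the wording is mine):

simpson_2018 = PredictionRecord(
    commentator="John Simpson (Financial Times)",
    domain="bank and financial stability",
    statement=("The US banking system is managing risk effectively, so few "
               "major banks will suffer large losses within the next 5 years."),
    reasons=["Banks have made lasting changes since the 2008 financial crisis.",
             "Lending is more conservative, with higher borrower credit "
             "and income requirements."],
    method="Extrapolation from past periods of strong credit and income standards",
    source_url="https://example.com/simpson-banking-column",  # placeholder, not a real link
    date_made=date(2018, 1, 4),
    resolves_by=date(2023, 1, 4),
)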

Prediction Scorecard will track how John fared by monitoring events, perhaps on a yearly basis, until 2023 (when his prediction expires). This requires examining large American banks and considering whether any suffered notable losses. It also demands some degree of judgment. If a bank faced losses, were they, in context, “large”? If this happened to multiple banks, how many would still count as “few”? And how do we decide whether a bank is “major”?
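When the window closes, those judgment calls could be recorded explicitly rather than left implicit. A sketch, again with invented details:

# Hypothetical resolution in 2023, recording the judgment calls explicitly.
simpson_2018.outcome = "partly correct"
simpson_2018.reviewer_notes = (
    "Two regional banks took losses reviewers judged 'large' relative to capital, "
    "but neither clearly qualifies as 'major'; see the linked analysis for context."
)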

Since these issues are both qualitative and quantitative, insightful journalists and savvy data scientists can put matters into context, helping us understand how correct (or incorrect) someone was. We can also categorize each commentator’s predictions by subject matter, to assess domain expertise. So, if Simpson’s views on bank stability were correct, but he also wrote about oil prices and was wildly wrong there, that must be accounted for.

Also, within a given domain, we’ll be able to keep track of who gets things right (or not) more often. Suppose that in the field of bank and financial stability, Anna Chung (let’s say she writes for Bloomberg) gets more right over a 15-year period than Simpson, or anyone else. We should make note of this, although, of course, past success in prediction does not guarantee the same in the future.
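One simple way to tally this, sketched below under the same hypothetical record structure, is to compute the share of resolved predictions judged correct, grouped by whatever key we like: commentator, domain, or, anticipating the next point, the stated method.

from collections import defaultdict

def hit_rate(records, group_by, domain=None):
    """Share of resolved predictions judged 'correct', grouped by an arbitrary key."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        if r.outcome is None:
            continue  # still unresolved
        if domain is not None and r.domain != domain:
            continue  # restrict to one subject area, if asked
        key = group_by(r)
        total[key] += 1
        if r.outcome == "correct":
            correct[key] += 1
    return {k: correct[k] / total[k] for k in total}

# e.g., who fares best on bank stability, and which stated methods hold up overall
# (all_records is a hypothetical list of PredictionRecord entries):
# hit_rate(all_records, lambda r: r.commentator, domain="bank and financial stability")
# hit_rate(all_records, lambda r: r.method)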

We’ll also gain a greater understanding of why and how noted commentators reach certain conclusions. Perhaps, when predicting scientific developments, extrapolating from current trends is a powerful way of seeing what comes next, while in economics it works less well. By tracking the methods used to develop various forecasts, we will gain insight into which methods work most effectively under particular circumstances. We can develop a broad idea of how to think more clearly, and be less wrong.

Ultimately, forming accurate predictions is a tough line of work. What happens in the future is influenced by many different factors, working in ways that are hard to anticipate. Even the very best forecasters, over the long run, are likely to get much wrong. We shouldn’t shun people for mistaken predictions.

Yet, we should know who gets what right, who doesn’t, and why. This helps us develop deeper insights into what makes for effective (or flawed) predictions, and offers a stronger framework through which to analyze events. Ultimately, we can think more deeply, clearly, and objectively about the world we live in.