What in the world happened to economics?


Economists are all finally speaking the same language, but they still can’t answer the big questions. By Justin Fox

Economists rule the world. This is not a new phenomenon. “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood,” John Maynard Keynes wrote in 1936, in the famous conclusion to his General Theory of Employment, Interest, and Money. “Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist.” The world went on to prove Keynes right: His General Theory became the basis of economic policymaking in the U.S. and Europe for decades after his death.

Now the undisputed role of Keynesianism is long past, but the work of economics professors still rules. When we talk about emerging-markets currency crises, about Japanese stagnation, about European unemployment, about U.S. prosperity, the words we use and the framework that shapes our thinking come from Adam Smith, from Keynes, from Milton Friedman, from other academics we may never have heard of.

These days, in fact, it’s not just the economists’ ideas that have power. Of the three men who, by most accounts, currently run the world--Federal Reserve Chairman Alan Greenspan, Treasury Secretary Robert Rubin, and Rubin’s deputy and likely successor, Larry Summers--two have economics Ph.D.s. And while Greenspan has spent his career outside academia, Summers is a former Harvard and MIT professor of impeccable academic-economist stock--son of two professors, nephew of two Nobel laureates, and himself the 1993 recipient of the John Bates Clark Medal as the top under-40 American economist.

Sitting in his office overlooking the Washington Mall, Summers proudly reels off the names of other powerful products of the Cambridge, Mass., economics tradition whence he sprang: IMF first deputy managing director Stanley Fischer; former finance ministers Pedro Aspe of Mexico and Domingo Cavallo of Argentina; current Finance Minister Eduardo Aninat of Chile; Japanese vice minister of finance Eisuke Sakakibara (a stretch, although Sakakibara did teach at Harvard for a year in the 1980s); globetrotting adviser-to-governments Jeffrey Sachs. This isn’t a matter of politics. If the next President is a Republican, he’ll likely turn for advice to Summers’ academic mentor, Harvard professor Martin Feldstein. It is all, Summers concludes, “evidence of the triumph of a more analytically oriented approach.”

This is an interesting claim. The field of economics--or at least macroeconomics, the study of big issues like inflation, unemployment, and the business cycle--has been the scene of some very public battles over the past half-century. Is Summers saying somebody won? Also, the world economy is in a scary state these days. Politicians don’t know what to do about it; the hedge fund managers who a couple of years ago seemed to be running the show don’t have a clue. Is Summers saying the economists have an answer?

No, and no again. The pitched battles of years past are over, but nobody really won. Academic economists have indeed attained a state of relative peace and consensus, but they have done so by diminishing their expectations. They use similar analytical tools and come up with similar answers to narrow questions. But when it comes to explaining the behavior of the global economy, economists can’t agree--in fact, most of them no longer seem to believe there is a single correct explanation. Economists rule the world, but they aren’t quite sure what to do with it.

It wasn’t always this way, of course. To understand how economics went from warring dead certainties to peaceful confusion, one has to go back a few decades. Back to the 1960s, when most everybody was sure there was one obviously correct explanation for the workings of the economy. And back even further, to when the previous economic orthodoxy--which held that if the laws of supply and demand were allowed to work their magic, everything would turn out okay in the end--ran into the messy reality of the Great Depression.

In the U.S. in the 1930s, the financial system essentially stopped functioning, the market for real estate and other assets dried up, and unemployment remained stubbornly high. “Nothing in what I was taught from nine o’clock to ten o’clock in economic theory had any room to explain that,” says Paul Samuelson, then an undergraduate at the University of Chicago and now one of Larry Summers’ Nobel laureate uncles. “Finally, I heard [Chicago professor] Frank Knight say normal economics applies in normal times, but when it comes to really pathological times, it doesn’t apply.”

The job of designing an economics for pathological times fell to none other than John Maynard Keynes. Keynes--a lecturer at England’s Cambridge University as well as a journalist, an adviser to governments, an insurance company executive, a commodities speculator, and husband of a famous ballerina--had, like every other economist of his time, internalized the “classical” truisms that the laws of supply and demand interact to set prices at the appropriate level and that something called Say’s law decrees that every penny saved is automatically converted into investment. In the alternative economics Keynes proposed in the mid-1930s, however, savings sometimes gets stuffed in mattresses, prices and wages don’t always adjust to falling demand, and it’s perfectly possible for an economy to get stuck in a slump unless the government acts to stimulate demand.

By the 1950s this Keynesian economics had become the new orthodoxy--with first Cambridge University in England and then the Cambridge, Mass., neighbors Harvard and MIT as its temples. At MIT the high priest was Samuelson, whose best-selling introductory textbook, Economics, first published in 1948, introduced Keynesian ideas to generations of college students. Of course, by the 1950s the U.S. economy was no longer the pathological wreck that Keynes was trying to repair in the 1930s. But Samuelson and his fellow Keynesians figured they could banish economic downturns and mass unemployment by tweaking government fiscal and monetary policy. In the early 1960s, Samuelson and another MIT star and future Nobelist, Robert Solow, were able to put their ideas into practice as advisers to President Kennedy. The U.S. economy responded with the longest expansion in history. “It seemed an economics as free of ideological difficulties as, say, applied chemistry or physics, promising a straightforward expansion in economic possibilities,” wrote economists Robert Lucas and Thomas Sargent years later. “One might argue about how this windfall should be distributed, but it seemed a simple lapse of logic to oppose the windfall itself.”

A few economists weren’t so sure. The most notable dissenter was Milton Friedman of the University of Chicago. Friedman had been a graduate student at Chicago during the same troubled times that Samuelson was there as an undergrad, but he never shared Samuelson’s conviction that the Chicago economics they learned had failed. In the epic Monetary History of the United States he co-authored with Anna Schwartz in 1963, Friedman argued that the best explanation for the Great Depression was not market pathology but the failure of the Federal Reserve to keep the money supply from shrinking in the early 1930s.

Friedman’s emphasis on monetary policy--which had been deemed by Keynes to be impotent in times of true economic crisis and was thus long ignored by his disciples--had a big impact on economic discourse. But at first most economists adopted monetary policy as a way to keep the economy running on high-employment overdrive. Samuelson and Solow had brought to the U.S. the empirical evidence, first compiled by a British economist named A. W. Phillips, that there was a tradeoff between inflation and unemployment--that is, higher inflation meant lower unemployment. Allowing prices to rise seemed the only humane thing to do. Friedman argued that the unemployment/inflation tradeoff was temporary, and he also pointed out that using fiscal and monetary policy to avert recessions was a lot harder than it looked. These arguments weren’t ignored: For years Friedman and Samuelson wrote dueling columns for Newsweek; in 1967 Friedman was president of the American Economic Association. But his thinking wasn’t mainstream, either among Americans at large or within the economics profession.
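
In modern textbook notation--a later formalization, not Phillips’ own presentation--the tradeoff and Friedman’s objection to it can be compressed into one line:

\[ \pi_t = \pi_t^e - \beta\,(u_t - u^*) \]

Inflation today (π_t) depends on the inflation people expect (π_t^e) and on how far unemployment (u_t) sits from its “natural” rate (u*). As long as expectations stay put, policymakers can buy a little less unemployment with a little more inflation. Friedman’s point was that π_t^e eventually rises to match actual inflation, at which point unemployment drifts back to u* and all that remains is the higher inflation.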

That changed in the 1970s when the Mideast oil crisis hit the U.S. with both high inflation and high unemployment. Friedman won the Nobel Prize (six years after Samuelson) and became a best-selling author, TV personality, and revered adviser to free-market-oriented politicians around the world. Within the economics profession, however, the deathblow to the old ways came from the above-mentioned Robert Lucas, a Friedman student at the University of Chicago who had gone on to teach at what is now Carnegie Mellon University. Lucas wrote a series of articles in the 1970s hammering away at the theoretical underpinnings of Keynesian thought. He argued that if people are rational--a basic tenet of economics that we’ll discuss in more depth later--they can form rational expectations of predictable future events. So if the government gets in the habit of boosting spending or increasing the money supply every time the economy appears headed for a downturn, everybody will eventually learn that and adjust their behavior accordingly. Which means that regular government efforts to control the business cycle simply cannot work.
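
A stripped-down version of the argument--in the spirit of the later “policy ineffectiveness” literature, not Lucas’ own notation--shows how expectations defeat systematic policy:

\[ y_t = \bar{y} + \alpha\,\bigl(\pi_t - E_{t-1}[\pi_t]\bigr) \]

Output (y_t) rises above its normal level (ȳ) only when inflation (π_t) differs from what people rationally expected a period earlier (E_{t-1}[π_t]). If the government stimulates on a predictable schedule, the schedule gets built into expectations, the surprise term goes to zero, and the stimulus moves prices without moving output.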

By 1980, Lucas was able to claim--with some justification--that “one cannot find good, under-40 economists who identify themselves or their work as ‘Keynesian.’” For a while it looked as if Lucasism was the wave of the future and the University of Chicago (to which Lucas returned in 1974) had supplanted MIT and Harvard as the world’s center of economic thought. But the deductive logic of Lucas and other “new classical” economists led them to the stark conclusion that government monetary and fiscal policy should have no effect on the real economy. Even Lucas never really believed that, and the two early-1980s recessions brought on by the Federal Reserve convinced most economists that monetary policy could in fact have a real impact.

Lucas, who won the 1995 Nobel Prize for his critique of Keynesianism, has never come up with a viable alternative macro theory. One of his former students, Edward Prescott of the University of Minnesota, has proposed that recessions and booms could be explained as the stops and starts of technological progress. But this “real business cycle” school has yet to deliver anything of much use to economic policymakers. So Cambridge, Mass., home of the discredited Keynesian orthodoxy, got the opportunity to come up with a credible replacement. And in a way, it did.

The process started in 1973 when Stanley Fischer returned to MIT. Fischer had been a star student of Samuelson’s in the late 1960s, but MIT had a policy of not hiring its own newly minted Ph.D.s, so he ended up at the University of Chicago. By the time MIT called with a job offer several years later, Fischer had acquired both an appreciation for Friedman’s real-world approach to economics and an interest in Lucas’ theoretical critique of Keynesianism. Fischer brought over another Chicagoan, international economist Rudiger Dornbusch, and the pair came to dominate MIT economics for two decades. “They were bringing the latest thinking in,” recalls Columbia professor Frederic Mishkin, one of Fischer’s first students at MIT. “They had absorbed a lot of the Chicago approach, but had very open minds.” They also had open doors, and before long Fischer and Dornbusch had become MIT’s dissertation advisers of choice.

Across town at Harvard, the agent of change was Martin Feldstein. Feldstein, an Oxford Ph.D. who joined the Harvard faculty in 1967, specialized in investigating how the incentives created by government taxing and spending change the behavior of people and firms--an area that had been given short shrift by the Keynesians but that became, in cruder form, the heart of Reagan-era supply-side economics. His biggest impact on the study of economics, though, may have been his transformation of the National Bureau of Economic Research. The bureau, a private think tank, had been founded in 1920 by one of the most prominent economists of the day, Columbia’s Wesley Mitchell, to offer “scientific determination and impartial interpretation of facts bearing upon economic, social, and industrial problems.” Over the years it had sponsored some landmark research--including Milton Friedman’s monetary history--but by the late 1970s it was a fusty place known mainly as the official arbiter of when recessions start and end. Upon being named president of the NBER in 1977, Feldstein moved it from New York to Cambridge and brought in top academics like Fischer and Dornbusch, and their students, to churn out working papers using cutting-edge theory to examine real-world problems.

The products of this atmosphere include a disproportionate number of the world’s most prominent economists. There are the policymakers listed above; the chairmen of three top economics departments, Olivier Blanchard of MIT, Maurice Obstfeld of Berkeley, and Ben Bernanke of Princeton; the many Romers: Stanford growth-theorist Paul (who started his Ph.D. training at MIT but finished at Chicago), Berkeley economic historian Christina (no relation to Paul), and Berkeley macroeconomist David (married to Christina, not related to Paul); Columbia’s Mishkin, the former chief economist of the New York Fed; industrial organization guru Jean Tirole of the University of Toulouse; and, most familiar to readers of this magazine, FORTUNE columnists Greg Mankiw of Harvard and Paul Krugman of MIT. The members of this group don’t fit into any neat ideological or doctrinal category, but they are generally skeptical of both unfettered capitalism and government efforts to fetter it. They share Keynes’ conviction that markets can go wrong (some of the younger ones even call themselves new Keynesians) but have also accepted the criticisms of Friedman and Lucas.

To a casual observer this may sound like plain old common sense, and to a certain extent that’s what it is. But when these economists communicate with one another, it’s not in the language of common sense but in a jargon that has its roots in the work of 18th-century Scotsman Adam Smith. Smith’s masterpiece, An Inquiry Into the Nature and Causes of the Wealth of Nations, introduced the then-radical notion that selfish, greedy individuals, if allowed to pursue their interests largely unchecked, would interact to produce a wealthier society as if guided by an “invisible hand.” Smith never worked out a proof that this invisible hand existed, and not all subsequent economists agreed with his optimistic assessment--Thomas Malthus thought people would have too many children and overpopulate the world; Karl Marx thought capitalists would be so greedy they would bring down the system. But they all shared Smith’s view of economics as the study of people trying to maximize their material well-being.

This assumption of rational, maximizing behavior won out not just because it often reflected reality but because it was useful. It enabled economists to build mathematical models of behavior, to give their discipline a rigorous, scientific air. This process started in the mid-1800s, evolving by the end of the century into the approach known today as neoclassical economics (Marx having assigned the term “classical” to Smith and his immediate successors). And while 20th-century critics like the University of Chicago’s Thorstein Veblen and Harvard’s John Kenneth Galbraith argued that people are also motivated by altruism, envy, panic, and other emotions, they failed to come up with a way to fit these emotions into the models that economists had grown accustomed to--and thus had little impact.

Keynes, to get at his explanation for slumps, did have to assume that economic actions were sometimes motivated by “animal spirits” rather than by pure rationality. But he never tried to work this into a full-blown behavioral theory. After Keynes, in fact, economics came to be split into two parts. There was macroeconomics, which used broad strokes to depict the big things that Keynesians cared about: unemployment, inflation, and the business cycle. Then there was microeconomics, which examined how the interactions of rational individuals led to market outcomes. Macroeconomics described how economies malfunctioned; microeconomics described how they worked.

These two sides of economics coexisted uneasily in the same academic departments, sometimes in the same people. In his advice to President Kennedy and in his undergraduate textbook, MIT’s Samuelson offered thoroughly Keynesian explanations of macroeconomic phenomena; meanwhile, Samuelson’s landmark 1947 book, Foundations of Economic Analysis, taught generations of graduate students how to approach microeconomics as a set of mathematical models featuring rational actors. One of those students was Robert Lucas, who worked through Foundations, calculus textbook in hand, the summer before he started grad school at Chicago. Lucas’ subsequent theoretical work essentially forced Keynes’ (and Samuelson’s) macroeconomics to submit to the same relentless mathematical logic as Samuelson’s microeconomics--a test it couldn’t pass.

Microeconomics, however, was beginning to change. The neoclassical tradition reached an apotheosis in 1954 when future Nobel laureates Kenneth Arrow (another Summers uncle) and Gerard Debreu published an article that in essence mathematically proved the existence of Adam Smith’s invisible hand. This “general equilibrium” proof has been a mainstay of graduate-level economics training ever since. But Arrow soon moved on; he and other economists began working out ways in which rational behavior could lead to less-than-optimal market outcomes.

The most important tool in this analysis was game theory--the study of situations, like poker or chess games, in which players have to make their decisions based on guesses about what the other player is going to do next. Game theory was first adapted to economics in the 1940s by mathematician John von Neumann (the same von Neumann whose theoretical insights made the computer possible) and economist Oskar Morgenstern. But it took a while to catch on.
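
The standard classroom illustration--a textbook staple, not an example the economists in this story coined--is the prisoner’s dilemma, a two-player game whose payoffs fit in a small table:

\[
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Defect} \\
\hline
\text{Cooperate} & (3,\,3) & (0,\,5) \\
\text{Defect} & (5,\,0) & (1,\,1)
\end{array}
\]

Each cell lists (row player’s payoff, column player’s payoff). Whatever the other player does, defecting pays more, so two rational players both defect and collect 1 apiece when cooperating would have earned them 3 apiece--individually rational behavior producing a collectively lousy outcome, precisely the kind of less-than-optimal result the post-Arrow microeconomists were after.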

In 1963, Arrow was the first to hint at the game-theory implications of situations in which different parties to a transaction possess different amounts of information. But “asymmetric information” really came into its own in the 1970s as a way to explain the behavior of financial markets--which are extremely susceptible to information difficulties. Its leading theorist was probably Joseph Stiglitz, a 1966 MIT Ph.D., now the World Bank’s chief economist.

Another long-neglected aspect of microeconomics that Stiglitz and others began to study in the 1970s was increasing returns. To work out to an equilibrium, models of economic behavior had always had to assume that at a certain point makers of a product would be faced with diminishing returns: The more they produced, the less profit per piece. It had long been clear that this didn’t always reflect reality, but new math techniques and the growth of the software industry--a business in which making additional copies of a product costs virtually nothing--led economists to finally take increasing returns seriously.
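
A stylized cost equation--the numbers here are illustrative, not drawn from any actual software firm--shows why the industry breaks the old assumption:

\[ AC(q) = \frac{F}{q} + c \]

If writing a program costs a fixed amount F (say, $10 million) and each additional copy costs c, which is nearly zero, then the average cost AC falls without limit as quantity q grows: a million copies cost about $10 apiece to supply, ten million about $1. Returns increase with scale instead of diminishing, and the old equilibrium machinery no longer fits.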

This was the context in which the young scholars of Harvard and MIT learned economics in the late 1970s and early 1980s. Keynesian macroeconomics was dead, but nothing had sprung up in its place. Microeconomics, meanwhile, had moved away from the dead certainties of the past into a much more interesting thicket of research possibilities. The mathematical models that had come to form the basis of academic economics were shifting from general equilibrium, in which everything worked out for the best, to multiple equilibriums, in which it might not. “That was kind of a golden age for economic theorizing,” says Krugman.

Different people took to the atmosphere in different ways. Larry Summers became a master debunker, using theory and data to poke holes in new classical certainties. Paul Romer moved macroeconomics away from its business-cycle orientation to devise a new theory of long-term economic growth. And Krugman, whose academic work probably best represents the direction economics has taken, built lots of mathematical models of real-world economic phenomena.

The models, Krugman says, are constructed upon a couple of basic principles: “self-interested behavior and interaction--$100 bills don’t lie in the street for very long, and you don’t have sales that aren’t purchases.” Beyond that there are no clear rules. “What you end up looking for is a specific set of strategic simplifications,” he says.

The two models that made Krugman’s name in the late 1970s both involved international economics. One concluded that currency crises were rational, inevitable reactions to untenable government policies. The other overturned the conventional economic wisdom that countries could gain an advantage in trade only because of better technology or greater resources--by showing that the increasing returns inherent in making huge quantities of a product can lock in an advantage.

These two models shared no grand theme or ideology, and matters got even murkier when Krugman tried to draw policy conclusions from them. He gradually came over to the view that currency collapses can also result from self-fulfilling investor panics that overrun even countries with sensible economic policies. This has led him to conclude that controls on capital flows sometimes make sense. But he does not believe in restricting trade, even though his increasing-returns model seems to suggest advantages for the sort of protectionist, volume-building tactics used by Japanese industries in the 1980s.

Herein lies the dilemma of modern economics. Analytical techniques are becoming ever more sophisticated, but it is looking ever less likely that they’ll someday add up to a coherent, reliable science of economics. “If you ask grand questions of economic theory, you come up with garbage,” says David Colander, a historian of economic thought at Middlebury College. Most economists have come to agree. As a result they are staying away from grand questions and sticking to narrower ones.

It’s not that economists can’t agree on any big issues. The experience of the 1970s, plus the articles of Robert Lucas, appears to have banished from the economic mainstream all hankering for inflationary Fed policy--although there is debate over whether the optimal inflation rate is 3%, 2%, or 0%. There’s also consensus about what facilitates long-term growth: transparent financial markets; well-capitalized, well-regulated banks; free trade; educated workers; a reliable but not inflexible legal system; taxes and welfare benefits low enough to avoid disincentives to work.

The trouble comes when there’s trouble. In dealing with the impact of financial crises on the real economy, with downturns in the business cycle, with interactions between nations, the mathematical models of modern economics come up short. So economists make substitutions: guesswork, judgment, experience, ideology.

Which leads to large differences of opinion. Witness the response to the recent emerging-markets economic crises. Economists who use the same techniques, believe in the same principles, and studied under the same teachers are coming up with wildly different responses. Summers and Fischer have backed a tough-love policy of advancing IMF loans to countries in crisis but demanding that those countries shut down reckless banks, raise interest rates, and cut government spending. Stiglitz wants more generous lending and reregulation of global capital flows. Sachs favors creation of an international bankruptcy code under which troubled countries could seek protection. Krugman has urged countries to impose capital controls. Dornbusch, who taught Krugman international economics, says that’s nuts.

A big help, these economists are. Says Krugman: “I’ve got a guess, Jeff Sachs has a guess, and Larry Summers is ruling the world.” Summers has a slightly more reassuring take: “Ultimately there’s no alternative to judgment--you can never get the answers out of some model. But the reason there are many, many more good economists in positions of influence in the world is that one can understand the issues more sharply and clearly, and can pose the tradeoffs and can make more accurate judgments within a clear analytic framework.”

That’s a long way from saying economics has all the answers. But it’s about all any economist can honestly claim.


From Fortune, March 15, 1999, pp. 91-102. © 1999 by Time, Inc. All rights reserved. Reprinted by permission.