Considerable work has been done on creativity across a wide range of disciplines, including business, cognitive neuroscience, economics, history, psychology, and sociology, but until recently there had been little interaction among these researchers. On March 10 and 11, however, fifteen experts on innovation and creativity from these disciplines - both established senior scholars and emerging younger researchers - met at the NBER in Cambridge for a "New Ideas About New Ideas Conference" designed to foster cross-disciplinary dialogue on creativity and innovation. The conference, supported by the Sloan Foundation, was organized by NBER Research Associate Richard Freeman of Harvard University and Faculty Research Fellow Bruce Weinberg of Ohio State University. Weinberg also prepared this summary article for the NBER Reporter.
Given the varied group and the nature of scientific papers presented, one might have been concerned about the ability to communicate across disciplinary lines, let alone to find common interest, but soon cognitive neuroscientists were discussing history, psychologists were talking about economics, and everyone was poring over images of the brain. And, after close to 20 hours of discussions, some during an informal stroll along the Charles River, a set of themes emerged quite clearly along with policy implications and directions for future research.
Indeed, the timing was also fortuitous from a policy perspective. As the most recent State of the Union Address indicated, the United States increasingly sees its economic position challenged, and creativity and innovation are viewed as the most promising directions for us to take in maintaining our position. On the other hand, this increased interest in creativity and innovation may conflict with demographics - the workforce has been aging, and creativity and innovation traditionally have been regarded as linked to youth.
I. The Idiosyncrasy of Innovation
With innovation viewed as a way for the United States to maintain its economic position, it is natural to ask what can be done to foster it. As Daniel Goroff -- a mathematician and the Dean of Faculty at Harvey Mudd College -- discussed, colleges and universities increasingly are prioritizing creativity, and the Mills Commission is recommending systematic testing of higher education outcomes.
Our working definition of creativity was "the production of novel and useful ideas or artifacts." The discussion touched on the arts, industry, and sciences. Perhaps the single point on which there was the widest agreement was that, while there are recognizable patterns in creativity, the motivations of creators and the processes by which creative ideas arise are frequently specific to the individual, or idiosyncratic. The idiosyncratic nature of innovation showed up in brain images, in problem-solving experiments, and in analyses of historical and contemporary innovations and innovators. Participants were optimistic about our ability to foster creativity; however, we agreed that, to be successful, we must attempt to confront the idiosyncratic nature of creativity.
The Idiosyncratic Nature of the Creative Brain
At the finest level, the cognitive neuroscientists at the conference showed the idiosyncratic nature of creativity in brain functioning. Mark Jung-Beeman, a cognitive neuroscientist at Northwestern University, showed that distinct brain areas contribute when people solve problems with insight, that is, when solutions are accompanied by "Aha" moments. The patterns of brain activity suggest an increase in "top-down" processing, and increased contributions from the brain's right hemisphere. Jung-Beeman attributed the latter effect to the more diffuse links in the right brain, which allow for novel or idiosyncratic connections across distantly related concepts.
John Kounios, a cognitive neuroscientist at Drexel University, showed that, although the final moment of insight is sudden, there are substantial changes in brain activity leading up to insight solutions to problems, such as a quieting of the sensory areas of the brain in the seconds before the solution reaches consciousness. He interpreted these results as the unconscious brain searching for solutions, but having to quiet down external inputs to bring a candidate solution into consciousness. Moreover, patterns of brain activity before people even see a problem predict whether they will solve that problem with insight, or more analytically. Finally, he also reported that moderate levels of arousal are best for problem solving, with the optimal amount of arousal being lower as problems become more difficult.
Sohee Park, a cognitive neuroscientist at Vanderbilt University, discussed the link between psychosis and creativity. She showed that while psychosis itself may interfere with creativity, the more idiosyncratic associations among people who are psychosis-prone (but clinically normal) enhance creativity. She interpreted these results by arguing that psychosis increases the novelty of associations, but interferes with memory and other processes that are essential for creativity. Healthy but psychosis-prone people also show increased use of their right frontal lobe when they are generating novel ideas.
Teresa Amabile, a psychologist at Harvard Business School, discussed how emotions can influence the creative process. Although research on psychopathology has found a connection between depression and creativity among artists and writers, the emotion-creativity connection has been virtually unexamined among adults working in business organizations. Amabile discussed the results of an extensive study that found a consistent, positive relationship between positive emotion and creativity. Exploiting the longitudinal nature of the data, she showed that positive affect preceded creativity in the coming days. She also found that getting a new idea or having an insight can evoke immediate (though short-lived) feelings of elation - even when the idea is a relatively minor one. Thus, her study revealed bi-directional causality in the connection between positive emotion and creativity at work, and it also suggested an incubation effect whereby positive emotion on one day can stimulate new cognitive associations that bear fruit in the coming days. Given the different domains studied, these results are consistent with those of Park and others who study the link between psychosis and creativity.
Perhaps the most extreme statement of the view of innovation as idiosyncratic is the chance permutation model of Dean Keith Simonton, a psychologist at the University of California, Davis. In his model, important contributions are the result of purely random combinations of ideas. He discussed a broad range of evidence supporting his approach.
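Simonton's framework lends itself to a toy simulation. The sketch below is an illustration of the general idea of chance permutation, not Simonton's actual model; the pool size, trial count, and "usefulness" threshold are all made-up parameters. It draws random pairings from a pool of mental elements and counts the rare combinations that clear a high usefulness bar:

```python
import random

def chance_permutation(n_elements=50, n_trials=10_000, threshold=0.999, seed=0):
    """Toy sketch of chance permutation: random pairwise combinations of
    mental elements, where only rare combinations clear a 'usefulness'
    threshold. Parameters are illustrative, not calibrated."""
    rng = random.Random(seed)
    usefulness = {}  # each pair has a fixed latent usefulness, drawn once
    hits = []
    for _ in range(n_trials):
        pair = tuple(sorted(rng.sample(range(n_elements), 2)))
        if pair not in usefulness:
            usefulness[pair] = rng.random()
        if usefulness[pair] > threshold:
            hits.append(pair)
    return hits

hits = chance_permutation()
# Most random combinations are useless; important ones are rare.
print(f"{len(hits)} useful combinations in 10,000 random trials")
```

The point of the sketch is only that, under purely random combination, valuable ideas arrive rarely and unpredictably, which is the flavor of Simonton's evidence.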
The Idiosyncratic Nature of Creative Motivations
The motivations of innovators also are idiosyncratic, and this is particularly true in the initial development of an innovation. Josh Lerner, an economist at Harvard Business School, showed that many of the early developers of open-source code were hackers who contributed to open source code for the stimulation of programming, the recognition of their peers, or to oppose commercial software manufacturers. Companies only contribute to open source code once a large body of code has been developed and the commercial benefits are clear.
David Galenson and Bruce Weinberg, economists at the University of Chicago and Ohio State University, find that early phases of revolutions in the arts, industry, and science are more likely to arise from individuals pursuing their aesthetic goals or from serendipity. As revolutions develop, market factors become more important. In physics for instance, many of the discoveries that led to the development of quantum mechanics arose accidentally, but later contributions were self-conscious attempts to explain earlier results.
Consistent with these findings, Teresa Amabile's research has revealed an intrinsic motivation principle of creativity: people will be most creative when they are motivated primarily by the interest, enjoyment, satisfaction, meaningfulness, and personal challenge of the work itself, rather than by extrinsic inducements or constraints. This principle is best understood within a social-psychological view of creativity. Although people's production of creative (novel and appropriate) work certainly depends on both their domain expertise and their creative thinking skills, it also depends on their level of intrinsic motivation for the work, which can be strongly influenced by the inducements and constraints in their social environment. Experimental and non-experimental research has revealed several aspects of work environments, such as a primary focus on tangible rewards or critical evaluation, which can undermine intrinsic motivation and creativity. Also, several aspects of work environments -- such as autonomy and optimal challenge in the work -- can support intrinsic motivation and creativity. For example, using their diary database, Amabile and her colleagues have discovered a number of specific leader behaviors in the everyday work environment that have positive or negative effects on daily perceptions of leader support for creativity and, thus, on creativity itself.
Richard Freeman sought to understand gender differences in involvement in the sciences in terms of gender differences in the response to incentives. He noted that incentives for innovation often take the form of tournaments, where the first person to succeed receives most or all of the returns. The evidence indicates that women shy away from these situations, providing an explanation for the under-representation of women in the sciences.
Gerald Holton, a physicist and historian of science, discussed the role of thema: unquestioned principles held by individuals that guide their creativity. For instance, Newton's view of the universe as being designed by God shaped the questions he asked and the answers he gave. Thus, thema are individual influences that shape a person's creative work.
David Kaiser, a physicist and historian at MIT, presented a cautionary tale from the rapid, post-war expansion of physics. He showed how the expansion led teachers to emphasize the most mechanical sides of quantum mechanics, which were easier to teach, while shying away from more qualitative questions of interpretation - the "What does it all mean?" musings that had so exercised the discipline's leaders before the war. In this way, concrete pedagogical pressures helped to change how modern physics was handled in the classroom, and, indeed, what would count as "creative" among the younger generation.
Thus, cognitive neuroscientists, economists, historians, and psychologists all see creativity as being idiosyncratic in terms of the processes through which it develops and the motivations of creators. Given this view, there was great concern about efforts to test outcomes in higher education. Similarly, the group was concerned about the ability to identify areas for innovation and target support to them, as opposed to supporting innovation more broadly. Nevertheless, the United States must strive for excellence in education and support scientific research that will provide the basis for future economic growth in a flexible way.
II. The Geography of Creativity
The emergence of technological centers, such as Silicon Valley and the Route 128 Corridor outside of Boston, has been attributed to knowledge spillovers among innovators. In other words, the presence of many others working on related problems is assumed to lead to informal interactions that foster creativity. This phenomenon can operate at the level of cities and even nations, and is an important motivation for public investment in research.
Perhaps the finest grained evidence here comes from the work of David Kaiser, who has traced the flow of ideas among physicists, looking at the development, mutations, and spread of Feynman diagrams. Using these diagrams, which illustrate interactions between particles, he shows how interacting communities modify and define techniques.
Weinberg has shown that geography affects the probability of contributing to a scientific revolution. People who went to graduate school at a place where they were exposed to the people who pioneered the new paradigm were more likely to make contributions to that paradigm than people who attended other schools. The nature of their work also was affected.
Lynne Zucker and Michael Darby, a sociologist and economist at the University of California, Los Angeles, have studied the flow of knowledge from academia to industry. For a variety of leading technologies -- including semiconductors, biotechnology, and nanotechnology -- startup firms are more likely to develop in cities where there are more star academic researchers. This work provides an important, direct link between academic research and industrial innovation.
While there was considerable optimism about the future of the United States as a leader in creativity and innovation, there was a general view that the distance between the United States and other countries likely would shrink. As other countries develop their research capabilities, scientific breakthroughs and their commercial applications likely will shift overseas, at least to some extent. For us to maintain a strong position, we will have to invest in our scientific and industrial innovative communities.
III. Innovation and an Aging Workforce
Work on the effect of age on creativity dates back at least to Harvey Lehmann's Age and Achievement, published in 1953. The relationship between age and creativity is particularly important today with new technologies developing rapidly and the workforce aging, driven by the large baby boom generation. Will our ability to innovate and take advantage of innovations be affected by the aging workforce? How can companies that need to innovate adapt to an aging workforce?
While there seems to be a presumption that creativity is associated with youth, there was a consensus that older individuals can be, and frequently are, highly creative. David Galenson outlined a distinction between experimental and conceptual innovators. Conceptual innovators work deductively and frequently make their most important contributions early in their careers. Experimental innovators work inductively, accumulating knowledge from trial-and-error experiments, and tend to do their most important work later in their careers. These experimental innovators may be entering their peak years of creativity.
Dean Keith Simonton discussed a different approach, one that focuses on disciplines rather than the styles of individual innovators. In his view, creativity varies across disciplines depending on the rate at which ideas can be developed and elaborated. He argued that in many fields, creativity increases for much of life; and, even in fields where creativity is greatest at early ages, older individuals make important contributions at the same rate as younger individuals after one controls for their lower rate of publication.
Regardless of the approach -- and there was an active, scholarly discussion of the relative merits of the two approaches -- it is clear that older individuals are often highly creative. Both approaches imply that there will be differences across fields in the age at which people are most creative: in Galenson's approach, the shares of conceptual innovators, who tend to be most creative when young, and experimental innovators, who tend to be most creative later in their careers, vary across fields. One wonders whether the development of information technology was due at least in part to the relative youth of the workforce and if technological progress may shift to other areas as the workforce ages.
Ben Jones, an economist at Northwestern University's Kellogg School of Management, and Bruce Weinberg took another view of the relationship between age and creativity. Jones argued that the accumulation of knowledge over time generates a burden of knowledge. In his words, while later generations have the advantage of being able to see further by standing on the shoulders of giants, they suffer from having longer climbs. He showed that the age at which innovators in the sciences and industry do important work has been increasing over the twentieth century. While his view is pessimistic at some level - it implies that innovators will spend more of their careers getting to the knowledge frontier and less time innovating - now we may have the advantage of having a population that has reached an age where they are largely done climbing.
Weinberg's work has shown that people who make contributions to new scientific paradigms tend to have been exposed to them in their formative professional years. While this result would suggest that younger individuals are more involved with important innovations, he has also found that older individuals often make the contributions that set off innovative revolutions. Thus, older individuals have a crucial role to play in the innovative process.
IV. Future Work
Many directions for future work emerged from the meeting. There was considerable interest in using the emerging tools of cognitive neuroscience to test the foundations of other theories. Among the possibilities would be to probe the effect of age on creativity and receptivity to new ideas by looking at changes in cognition over the life cycle. Another area in which links could be made was the relationship between affect and psychosis and creativity across domains. There was also interest in linking observational data on the creative output of scientists or industrial innovators to information about cognitive functioning. Similarly, it would be valuable to study cognition under various incentives and other aspects of the social environment. Work is also necessary to reconcile differences in views of creativity that have emerged across the various disciplines.
Amabile, T.M., Creativity in Context: Update to The Social Psychology of Creativity, Boulder, CO: Westview Press, 1996.
Amabile, T.M., E.A. Schatzel, G.B. Moneta, and S.J. Kramer, "Leader Behaviors and the Work Environment for Creativity: Perceived Leader Support," The Leadership Quarterly, 15 (1), 2004, pp. 5-32.
Amabile, T.M., S.G. Barsade, J.S. Mueller, and B.M. Staw, "Affect and Creativity at Work," Administrative Science Quarterly, 50 (3), 2005, pp. 367-403.
Folley, B.S., and S. Park, "Verbal Creativity and Schizotypal Personality in Relation to Prefrontal Hemispheric Laterality: A Behavioral and Near-Infrared Optical Imaging Study," Schizophrenia Research, 80, 2005.
Galenson, D.W., "A Portrait of the Artist as a Very Young or Very Old Innovator," NBER Working Paper No. 10515, May 2004.
Galenson, D.W., and B.A. Weinberg, "Creating Modern Art: The Changing Careers of Painters in France from Impressionism to Cubism," American Economic Review, 91 (4), September 2001, pp. 1063-71.
Jones, B.F., "Age and Great Invention," Northwestern University Working Paper, 2005.
Jung-Beeman, M., E.M. Bowden, J. Haberman, J.L. Frymiare, S. Arambel-Liu, R. Greenblatt, P.J. Reber, and J. Kounios, "Neural Activity When People Solve Verbal Problems with Insight," PLoS Biology, 2 (4), April 2004.
Kaiser, D., Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar Physics, Chicago: University of Chicago Press, 2005.
Kaiser, D., "Training Quantum Mechanics: Enrollments and Epistemology in Modern Physics."
Lerner, J., and J. Tirole, "The Dynamics of Technology Sharing: Open Source and Beyond," NBER Working Paper No. 10956, December 2004.
Simonton, D.K., "Creativity: Cognitive, Personal, Developmental, and Social Aspects," American Psychologist, January 2000.
Weinberg, B.A., "Which Labor Economists Invested in Human Capital? Geography, Vintage, and Participation in Scientific Revolutions," Ohio State University Working Paper, February 2006.
Zucker, L.G., M.R. Darby, and M.B. Brewer, "Intellectual Human Capital and the Birth of U.S. Biotechnology Enterprises," American Economic Review, 88 (1), March 1998.
Christiano, Eichenbaum, and Vigfusson analyze the quality of VAR-based procedures for estimating the response of the economy to a shock. They focus on two key questions. First, do VAR-based confidence intervals accurately reflect the actual degree of sampling uncertainty associated with impulse response functions? Second, what is the size of bias relative to confidence intervals, and how do coverage rates of confidence intervals compare to their nominal size? They address these questions using data generated from a series of estimated dynamic, stochastic general equilibrium models. They organize most of their analysis around a particular question that has attracted a great deal of attention in the literature: how do hours worked respond to an identified shock? In all of their examples, as long as the variance in hours worked attributable to a given shock is above the remarkably low number of 1 percent, structural VARs perform well. This is true regardless of whether identification is based on short-run or long-run restrictions. Confidence intervals are wider in the latter case. Even so, long-run identified VARs can be useful for discriminating between competing economic models.
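The coverage question can be illustrated on a much smaller scale. The sketch below is a toy AR(1) exercise standing in for the paper's DSGE-based experiments, with all parameter values chosen for illustration only: it checks how often a nominal 95 percent delta-method confidence interval for an impulse response actually contains the true value.

```python
import numpy as np

def coverage_ar1(phi=0.9, T=200, horizon=4, n_rep=500, seed=0):
    """Monte Carlo check of delta-method confidence intervals for the
    horizon-h impulse response phi**h of an AR(1) - a toy stand-in for
    the VAR coverage exercises described above."""
    rng = np.random.default_rng(seed)
    true_irf = phi ** horizon
    covered = 0
    for _ in range(n_rep):
        e = rng.standard_normal(T)
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = phi * y[t - 1] + e[t]
        x, z = y[:-1], y[1:]
        phi_hat = x @ z / (x @ x)            # OLS without intercept
        resid = z - phi_hat * x
        se_phi = np.sqrt(resid @ resid / (len(x) - 1) / (x @ x))
        irf_hat = phi_hat ** horizon
        se_irf = horizon * abs(phi_hat) ** (horizon - 1) * se_phi  # delta method
        lo, hi = irf_hat - 1.96 * se_irf, irf_hat + 1.96 * se_irf
        covered += lo <= true_irf <= hi
    return covered / n_rep

rate = coverage_ar1()
print(f"empirical coverage of a nominal 95% interval: {rate:.2f}")
```

Small-sample bias in the autoregressive coefficient typically pushes empirical coverage somewhat below the nominal level, which is the kind of discrepancy the authors quantify in far richer settings.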
Davis, Haltiwanger, Jarmin, and Miranda study the distribution of growth rates among establishments and firms in the U.S. private sector from 1976 onwards. To carry out their study, they exploit the recently developed Longitudinal Business Database (LBD), which contains annual observations on employment and payroll for all business establishments and firms. Their main finding is a large secular decline in the cross-sectional dispersion of firm growth rates and in the average magnitude of firm-level volatility. Measured in the same way as in other recent research, the employment-weighted mean volatility of firm growth rates in the private sector has declined by more than 40 percent since 1982. This result stands in sharp contrast to previous findings of rising volatility for publicly traded firms based on COMPUSTAT data. They confirm the rise in volatility among publicly traded firms using the LBD, but show that its impact is overwhelmed by declining volatility among privately held firms. The rising activity share, higher volatility, and increasingly volatile character of newly listed firms after 1979 explains much of the trend increase in volatility among publicly traded firms. They also show that business volatility and dispersion declined much more rapidly in Retail Trade and Services than in Manufacturing.
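Volatility measures in this literature are typically built from the symmetric growth rate associated with Davis and Haltiwanger, which is bounded and accommodates firm entry and exit. A minimal sketch, using made-up firm-level employment figures:

```python
import numpy as np

def dhs_growth(e_prev, e_curr):
    """Symmetric (Davis-Haltiwanger) growth rate: the change divided by
    average size. Bounded in [-2, 2]; entry (e_prev = 0) maps to 2 and
    exit (e_curr = 0) maps to -2."""
    e_prev, e_curr = np.asarray(e_prev, float), np.asarray(e_curr, float)
    denom = 0.5 * (e_prev + e_curr)
    return np.where(denom > 0, (e_curr - e_prev) / denom, 0.0)

def weighted_dispersion(growth, weights):
    """Employment-weighted standard deviation of firm growth rates."""
    w = weights / weights.sum()
    mean = w @ growth
    return np.sqrt(w @ (growth - mean) ** 2)

# Hypothetical firm-level employment in two consecutive years.
emp_prev = np.array([100, 50, 0, 200])   # third firm is an entrant
emp_curr = np.array([120, 30, 40, 200])
g = dhs_growth(emp_prev, emp_curr)
w = 0.5 * (emp_prev + emp_curr)          # average-size weights
print(g, weighted_dispersion(g, w))
```

Weighting by average firm size is what makes the measure an "employment-weighted" volatility of the kind whose secular decline the authors document.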
To appreciate the role of a "not-so-well-known aggregation theory" that underlies Prescott's (2002) conclusion that higher taxes on labor have depressed Europe relative to the United States, Ljungqvist and Sargent compare aggregate outcomes for economies with two alternative arrangements for coping with indivisible labor: employment lotteries plus complete consumption insurance; and individual consumption smoothing via borrowing and lending at a risk-free interest rate. Under idealized conditions, the two arrangements support equivalent outcomes when human capital is not present; when it is present, outcomes are naturally different. Households' reliance on personal savings in the incomplete markets model constrains the "career choices" that are implicit in their human capital acquisition plans relative to those that can be supported by lotteries and consumption insurance in the complete markets model. Lumpy career choices make the incomplete markets model better at coping with a generous system of government funded compensation to people who withdraw from work. Adding generous government supplied benefits to Prescott's model with employment lotteries and consumption insurance causes employment to implode and prevents the model from matching outcomes observed in Europe.
Davig and Leeper estimate regime-switching rules for monetary policy and tax policy over the post-war period in the United States and impose the estimated policy process on a model with nominal rigidities. Decision rules are locally unique and produce a stationary rational expectations equilibrium in which (lump-sum) tax shocks always affect output and inflation. Tax non-neutralities in the model arise solely through the mechanism articulated by the fiscal theory of the price level. This paper quantifies that mechanism and finds it to be important in U.S. data, reconciling a popular class of monetary models with the evidence that tax shocks have substantial impacts. Because long-run policy behavior determines existence and uniqueness of equilibrium, in a regime-switching environment more accurate qualitative inferences can be gleaned from full-sample information than by conditioning on policy regime.
Golosov, Tsyvinski, and Werning present a simple dynamic Mirrleesian model. There are two main goals for this paper: to review some recent results and contrast the Mirrlees approach with the Ramsey framework in a dynamic setting; and to present new numerical results for a flexible two-period economy featuring aggregate shocks.
Piazzesi and Schneider consider how the role of inflation as a leading business-cycle indicator affects the pricing of nominal bonds. They examine a representative-agent asset-pricing model with recursive utility preferences and exogenous consumption growth and inflation. They solve for yields under various assumptions on the evolution of investor beliefs. If inflation is bad news for consumption growth, the nominal yield curve slopes up. Moreover, the level of nominal interest rates and term spreads are high in times when inflation news is harder to interpret. This is relevant for periods such as the early 1980s, when the joint dynamics of inflation and growth were not well understood.
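The mechanism behind the "bad news" result can be seen in a two-state toy example. The sketch below uses simple power utility rather than the recursive preferences of the paper, and all numbers are illustrative; it prices a one-period nominal bond as E[m / inflation] and compares yields when high inflation coincides with the bad versus the good consumption state.

```python
import numpy as np

beta, gamma = 0.99, 5.0                # illustrative preference parameters
prob = np.array([0.5, 0.5])            # good state, bad state
cons_growth = np.array([1.02, 0.98])
m = beta * cons_growth ** (-gamma)     # real stochastic discount factor

def nominal_yield(inflation):
    """One-period nominal yield implied by the bond price E[m / inflation]."""
    return 1 / (prob @ (m / inflation)) - 1

y_bad_news = nominal_yield(np.array([1.01, 1.05]))   # inflation high in the bad state
y_good_news = nominal_yield(np.array([1.05, 1.01]))  # inflation high in the good state
print(f"bad-news inflation yield: {y_bad_news:.4f}, "
      f"good-news inflation yield: {y_good_news:.4f}")
```

When inflation is high exactly when marginal utility is high, nominal bonds pay off least in the states investors care about most, so they must offer a higher yield - the one-period analogue of the upward-sloping nominal curve.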
These papers will appear in an annual volume published by the MIT Press. Its availability will be announced in a future issue of the Reporter. They can also be found at "Books in Progress" on the NBER's website.
Rising R and D expenditures and falling counts of new drug approvals since 1996 have led many observers to conclude that there has been a sharp decline in research productivity in the pharmaceutical industry over the past decade. But a close look at the underlying data, Cockburn suggests, shows that these trends are greatly exaggerated: properly measured, research output is unlikely to have fallen as much as these figures imply, while trends in R and D expenditure are seriously overstated by failing to account for inflation in R and D input costs. Some of the increase in R and D investment is a necessary, indeed welcome, response to new technological opportunities and can be expected to deliver a handsome return of innovative drugs in future years. The rising cost per new drug approved is nonetheless a serious cause for concern, particularly where this is driven by transactions costs and other inefficiencies in the market for basic research, and by late-stage abandonment of drug development projects on purely economic grounds. Policies that make "small" markets more attractive, build capacity in translational medicine, reduce the cost, time, and uncertainty of regulatory review, maximize access to basic research, and encourage greater cooperation and collaborative research within the industry can all contribute to greater R and D efficiency.
Murray and Stern describe the impact of formal intellectual property rights (IPR) on the production and diffusion of "dual knowledge" - ideas that are simultaneously of value as a scientific discovery and as a useful, inventive construct. They argue that a great deal of knowledge generated in academia, particularly in the life sciences, falls into this category (sometimes referred to as Pasteur's Quadrant). The production and diffusion of dual purpose knowledge challenges the premise of most science policy analysis, implicitly based on a clear separation between basic scientific knowledge and applied knowledge useful in the development of new technology. Instead, dual knowledge simultaneously makes both a basic and an applied contribution. The authors review qualitative and quantitative evidence relating to the policy challenges raised by the production and dissemination of dual knowledge, highlighting three broad findings. First, rather than facing a fundamental tradeoff between applied research and more fundamental scientific knowledge, research agencies can and do invest in dual purpose knowledge. Indeed, the dual purpose knowledge framework suggests a distinct rationale for public sector involvement in the funding and conduct of research: the social impact of a given piece of knowledge may be enhanced when knowledge is produced and disclosed in accordance with the norms of the scientific research community (particularly compared to secrecy). Second, within Pasteur's Quadrant, the increased use of formal IPR seems to be significantly shaping the structure, conduct, and performance of both university and industry researchers. On the one hand, from the perspective of individual researchers, patenting does not seem to come at the expense of scientific publication, and both respond to the process of scientific discovery. 
However, there is some evidence that patent grants may reduce the extent of use of knowledge: the citation rate to a scientific article describing a dual-purpose discovery experiences a modest decline after patent rights are granted over that knowledge. Finally, the impact of patents may be indirect; rather than directly affecting behavior through patent enforcement, scientific conduct may be influenced by related mechanisms, such as material transfer agreements. Not simply a legal document within a seamless web of cooperation, nor a bludgeon to stop scientific progress in its tracks, patents seem to be changing the "rules of the game" for scientific exchange, cooperation, and credit.
Stephan analyzes data concerning the placements of new PhDs who had definite plans to go to work in industry for the period 1997-2002. Her data come from the Survey of Earned Doctorates overseen by the National Science Foundation. She finds knowledge sources to be heavily concentrated in certain regions and states. Moreover, the geographic distribution of knowledge sources, as measured by where PhDs going to work in industry are trained, is different than other measures of knowledge sources, such as university R and D-expenditure data, would suggest. A major headline is the strong role played by Midwestern universities, which educate over 26.5 percent of all PhDs going to industry but are responsible for only 21.1 percent of university R and D. Stephan finds that only 37 percent of PhDs trained in science and engineering stay in their state of training. Particularly among certain Midwestern states, many students leave for employment on the Coasts. The firms most likely to hire new PhDs are found in computer and electrical products, followed by firms in publishing and professional, scientific, and technical services. Almost one out of ten new PhDs going to work for industry heads to San Jose; 58 percent go to work in one of twenty cities. The placement data also suggest that small firms play a larger role in innovation than R and D expenditure data would suggest.
Innovations can often be targeted to be more valuable for some consumers than others. This is especially true for digital information goods. Brynjolfsson and Zhang show that the traditional price system not only results in significant deadweight loss, but also provides incorrect incentives to the creators of these innovations. In contrast, they propose and analyze a profit-maximizing mechanism for bundles of digital goods that is more efficient and more accurately provides innovation incentives for information goods. Their "statistical couponing mechanism" does not rely on the universal excludability of information goods, which creates substantial deadweight loss, but instead estimates social value created from new goods and innovations by offering coupons to a relatively small sample of representative consumers. They find that the statistical couponing mechanism can operate with less than 0.1 percent of the deadweight loss of the traditional price-based system, while more accurately aligning incentives with social value.
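A stylized simulation conveys the intuition. This sketch is not the authors' actual mechanism: it simply assumes uniform valuations for a zero-marginal-cost good, computes the deadweight loss from uniform monopoly pricing, and then supposes that only a 0.1 percent random sample of consumers faces price-like coupon offers while everyone else receives the good.

```python
import numpy as np

rng = np.random.default_rng(1)
valuations = rng.uniform(0, 100, size=100_000)  # zero-marginal-cost digital good

# Traditional uniform pricing: the revenue-maximizing price excludes
# every consumer whose valuation falls below it.
prices = np.linspace(0, 100, 1001)
revenue = np.array([(valuations >= p).sum() * p for p in prices])
p_star = prices[int(np.argmax(revenue))]
dwl_price = valuations[valuations < p_star].sum()   # value destroyed by exclusion

# Couponing sketch: everyone gets the good; only a small random sample
# faces offers that reveal willingness to pay, so exclusion (and hence
# deadweight loss) is confined to that sample.
sample = rng.random(valuations.size) < 0.001        # 0.1% of consumers
dwl_coupon = valuations[sample & (valuations < p_star)].sum()

print(f"deadweight-loss ratio (coupon / price system): "
      f"{dwl_coupon / dwl_price:.4f}")
```

Because exclusion applies only to the sampled sliver of the market, the residual deadweight loss scales with the sampling rate, which is the arithmetic behind the "less than 0.1 percent" comparison.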
Diermeier, Hopp, and Iravani propose a rigorous modeling framework for characterizing the structural ability of organizations to respond quickly and effectively to unanticipated events. As such, the authors seek to provide a theoretical basis for improved crisis management strategies. Their framework conceptualizes organizations as adaptive, responsive networks. Most existing models of complex social networks, however, have not explicitly modeled human capacity constraints or system congestion. As a result, no viable frameworks exist for investigating the responsiveness of various organizational structures under crisis conditions. The authors propose to integrate the social-network approach to modeling communication and collaboration with the flow-network approach from production-systems modeling as a way of representing task processing and flow under crisis conditions. By providing an analytic structure for decisionmaking environments currently viewed as not amenable to formal methods, this research may improve the performance of various organizations in both the private and public sectors.
These papers will appear in an annual volume published by the MIT Press. Its availability will be announced in a future issue of the Reporter. They can also be found at "Books in Progress" on the NBER's website.
Gilchrist and Saito study the implications of financial market imperfections, represented by a countercyclical external finance premium and the gradual recognition of changes in the drift of technology growth, for the design of an interest rate rule. Asset price movements induced by changes in trend growth influence balance sheet conditions, which in turn determine the premium on external funds. The presence of financial market frictions provides a motivation for responding to the gap between the observed asset price and the potential asset price, in addition to responding strongly to inflation. This is because the asset price gap represents distortions in the resource allocation induced by financial market frictions more distinctly than inflation does. Policymakers' imperfect information about the drift of technology growth makes the calculation of potential asset prices imprecise and thus reduces the benefit of responding to the asset price gap. Asset price targeting that does not take into account changes in potential tends to be welfare reducing.
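A rule of the kind discussed can be sketched as follows; the functional form and all coefficients here are illustrative assumptions, not the authors' estimates:

```python
def policy_rate(inflation, asset_price_gap,
                neutral_rate=0.02, phi_pi=1.5, phi_a=0.1):
    """Illustrative interest rate rule: respond strongly to inflation
    and, because of financial frictions, also to the gap between the
    observed and the potential asset price (all parameters assumed)."""
    return neutral_rate + phi_pi * inflation + phi_a * asset_price_gap

# With 2 percent inflation and a 5 percent asset price overvaluation:
print(f"{policy_rate(0.02, 0.05):.4f}")  # prints 0.0550
```

Note that any error in measuring the potential asset price feeds directly into the gap term, which is the sense in which imperfect information about trend growth weakens the case for a large response coefficient on the gap.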
The modern view of monetary policy stresses its role in shaping the entire yield curve of interest rates in order to achieve various macroeconomic objectives. A crucial element of this process involves guiding financial market expectations of future central bank actions. Recently, a few central banks have started to explicitly signal their future policy intentions to the public, and two of these banks have even begun publishing their internal interest rate projections. Rudebusch and Williams examine the macroeconomic effects of direct revelation of a central bank's expectations about the future path of the policy rate. They show that, in an economy where private agents have imperfect information about the determination of monetary policy, central bank communication of interest rate projections can help shape financial market expectations and improve macroeconomic performance.
Dewachter and Lyrio present a macroeconomic model in which agents learn about the central bank's inflation target and the output-neutral real interest rate. They use this framework to explain the joint dynamics of the macroeconomy and the term structures of interest rates and inflation expectations. Introducing learning into the macro model generates endogenous stochastic endpoints which act as level factors for the yield curve. These endpoints are sufficiently volatile to account for most of the variation in long-term yields and inflation expectations. As such, this paper complements the current macro-finance literature in explaining long-term movements in the term structure without reference to additional latent factors.
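The learning mechanism can be sketched with a simple constant-gain updating rule; the gain parameter and the inflation path below are illustrative assumptions:

```python
# Agents nudge their perceived inflation target toward each observed
# inflation outcome; the resulting slowly drifting perception acts as a
# stochastic endpoint (a level factor) for long-term yields.
gain = 0.02
perceived_target = 0.02
for observed_inflation in [0.03, 0.035, 0.03, 0.025, 0.03]:
    perceived_target += gain * (observed_inflation - perceived_target)
print(f"perceived target after five observations: {perceived_target:.4f}")
```

With a small gain, the perceived target moves only gradually, so persistent inflation surprises translate into persistent shifts in the endpoint, and hence in long yields, even when short-run dynamics are unchanged.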
Commodity prices are back. Frankel looks at connections between monetary policy and agricultural and mineral commodities. He begins with the monetary influences on commodity prices, first for a large country such as the United States, then for smaller countries. The claim is that low real interest rates lead to high real commodity prices. The theory is an analogy with Dornbusch overshooting. The relationship between real interest rates and real commodity prices also is supported empirically. One channel through which this effect operates is a negative effect of interest rates on the desire to carry commodity inventories. Frankel concludes with a consideration of the reverse causality: the possible influence of commodity prices on monetary policy, under alternative currency regimes. The new proposal of PEPI (Peg the Export Price Index) compares favorably with the popular regime of CPI targeting by the criterion of robustness with respect to changes in the terms of trade, such as oil price shocks.
Cecchetti uses data from a broad cross-section of countries to examine GDP at risk and price-level at risk arising from booms and crashes in equity and property markets. He shows that the distributions of GDP and price-level deviations from their trends both have fat tails, so the probability of extreme events is higher than a normal distribution would imply. Specifically, housing booms create outsized risks of output declines. This means that policymakers who are intent on averting catastrophes should react. The question is: How?
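The fat-tails point can be illustrated with a small simulation. Here deviations are drawn from a mixture of a calm regime and a rare volatile regime, a stand-in for boom-bust dynamics; the mixture weights and variances are assumptions for illustration, not estimates:

```python
import math
import random

random.seed(0)

# Mixture: usually a calm draw, occasionally a high-volatility draw.
def draw():
    return random.gauss(0, 1) if random.random() < 0.95 else random.gauss(0, 4)

draws = [draw() for _ in range(200_000)]
sd = math.sqrt(sum(x * x for x in draws) / len(draws))

# Empirical probability of a deviation beyond 3 standard deviations...
p_empirical = sum(abs(x) > 3 * sd for x in draws) / len(draws)

# ...versus what a normal distribution with the same variance implies.
p_normal = math.erfc(3 / math.sqrt(2))

print(f"empirical tail prob: {p_empirical:.4f}  normal tail prob: {p_normal:.4f}")
```

Even though both distributions have the same variance, the mixture puts far more mass beyond three standard deviations, which is exactly why a policymaker calibrating to a normal distribution would understate the risk of extreme output declines.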
Clarida and Waldman make a theoretical point and provide some empirical support for it: they show in a simple, but robust, theoretical monetary exchange rate model that the sign of the covariance between an inflation surprise and the nominal exchange rate can tell us something about how monetary policy is conducted. Specifically, they show that "bad news" about inflation - that it is higher than expected - can be "good news" for the nominal exchange rate - which appreciates on this news - if the central bank has an inflation target that it implements with a Taylor Rule. The model is one of inflation - not price level - targeting, so that in the model a shock to inflation has a permanent effect on the price level. Since PPP holds in the long run of the model, the nominal exchange rate depreciates in the long run in response to an inflation shock, even though, on impact, it can appreciate in response to this shock. The empirical work in this paper examines point-sampled data on inflation announcements and the reaction of nominal exchange rates in 10-minute windows around these announcements for ten countries and several different inflation measures for the period July 2001 through March 2005. Eight of the countries in the study are inflation targeters, and two are not. In the data, the authors indeed find that bad news about inflation is good news for the nominal exchange rate, and that the results are statistically significant. They also find significant differences between the inflation-targeting countries and the two non-inflation-targeting countries. For the non-IT countries, there is no significant impact of inflation announcements on the nominal exchange rate, although the estimated sign is in line with the story here. For each of the IT countries, the sign is as predicted by the theory and quite significant.
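The event-study design behind this test can be sketched as a regression of the short-window exchange rate change on the announcement surprise. All numbers below (the true response, noise levels, and sample size) are illustrative assumptions:

```python
import random

random.seed(1)

# Inflation surprises: announced minus expected inflation.
surprises = [random.gauss(0, 0.2) for _ in range(500)]

# Under inflation targeting with a Taylor rule, a positive surprise raises
# expected policy rates, so the currency appreciates on impact (beta > 0).
beta_true = 0.5
appreciation = [beta_true * s + random.gauss(0, 0.1) for s in surprises]

# OLS slope = cov(surprise, appreciation) / var(surprise)
n = len(surprises)
ms, ma = sum(surprises) / n, sum(appreciation) / n
cov = sum((s - ms) * (a - ma) for s, a in zip(surprises, appreciation)) / n
var = sum((s - ms) ** 2 for s in surprises) / n
beta_hat = cov / var
print(f"estimated exchange rate response: {beta_hat:.3f}")
```

The sign of the estimated slope is the object of interest: a positive estimate ("bad news is good news") is the signature of an inflation-targeting regime, while a negative one matches the conventional depreciation story.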
Finally, Clarida and Waldman study two countries, the United Kingdom and Norway, in which there was a clear regime change during a period for which they have data. They study the granting of independence to the Bank of England in 1997 and the shift to formal inflation targeting by Norway in 2001. For both countries, the correlation between the exchange rate and the inflation surprise before the regime change reveals that "bad news about inflation was bad news about the exchange rate." After the regime change, they find, "bad news about inflation is good news about the exchange rate."
The current literature has provided a number of important insights about the effects of macroeconomic data releases on monetary policy expectations and asset prices. However, one puzzling aspect of that literature is that the estimated responses are quite small. Indeed, these studies typically find that the major economic releases, taken together, account for only a small amount of the variation in asset prices - even those closely tied to near-term policy expectations. Rigobon and Sack argue that this apparent detachment arises in part from the difficulties associated with measuring macroeconomic news. They propose two new econometric approaches that allow them to account for the noise in measured data surprises. Using these estimators, they find that asset prices and monetary policy expectations are much more responsive to incoming news than previously believed. Their results also clarify the set of facts that should be captured by any model attempting to understand the interactions between economic data, monetary policy, and asset prices.
Monacelli studies optimal monetary policy in an economy with nominal private debt, borrowing constraints, and price rigidity. Private debt reflects equilibrium trade between an impatient borrower, who faces an endogenous collateral constraint, and a patient saver, who engages in consumption smoothing. Since inflation can positively affect the borrower's net worth, monetary policy optimally balances the incentive to offset the price-stickiness distortion against the incentive to marginally relax the borrower's collateral constraint. He finds that the optimal volatility of inflation is increasing in three key parameters: 1) the borrower's weight in the planner's objective function; 2) the borrower's impatience rate; and 3) the degree of price flexibility. In general, however, deviations from price stability are small for a small degree of price stickiness. In a two-sector version of the model, in which durable price movements can directly affect the ability to borrow, the optimal volatility of (non-durable) inflation is more sizeable. In the setting studied here, and relative to simple Taylor rules, the Ramsey-optimal allocation entails a partial smoothing of real durable goods prices.
Piazzesi and Schneider consider asset pricing in a general equilibrium model in which some, but not all, agents suffer from inflation illusion. The model predicts that housing booms occur both when inflation is unusually high and when it is unusually low, which they also find in cross-country data. The key mechanism is that illusionary and smart investors disagree about the level of real interest rates, especially when inflation is far from its historical average. This disagreement stimulates borrowing and lending and drives up the price of collateral.
These papers will be published by the University of Chicago Press in an NBER conference volume. Its availability will be announced in a future issue of the NBER Reporter. They are also available at "Books in Progress" on the NBER's website.
While there is considerable empirical evidence on the impact of liberalizing trade in goods, the effects of services liberalization have not been established empirically. Arnold, Javorcik, and Mattoo examine the link between service sector reforms and the productivity of manufacturing industries that rely on service inputs. Their results, based on firm-level data from the Czech Republic for the period 1998-2003, show a positive relationship between service sector reform and the performance of domestic firms in downstream manufacturing sectors. When several aspects of services liberalization are considered - namely the presence of foreign providers, privatization, and the level of competition - they find that allowing foreign entry into service industries may be the key channel through which services liberalization contributes to improved performance of downstream manufacturing sectors. As most barriers to foreign investment today are not in goods but in services sectors, these findings may strengthen the argument for reform in this area.
A growing share of international trade occurs through intra-firm transactions: those between domestic and foreign subsidiaries of a multinational firm. The difficulties associated with writing and enforcing a vertical contract are compounded when a product must cross a national border, and this may explain the high rate of multinational trade across such borders. Hellerstein and Villas-Boas show that this common cross-border organization of the firm may have implications for the well-documented incomplete transmission of shocks across borders. They present new evidence of a positive relationship between an industry's share of multinational trade and its rate of exchange rate pass-through to prices. They then develop a structural econometric model with both manufacturers and retailers to quantify how firms' organization of their activities across borders affects their pass-through of a foreign cost shock. They apply the model to data from the auto market. Counterfactual experiments show why cross-border transmission may be much higher for a multinational transaction than for an arm's-length transaction. In the structural model, firms' pass-through of foreign cost shocks is on average 29 percentage points lower in arm's-length transactions than in multinational transactions, because the higher markups from a double optimization along the distribution chain create more opportunity for markup adjustment following a shock. Since arm's-length transactions account for about 60 percent of U.S. imports, this difference may explain up to 20 percent of the incomplete transmission of foreign-cost shocks to the United States in the aggregate.
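The order of magnitude of the aggregate claim can be checked with a back-of-the-envelope weighted-average calculation; this is a simplification of the paper's structural accounting, not a reproduction of it:

```python
# Figures quoted in the summary:
arms_length_share = 0.60   # share of U.S. imports at arm's length
passthrough_gap = 0.29     # pass-through shortfall of arm's-length trades

# Reduction in aggregate pass-through attributable to arm's-length trade,
# relative to a counterfactual in which all trade were multinational:
aggregate_effect = arms_length_share * passthrough_gap
print(f"aggregate pass-through reduction: {aggregate_effect:.3f}")  # 0.174
```

That is roughly 17 percentage points of aggregate pass-through; how this maps into "up to 20 percent of the incomplete transmission" depends on the overall size of the transmission gap, which the structural model supplies.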
Ederington and McCalman develop a model of international trade and industrial evolution. Evolution is driven by the endogenous technology choices of firms, which generate a rich industrial environment that includes the possibility of a dramatic shakeout. The likelihood, magnitude, and timing of this shakeout depend not only on the size of the innovation but also on the structure of production costs. In this setting, trade liberalization reduces the likelihood of a shakeout, resulting in a more stable industrial structure. However, when shakeouts arise in global markets, the distribution of firm exits can vary widely across countries. Furthermore, conditions exist under which a shakeout occurs in a closed economy but not in an open economy. The empirical evidence presented here is consistent with the prediction that more internationally integrated sectors are less likely to experience a shakeout.
Theoretical research has predicted three different effects of increased import competition on plant-level behavior: reduced domestic production and sales; improved average efficiency of plants; and increased exit of marginal firms. Conway uses detailed plant-level information available in the U.S. Census of Manufacturers and the Annual Survey of Manufacturers for the period 1983-2000 to decompose these effects. He derives the relative contribution of technology and import competition to the increase in productivity and to the decline in employment in textiles production in the United States in recent years. He then simulates the impact of removal of quota protection on the scale of operation of the average plant and on the incentive for plant closure.
Park and his coauthors analyze firm panel data to examine how export demand shocks associated with the 1997 Asian financial crisis affected Chinese exporters. They construct firm-specific exchange rate shocks based on the pre-crisis destinations of firms' exports. Because the shocks were unanticipated and large, they are an ideal instrument for identifying the impact of exporting on firm productivity and on other aspects of firm performance. The authors find that firms whose export destinations experience greater currency depreciation have slower growth in exports; export growth also increases firm productivity, as well as other measures of firm performance. Consistent with the "learning-by-exporting" hypothesis, greater exports increase the productivity of firms exporting to developed countries but not of firms exporting via Hong Kong or directly to poorer destinations.
Productivity growth in sectors that intensively use information technologies (IT) appears to have accelerated much faster in the United States than in Europe since 1995, leading to the U.S. "productivity miracle." If this was partly attributable to the superior management/organization of U.S. firms (rather than simply the U.S. geographical or regulatory environment), then we would expect to see a stronger association of productivity with IT for U.S. multinationals located in Europe than for other firms. Bloom and his coauthors examine a large panel of U.K. establishments and show that U.S.-owned establishments have a significantly higher productivity of IT capital than either non-U.S. multinationals or domestically owned establishments do. Indeed, the differential effect of IT appears to account for almost all of the difference in total factor productivity between U.S.-owned and all other establishments. This finding holds in the cross section, when fixed effects are included, and even when a sample of establishments taken over by U.S. multinationals (relative to takeovers by other multinationals and by domestic firms) is examined. The authors find that the U.S. multinational effect on IT is particularly strong in the sectors that intensively use information technologies (such as retail and wholesale), the very same industries that accounted for the U.S.-European productivity growth differential since the mid-1990s.
Multinational labor demand responds to wage differentials at the extensive margin, when a multinational enterprise (MNE) expands into foreign locations, and at the intensive margin, when an MNE operates existing affiliates across locations. Muendler and Becker derive conditions for parametric and nonparametric identification of an MNE model to infer elasticities of labor substitution at both margins, controlling for location selectivity. Prior studies have rarely found foreign wages or operations to affect employment. The strategy here detects salient adjustments at the extensive margin for German MNEs. With every one percent increase in German wages, German MNEs allocate 2,000 manufacturing jobs to Eastern Europe at the extensive margin and 4,000 jobs overall.
Chen and Swenson study Chinese trade between 1997 and 2003 to see how the presence of multinational firms affected the quality, frequency, and survival of new export transactions by private Chinese traders. By exploiting the richness of the data, which come with fine geographical and product detail, they show how own-industry multinational firm presence helped to stimulate new trade and to elevate the quality of those trades. In contrast, they find that greater concentrations of other-industry multinational activity were associated with less-favorable outcomes, as one would predict if multinational presence brought with it local factor market congestion or elevated levels of competition.