The United Kingdom experienced an information and communications technology (ICT) investment boom in the 1990s in parallel with the United States, but measured total factor productivity (TFP) there has decelerated rather than accelerated in recent years. Basu, Fernald, Oulton, and Srinivasan ask whether ICT can explain the divergent TFP performance in the two countries. Because ICT is a "general purpose technology," measured TFP should rise in ICT-using sectors (reflecting unobserved accumulation of intangible organizational capital, spillovers, or both), but only with a long lag. The authors indeed find that the acceleration in U.S. TFP after the mid-1990s was broad-based -- located primarily in ICT-using sectors rather than ICT-producing sectors -- and there is some evidence that the TFP acceleration was larger in industries that had increased their ICT share in the preceding 15 years (especially in the early 1980s). Furthermore, the TFP acceleration appears negatively correlated with increases in ICT usage in the late 1990s. In the United Kingdom, by contrast, the increase in ICT intensity came later than in the United States. Given the long lags, these results suggest, albeit tentatively, that the United Kingdom should see an acceleration in TFP over the next decade.
Krueger and Perri investigate the welfare consequences of the stark increase in wage and earnings inequality in the United States over the last 30 years. Their data come from the Consumer Expenditure Survey (CE), the only U.S. dataset that contains information on wages, hours worked, earnings, and consumption for a large cross section of U.S. households. In their sample, the cross-sectional variation in wages and disposable earnings has increased significantly, both within and between groups identified by education and sex. This trend potentially implies large welfare losses, especially for the poorest groups in the U.S. population. On the other hand, the dispersion in consumption and in hours worked has not increased significantly, suggesting that smoothing mechanisms (like credit markets) might have reduced these losses. To better quantify the welfare consequences of the recent changes in inequality, the authors estimate stochastic processes for income, consumption, and leisure that are consistent with observed cross-sectional variability and with one-year mobility patterns from the CE. They insert these estimates into a standard lifetime-utility framework to obtain estimates of the welfare losses. They find that, for commonly used specifications of the lifetime-utility function, the welfare losses for a substantial fraction of the U.S. population amount to between 2 and 3 percent of lifetime consumption.
Vissing-Jorgensen discusses the current state of the behavioral finance literature. She argues that more direct evidence on investors' actions and expectations would make existing theories more convincing to outsiders and would help sort among behavioral theories for a given asset pricing phenomenon. Furthermore, evidence on the dependence of a given bias on investor wealth/sophistication would be useful, first for determining whether the bias could be attributable to (fixed) information or transactions costs or is likely to require a behavioral explanation, and second for determining which biases are likely to be most important for asset prices. The author analyzes a novel dataset on investor expectations and actions obtained from UBS/Gallup. The data suggest that an investor's expectations about future market returns depend strongly on the investor's own investment experience, and expectational variables do affect portfolio choice. The dependence of beliefs on own experience remains strong for high-wealth investors, suggesting that information costs are not a likely explanation. She then reviews evidence on the dependence of a series of "irrational" investor behaviors on investor wealth and concludes that many such behaviors diminish substantially with wealth. As an example of how one may approach a calculation of the costs needed to explain a particular type of "irrational" behavior, she considers the size of the costs needed to explain why many households do not invest in the stock market.
Analyzing 50 years of inflation expectations data from several sources, Mankiw, Reis, and Wolfers document substantial disagreement among consumers and professional economists about expected future inflation. Moreover, this disagreement shows substantial variation through time, moving with inflation, the absolute value of the change in inflation, the output gap, and relative price variability. The authors argue that a satisfactory model of economic dynamics must speak to these important business cycle moments. Noting that most macroeconomic models do not endogenously generate disagreement, they show that a simple "sticky-information" model broadly matches these facts. Moreover, the sticky information model is consistent with other observed departures of inflation expectations from full rationality, including autocorrelated forecast errors and insufficient sensitivity to recent macroeconomic news.
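The sticky-information mechanism lends itself to a short simulation. In the sketch below (a toy illustration, not the authors' model), inflation follows an AR(1) process, a fraction `lam` of agents updates its information set each period, and disagreement is the cross-sectional standard deviation of one-step-ahead forecasts across information vintages; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, sigma, lam = 0.9, 0.5, 0.25   # AR(1) persistence, shock s.d., updating rate

# Simulate an AR(1) inflation path
T = 400
pi = np.zeros(T)
for t in range(1, T):
    pi[t] = rho * pi[t - 1] + sigma * rng.standard_normal()

def disagreement(t, max_lag=40):
    """Cross-sectional s.d. of one-step-ahead inflation forecasts.
    A fraction lam * (1 - lam)**j of agents last updated j periods ago
    and forecasts E[pi_{t+1} | pi_{t-j}] = rho**(j+1) * pi[t-j]."""
    j = np.arange(max_lag)
    w = lam * (1 - lam) ** j
    w = w / w.sum()                    # renormalize the truncated weights
    f = rho ** (j + 1) * pi[t - j]     # forecasts by information vintage
    mean = w @ f
    return np.sqrt(w @ (f - mean) ** 2)

d = np.array([disagreement(t) for t in range(50, T)])
dpi = np.abs(np.diff(pi))[49:T - 1]    # |change in inflation| at each date
corr = np.corrcoef(d, dpi)[0, 1]
print(f"mean disagreement: {d.mean():.3f}; corr with |change in inflation|: {corr:.2f}")
```

Because forecasts differ only through the vintage of the information they are based on, disagreement tends to widen after large inflation movements, which is the kind of comovement the authors document.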
While by now two substantial literatures seek to characterize optimal monetary and fiscal policy respectively, the two have developed largely in isolation, and apparently on contradictory foundations. The modern literature on dynamically optimal fiscal policy often abstracts from monetary aspects of the economy, and thus implicitly allows for no useful role for monetary policy. The literature on optimal monetary policy instead has been concerned mainly with quite distinct objectives for monetary stabilization policy, namely the minimization of the distortions that result from prices or wages that do not adjust quickly enough to clear markets. At the same time, this literature typically ignores the fiscal consequences of alternative monetary policies; the characterizations of optimal monetary policy thus obtained are strictly correct only for a world in which lump-sum taxes are available. Here Benigno and Woodford model price stickiness by assuming staggered pricing of the kind introduced by Calvo (1983). Perhaps more importantly, they obtain analytical results rather than purely numerical ones, by proposing a linear-quadratic approach to the characterization of optimal monetary and fiscal policy that allows them to nest both conventional analyses of optimal monetary policy and analyses of optimal tax-smoothing as special cases of their more general framework. They show how a linear-quadratic policy problem can be derived which yields a correct linear approximation to the optimal policy rules from the point of view of the maximization of expected discounted utility in a dynamic stochastic general-equilibrium model, building on their work for the case of optimal monetary policy when lump-sum taxes are available.
Finally, they do not content themselves with merely characterizing the optimal dynamic responses of their policy instruments (and other state variables) to shocks under an optimal policy, given one assumption or another about the nature and statistical properties of the exogenous disturbances to their model economy. Instead, they derive policy rules for the monetary and fiscal authorities. In particular, they seek to characterize optimal policy in terms of optimal targeting rules for monetary and fiscal policy. The rules are specified in terms of a target criterion for each authority; each authority commits itself to use its policy instrument in each period in whatever way is necessary in order to allow it to project an evolution of the economy consistent with its target criterion.
Fraga, Goldfajn, and Minella assess inflation targeting in emerging market economies (EMEs) and develop applied prescriptions for the conduct of monetary policy and inflation-targeting design in EMEs. They verify that EMEs have faced more acute trade-offs -- higher output and inflation volatility -- and worse performance than developed economies. These results stem from more pronounced external shocks, lower credibility, and lower levels of development of institutions in these countries. In order to improve their performance, the authors recommend high levels of transparency and communication with the public and the development of more stable institutions. At an operational level, the authors describe a procedure that a central bank under inflation targeting can apply and communicate when facing strong supply shocks, and present a monitoring structure for an inflation-targeting regime under an IMF program.
These papers will be published by the MIT Press as NBER Macroeconomics Annual, Volume 18. They will also be available at "Books in Progress" on the NBER website.
Trajtenberg seeks to analyze the nature of the terrorist threat following 9/11, and to explore the implications for defense R and D policy. First he reviews the defining trends of defense R and D since the cold war, and brings in pertinent empirical evidence: During the 1990s, the United States accumulated a defense R and D stock ten times larger than any other country's, and almost thirty times larger than Russia's. Big weapon systems, key during the cold war but of dubious significance since then, still figure prominently, commanding 30 percent of current defense R and D spending, vis-a-vis just about 13 percent for intelligence and antiterrorism. Trajtenberg then examines the nature of the terrorist threat, focusing on the role of uncertainty, the lack of deterrence, and the extent to which security against terrorism is (still) a public good. Drawing from a formal model of terrorism developed elsewhere, he explores these and related issues in detail. Two strategies to confront terrorism are considered: fighting terrorism at its source, and protecting individual targets, which entails a negative externality. Contrary to the traditional case of national defense, security against terrorism becomes a mixed private/public good. A key result of the model is that the government should spend enough on fighting terrorism at its source so as to nullify the incentives of private targets to invest in their own security. Intelligence emerges as the key aspect of the war against terrorism and, accordingly, R and D aimed at providing advanced technological means for intelligence is viewed as the cornerstone of defense R and D. This entails developing computerized sensory interfaces, and increasing the ability to analyze vast amounts of data. Both have direct civilian applications, and therefore the required R and D is mostly "dual use". Indeed, there is already a private market for these systems, with a large number of players. 
R and D programs designed to preserve this diversity and to encourage further competition may prove beneficial both for the required R and D, and for the economy at large.
Jaffe, Newell, and Stavins analyze the implications of the interaction of market failures associated with pollution and the environment with market failures associated with the development and the diffusion of new technology. These combined market failures imply a strong prima facie case for public policy intervention to foster environmentally beneficial technology. Both theory and empirical evidence suggest that the rate and direction of technological advance are influenced by incentives from the market and from regulation. Environmental policy based on incentive-based approaches is more likely to foster cost-effective technology innovation and diffusion than policy based on command and control approaches. In addition, society's investments in the development and diffusion of new environmentally beneficial technologies are very likely to be less than is socially desirable, in the presence of weak or nonexistent environmental policies that would otherwise foster such technology. Positive knowledge and adoption spillovers and information problems further weaken innovation incentives. While environmental technology policy is fraught with difficulties, a long-term view suggests a strategy of experimenting with different policy approaches, and systematically evaluating their success.
Shaw assesses the empirical evidence and policy issues associated with the human resources "revolution." While managers and practitioners have long emphasized the role of human resource practices, economists and policymakers only recently have begun to evaluate the impact of human resource policies on overall productivity growth. This paper suggests that advanced human resource practices (ranging from team-based problem-solving to incentive pay for training) have facilitated the strong productivity record experienced since the mid-1990s, both directly and as a complement to the intensive adoption of information technology. Two implications emerge from the analysis. First, the advantages to innovative human resource practices can only be realized when America's workforce possesses a strong human capital foundation. Second, although the private sector has invested intensively in advanced human resource practices, many of these investments have not been measured in a consistent way or expensed correctly as an accounting matter. The lack of standards by which to measure workplace organization implies that society finds it difficult to identify and diffuse productive practices as quickly as possible.
The recent surge in U.S. patenting and the expansion of patentable subject matter have increased patent office backlogs and raised concerns that in some cases patents of insufficient quality or with inadequate search of prior art are being issued. At the same time, patent litigation and its costs are rising. Hall, Graham, Harhoff, and Mowery explore the potential of a post-grant review process modeled on the European opposition system to improve patent quality, reveal overlooked prior art, and reduce subsequent litigation. The authors argue that the welfare gains to such a system may be substantial.
Paragraph IV of the Hatch-Waxman Act provides a mechanism for the litigation of pharmaceutical patent infringement disputes. Many of these cases have been settled with "reverse payments" by the brand to the generic in return for delayed generic entry. The FTC has contested a number of these settlements with good but not complete success. Bulow argues for per se illegality of settlements that include side payments or deals which are beneficial to the generic. Further, he shows a number of additional strategies beyond side payments, some highly questionable from an antitrust perspective, that brands have used to keep out generics.
These papers will appear in an annual volume published by the MIT Press. Its availability will be announced in a future issue of the Reporter. They can also be found at "Books in Progress" on the NBER's website.
Poterba, Rauh, Venti, and Wise develop a stochastic simulation algorithm to evaluate the effect of holding a broadly diversified portfolio of common stocks, or a portfolio of index bonds, on the distribution of 401(k) account balances at retirement. They compare the alternative distributions of retirement wealth by showing the empirical distribution of potential wealth values and by computing the expected utility of these outcomes under standard assumptions about the structure of household preferences. Their analysis highlights the critical role of other sources of wealth, such as Social Security, defined benefit pension annuities, and saving outside retirement plans, in determining the expected utility cost of holding equities in the retirement account. Their findings also demonstrate the importance of the equity premium in affecting investors' utility from different retirement asset allocations. Viewed from the beginning of a working career, and given the historical patterns of returns on stocks and bonds, a household that does not have extremely high risk aversion would achieve a higher expected utility by holding a portfolio of stocks rather than bonds.
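A stripped-down version of such a stochastic simulation can be sketched as follows; the return process, contribution, horizon, risk aversion, and the wealth "floor" standing in for Social Security and other outside wealth are all illustrative assumptions rather than the authors' calibration.

```python
import numpy as np

rng = np.random.default_rng(42)

years, n_sims, contrib = 35, 10_000, 5_000.0
g_stock = np.exp(rng.normal(0.05, 0.17, size=(n_sims, years)))  # lognormal gross equity returns
g_bond = np.full((n_sims, years), 1.02)                          # riskless index-bond gross return

def terminal_wealth(gross):
    """Accumulate a fixed annual contribution at the given gross returns."""
    w = np.zeros(gross.shape[0])
    for t in range(years):
        w = (w + contrib) * gross[:, t]
    return w

w_stock = terminal_wealth(g_stock)
w_bond = terminal_wealth(g_bond)

def crra_eu(w, gamma=3.0, floor=50_000.0):
    """Expected CRRA utility of retirement resources; 'floor' proxies for
    other wealth, such as Social Security, that cushions bad equity draws."""
    c = w + floor
    return np.mean(c ** (1.0 - gamma) / (1.0 - gamma))

print(f"median wealth  stocks: {np.median(w_stock):,.0f}  bonds: {np.median(w_bond):,.0f}")
print(f"EU (gamma=3)   stocks: {crra_eu(w_stock):.3e}  bonds: {crra_eu(w_bond):.3e}")
```

The floor matters: with little outside wealth, the worst equity draws are far more painful under CRRA utility, which is why other wealth sources play the critical role noted above.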
Default options have an enormous impact on household "choices." Defaults matter, because opting out of a default is costly and these costs change over time, generating an option value of waiting. In addition, people have a tendency to procrastinate. Choi, Laibson, Madrian, and Metrick develop a theory of optimal defaults based on these considerations. They find that it is sometimes optimal to set extreme defaults, which are far away from the mean optimal savings rate. A default that is far away from a consumer's optimal savings rate may make that consumer better off, because such a "bad" default will lead procrastinating consumers to opt out of the default more quickly. The authors calculate optimal defaults for employees at four different companies. Their work suggests that optimal defaults are likely to be at one of three savings rates: the minimum savings rate (that is, 0 percent); the match threshold (typically 5 percent or 6 percent); or the maximum savings rate.
Substantial recent evidence shows a reduction in disability among the elderly in the United States, on the order of 25 percent in the past two decades. The major issue raised by these findings is why disability has declined. Cutler investigates the role of intensive medical technologies in the decline in disability. Using data from the National Long-Term Care Survey, he documents that increased use of intensive procedures might be associated with some reduction in disability, but probably does not account for the majority of the decline.
MaCurdy reveals several valuable insights into the growth of Medicare expenditures in recent years. For example, 20-30 percent of total growth in Medicare program payments from 1989-99 arose from an increase in the participation rate; 50-60 percent from an increase in average program payments per service recipient; and the remainder from higher enrollment. In sharp contrast to the first half of the 1990s, total Medicare costs actually fell in the late 1990s. Whereas the lower percentiles of the expenditure distribution continued to increase throughout the period, expenditures actually fell for the highest-cost users of Medicare services. Published statistics extending beyond this sample period suggest that the reversal in the total growth of Medicare expenditures achieved in the late 1990s was only temporary; starting in 2000, the overall growth in Medicare expenditures reverted to its previous rate and may have even accelerated. Annual Medicare spending is also highly concentrated among a small segment of the beneficiary population, and shares of spending attributable to high-cost users have remained remarkably stable over the 1989-99 decade, even though growth rates have varied considerably during the period and across intensity of use. Those beneficiaries classified in the top 2 percent of the annual expenditure distribution alone account for about one quarter of total expenditures, and those in the top 5 percent cover almost half of annual expenditures. Considering spending by months only reinforces this picture of concentration. The top 2 percent of months with spending during a year account for around two-fifths of total annual expenditures, and the top 5 percent of months cover nearly two-thirds of yearly Medicare expenditures.
While high-cost episodes account for a large proportion of total spending during any year, the majority of elderly experience such episodes at some point over a decade, implying far less concentration in expenditures when viewed over lifetimes. Three-fifths of beneficiaries experience at least one 95-percentile month in the decade, and two-fifths realize one or more 98-percentile months. Knowledge of the incidence of 95-percentile months alone explains nearly two-thirds of Medicare spending over the decade, and spending accumulated for those in the 98-percentile months comprises almost four-fifths of total decade expenditures.
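Concentration shares of this kind are simple to compute once individual spending amounts are in hand. The sketch below uses an assumed lognormal distribution as a heavy-tailed stand-in for actual Medicare payments; the parameter values are illustrative, and only the top-share calculation itself is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Heavy-tailed stand-in for the annual Medicare spending distribution
# (assumed parameters, not estimates from the claims data)
spend = rng.lognormal(mean=6.0, sigma=1.5, size=100_000)

def top_share(x, pct):
    """Share of total spending accounted for by the top pct percent of users."""
    cutoff = np.percentile(x, 100 - pct)
    return x[x >= cutoff].sum() / x.sum()

for pct in (2, 5):
    print(f"top {pct}% of beneficiaries -> {top_share(spend, pct):.0%} of spending")
```

With this choice of tail parameter, the top 2 percent of the simulated distribution account for roughly a quarter of spending and the top 5 percent for nearly half, magnitudes in line with the summary above.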
Technological advances in health care have been shown to yield large health benefits for the U.S. elderly population. However, less is known about the marginal or incremental benefits of health care spending at a point in time. Skinner, Fisher, and Wennberg use Medicare claims data on 306 hospital referral regions in the United States to estimate how different dimensions of health care affect survival rates. They find that measures of effective care -- factors that have been well established to be efficacious in treatment -- are not associated with Medicare costs, but are associated with greater survival in the Medicare population after controlling for a variety of health status measures. By contrast, the authors find that a large component of Medicare expenditures -- $26 billion in 1996 dollars, or nearly 20 percent of total Medicare expenditures -- is associated with care for chronically ill patients and appears to provide no benefit in terms of survival, nor is it likely that this extra spending improves the quality of life. While secular trends in health care technology have delivered large health benefits, variation in health care intensity at a point in time has not.
Smith examines the consequences of new health events on a series of SES-related outcomes: out-of-pocket medical expenses, labor supply and labor force activity, household income, and wealth. For each of these outcomes, new severe health events have a significant effect, although most of the impact on income and wealth takes place through labor supply and not medical expenses. Smith also examines the ability of different measures of SES to predict the future onset of disease. He finds no predictive effect of income or wealth, but education does predict future onset, even after controlling for current health status.
Self-reported health status (SRHS) allows examination of how health status varies over the life course. The SRHS of both men and women deteriorates with age. In the bottom quartile of income, SRHS declines more rapidly with age, but only until retirement age. These and related facts motivate a study of the role of work, particularly manual work, in health decline with age. The Grossman capital-stock model of health implies that declines in health status are driven, not by the rate of deterioration of the health stock, but by the rate of increase of the rate of deterioration. Case and Deaton show instead that people in manual occupations have worse SRHS, and more rapidly declining SRHS, even with a comprehensive set of controls for income and education. They also find that much of the difference in SRHS across the income distribution is driven by health-related absence from the labor force, which is a mechanism running from health to income, not the reverse.
Jensen explores two themes relating to the well-being of widows in India. First, measuring individual welfare via household income or expenditure per person implicitly assumes that each person in the household receives an equal share. However, if some individuals systematically receive less, which is typically the case for women in upper caste households, this will be a misleading measure of individual welfare. Jensen finds that while upper caste widows have significantly higher levels of household expenditure per capita, there is no difference in average Body Mass Index, a summary measure of individual nutritional status, between upper and lower caste widows; wealthier, upper caste widows are no better-off in terms of nutritional status than poorer but more equally treated lower caste widows. Second, Jensen analyzes well-being more broadly through measures of practices and customs relating to widows. He finds that the status of widows is better in villages where women and the elderly are able to make larger economic contributions, in particular where less strength-intensive crops (such as rice) are grown. Following their husbands' deaths, widows in these areas are less likely to experience declines in treatment by the husband's family or others in the village, to lose control of land, or to feel unwelcome at social events.
Health, wealth, and where one lives are important, if not the three most important material living conditions. There are many mechanisms that suggest that living arrangements and well-being derived from health and economic status are closely related. Heiss, Hurd, and Börsch-Supan investigate the joint evolution of the three conditions, using a microeconometric approach similar to what is known as "vector autoregressions" in the macroeconomics literature.
Kapteyn and Panis analyze retirement saving and portfolio choice in the United States, Italy, and the Netherlands. While these countries enjoy roughly the same standard of living, they vary widely in their institutional organization of retirement income provisions. Building on extensions of the life cycle model, the authors derive hypotheses on the implications of institutional differences for wealth accumulation and portfolio composition: for example, the ratio of net worth to gross wealth should be highest in Italy; Dutch households should hold the lowest wealth levels at retirement; and the ownership of risky assets should be highest in the United States. The authors investigate these and other hypotheses at both the macro and micro level and find that the data are generally consistent with the hypotheses.
Germany is an interesting country for the study of saving among older households because nearly everyone -- whether in the middle income bracket or richer -- saves substantial amounts in old age. Only households in the lowest quarter of the income distribution spend more between the ages of 60 and 75 than they save. Börsch-Supan and Essig exploit newly collected data, the first wave of the so-called SAVE panel, specifically collected to understand economic, psychological, and sociological determinants of saving. Overall, they find extraordinarily stable savings patterns. More than 40 percent of German households regularly save a fixed amount. About 25 percent of German households plan their savings and have a clearly defined savings target in mind. Most German household saving is in the form of contractual saving, such as saving plans, whole life insurance, and building society contracts. This makes the flow of saving rather unresponsive to economic fluctuations, such as income shocks. Most households prefer to cut consumption when they cannot make ends meet. In particular, the elderly do not like to use credit cards, and they eschew debt. The authors suspect large cohort differences and will study them once further waves of the SAVE panel become available.
Testing life-cycle models and other economic models of saving and consumption at the micro level requires knowledge of individuals' subjective beliefs about their mortality risk. Previous studies have shown that individual responses regarding subjective survival probabilities are generally consistent with life tables. However, survey responses suffer serious problems caused by focal responses of zero and one. Gan, Hurd, and McFadden suggest using a Bayesian update model that accounts for the problems encountered in focal responses. They also propose models that help identify how much each individual's subjective belief deviates from the life table. The resulting individual subjective survival curves show considerable variation, predict observed survival experience better than life tables do, and are readily applicable in testing economic models that require individual subjective life expectancies.
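One simplified way to see the idea (a sketch, not the authors' estimator) is to write each individual's subjective survival curve as the life-table curve raised to a power theta, treat focal answers of 0 and 1 as censored, and shrink the implied theta back toward the life table (theta = 1) with a lognormal prior. All numerical values here are assumptions.

```python
import numpy as np

s_lt = 0.75   # assumed life-table 10-year survival for the respondent's age/sex cell

def posterior_theta(report, prior_sd=0.5, obs_sd=0.7, eps=0.02):
    """Posterior mean of the hazard scale theta, where the subjective
    survival curve is S_i = s_lt ** theta.  Focal answers of 0 or 1 are
    censored at eps / 1 - eps, and a lognormal prior centered on the
    life table (theta = 1) shrinks extreme responses back toward it."""
    p = np.clip(report, eps, 1.0 - eps)
    z = np.log(np.log(p) / np.log(s_lt))   # implied log-theta at face value
    g = np.linspace(-3.0, 3.0, 1201)       # grid over log-theta
    w = np.exp(-0.5 * (g / prior_sd) ** 2 - 0.5 * ((z - g) / obs_sd) ** 2)
    w /= w.sum()
    return float(np.exp(g) @ w)            # posterior mean of theta

for r in (0.0, 0.5, 0.75, 1.0):
    th = posterior_theta(r)
    print(f"report {r:.2f} -> theta {th:.2f}, subjective survival {s_lt ** th:.2f}")
```

Focal optimists (reports of 1) end up with subjective survival above the life table but well below 1, and focal pessimists (reports of 0) well above 0, which is the kind of smoothing a Bayesian update delivers.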
The proceedings of this conference will be published as an NBER conference volume by the University of Chicago Press. Some of the papers may also be found at "Books in Progress" on the NBER's website.
Kedia examines the option grants and option exercises of top executives of 224 firms that announce restatements of their financial results because of accounting irregularities between January 1997 and June 2002. He finds that firms that announce large negative restatements grant about 50 percent more stock options to their top executives in the years prior to the announcement than members of a size-and-industry-matched control group. Top executives of firms with large negative and large positive restatements also exercise significantly more options in the years prior to the announcement than members of a size-and-industry-matched control group. Higher option exercises appear to be concentrated among firms that are subsequently subject to SEC enforcement actions, while higher option grants are not concentrated in this group of extreme violators. Further, Kedia finds that the percentage of restating firms in the highest quintile, by option grants, is double that in the lowest quintile; this difference is significant at the 1 percent level.
Subramanian, Chakraborty, and Sheikh examine the relationship between the optimal incentive contract and the firm's decision to fire a manager for poor performance. They find that CEOs with steeper compensation contracts (that is, with greater incentives) are more likely to be fired following poor firm performance. Logit estimations indicate that, among poorly performing firms, a CEO receiving incentives at the 60th percentile level is roughly 10 percent more likely to be fired than a CEO with incentives at the 40th percentile. Also, the performance pressure was greater in the latter half (1997-9) of the sample than in the first (1993-6). Increased firing pressure might have been one of the factors contributing to the accounting shenanigans of the late 1990s.
McNeil, Niehaus, and Powers compare turnover of subsidiary managers inside conglomerate firms to turnover of CEOs of comparable stand-alone firms. The authors find that subsidiary manager turnover is significantly more sensitive to changes in performance and significantly more likely following poor performance than turnover of CEOs is. Further, for subsidiary managers, the relationship between turnover and performance is significantly stronger when the subsidiary operates in an industry that is related to the parent's primary industry. These results suggest that boards of directors are relatively ineffective disciplinarians of CEOs, and that, despite their other apparent failings, conglomerate firms have relatively strict disciplining mechanisms for subsidiary managers.
Pérez-González examines the impact of inherited control on firms' performance. He uses data from management successions where the departing CEO was a member of the controlling family of the corporation. He finds that firms where control is inherited undergo large declines in return on assets and market-to-book ratios that are not experienced by firms that promote CEOs not related to the controlling family. Consistent with wasteful nepotism, these declines are particularly prominent in firms that appoint family CEOs who did not attend a selective college. Overall, the results strongly suggest that nepotism hurts firms' performance by limiting the scope of labor market competition.
Adams and Ferreira analyze the consequences of the board's dual role as an advisor and a monitor of management. Because of this dual role, the manager faces a trade-off concerning the amount of information he discloses to the board. On the one hand, if he reveals his information, he gets better advice. On the other hand, the board may change its opinion of his ability on the basis of his information. The authors' model shows that the board may choose to pre-commit to reduce its monitoring of the manager in order to encourage the manager to share his information. Therefore, management-friendly boards may be optimal governance structures under certain circumstances. The authors discuss some evidence consistent with the new empirical implications of their theory. Using the insights from the model, they also analyze the differences between a sole board system, such as in the United States, and the dual board system, as in various countries in Europe.
Ryan and Wiggins examine the relationship between the structure of outside-director compensation and board-of-director independence. They consider the relationship between the structure of director compensation and board size, the percentage of outside directors on the board, CEO tenure, and CEO/Chair duality. The evidence indicates a positive relationship between characteristics associated with an independent board and director compensation that is tied to stock-price performance. If barriers to monitoring are prevalent in a firm, then director compensation provides weaker incentives. Directors of firms in which the CEO is entrenched receive the fewest financial incentives to monitor.
Aggarwal and Samwick consider the equilibrium relationships between incentives from compensation, investment, and firm performance. Using an optimal contracting model, the authors show that the relationship between firm performance and managerial incentives, in isolation, is insufficient to identify whether managers have private benefits of investment, as in theories of managerial entrenchment. The authors then estimate the joint relationships between incentives and firm performance and between incentives and investment and show that investment increases with incentives. Further, they find that firm performance increases with incentives at all levels. Taken together, these results are not consistent with theories of overinvestment based on managers having private benefits from investment but are consistent with managers having private costs of investment and, more generally, with models of underinvestment.
Black, Jang, and Kim report that corporate governance is an important factor in explaining the market value of Korean public companies. The authors construct a corporate governance index for 526 companies based primarily on responses to a Spring 2001 survey of all listed companies by the Korea Stock Exchange. The index is based on five subindexes for shareholder rights, board and committee structure, board and committee procedures, disclosure to investors, and ownership parity. In their ordinary least squares (OLS) regressions, a moderate 10-point increase in the corporate governance index predicts a 6 percent increase in Tobin's q and a 14 percent increase in market/book ratio. A worst-to-best change in the index predicts a 42 percent increase in Tobin's q and an 87 percent increase in market/book ratio. This effect is statistically strong and robust to the choice of performance variable (Tobin's q, market/book, and market/sales) and to the specification of the corporate governance index. Each subindex is a significant predictor of higher Tobin's q (and other performance variables). This value effect appears to exist primarily because investors value the same reported earnings more highly for a better-governed firm, rather than because better-governed firms generate higher reported earnings or pay higher dividends. Unique features of Korea's corporate governance rules allow the authors to partially address two alternative explanations for these results: signaling (firms signal high quality by adopting good governance rules) and endogeneity (firms with high Tobin's q choose good governance rules). The authors use both a two-stage (2SLS) and a three-stage (3SLS) least squares approach; the coefficients are larger than the OLS estimates and are highly significant. This is consistent with causation running from the exogenous component of corporate governance rules to higher Tobin's q (and other performance variables).
Does country transparency affect international portfolio investment? Gelos and Wei examine this and related questions using new measures of transparency and a unique micro dataset on international portfolio holdings of emerging market funds. They distinguish between government and corporate transparency. There is clear evidence that funds invest systematically less in less transparent countries. Herding among funds tends to be more prevalent in less transparent markets. Funds seem to react less strongly to macroeconomic news about opaque countries. There is also some evidence that during crises, funds flee non-transparent countries to a greater extent.
Using a dataset that provides unprecedented details on individual investors' stockholdings, Giannetti and Simonov analyze whether investors take corporate governance into account when they select stocks. After controlling for the supply effect via free float and other firm characteristics, the authors find that all categories of investors who generally enjoy only security benefits (domestic and foreign; institutional and small individual investors) are reluctant to invest in companies with bad corporate governance. In contrast, individuals who have strong connections with the local financial community, because they are board members or hold large blocks in at least some listed companies, behave differently. They do not care about the expected extraction of private benefits, or they even prefer to invest in firms where there is more room for it. The effect of corporate governance on portfolio decisions is more pronounced for small and medium-size companies. These findings shed new light on the determinants of investor behavior, and suggest that in order to understand portfolio choices it is important to distinguish between investors who enjoy private benefits or access to private information and investors who enjoy only security benefits.
Desai, Dyck, and Zingales analyze the previously unexplored relationship between corporate governance and corporate taxation. They show that a higher tax rate increases the level of managerial diversion, while stronger tax enforcement reduces it. They also show that when the corporate governance system is ineffective (that is, when it is easy to divert income), or when ownership concentration levels are high, an increase in the tax rate actually can reduce tax revenues, generating a corporate version of the Laffer curve. Finally, the authors show that an increase in tax enforcement can increase (rather than decrease) the stock market value of a company. They find that corporate tax rate changes have smaller (in fact, negative) effects on revenues when ownership is more concentrated and corporate governance is worse. This corporate governance role of corporate taxes provides a new rationale for the existence of a low, well-enforced tax at the corporate level.
Blundell, Pistaferri, and Preston use panel data on household consumption and income inequality to evaluate the degree of insurance against income shocks. They aim to describe the transmission of income inequality into consumption inequality. Their framework nests the special cases of self-insurance and the complete markets assumption. The authors assess the degree of insurance, over and above self-insurance through savings, by contrasting shifts in the distribution of income growth with shifts in the distribution of consumption growth, and analyzing the way these two measures of household welfare correlate over time. Combining panel data on income with consumption data, they find some partial insurance but reject the complete markets restriction. They also find a greater degree of insurance for transitory shocks and differences in the degree of insurance over time. Finally, they document the importance of durables and of taxes and transfers as a means of insurance.
Hornstein, Krusell, and Violante argue that certain ingredients are likely to be important for understanding how wage distributions and employment respond to technology. First, it matters whether there are labor market frictions. Matching frictions, and the associated departures from marginal-product wage determination, are important. Second, the form of technological change matters. Capital embodiment is a key characteristic of technological change. Third, how new capital is introduced into the economy -- which firms acquire it, and who decides on the purchase -- is important. Fourth, labor market institutions are highly relevant for both the wage distribution and unemployment, especially in analyzing the effects of technological change. Thus this paper analyzes a matching model with vintage capital and capital-embodied technological change. The authors consider the importance of where and how new capital enters, and contribute to the literature on "Europe-United States comparisons" by emphasizing how the institutional setting can matter for the effects of technology in general and for technological revolutions, such as the new "IT era," in particular.
A useful model of economic geography should determine not just how much economic activity occurs at a given location but also where that location is relative to others. In empirical studies, those latter features appear nowhere in measures of concentration or in cross-section and panel data regressions. Indeed, they are absent from many analytical models of economic geography. Quah and Simpson provide a new econometric method for studying clustering that explicitly takes such spatial relations into account, motivating the analysis with a theoretical model of a dynamically evolving spatial distribution. They then apply their techniques to geographically disaggregated data on the U.K. manufacturing sector.
Whereas a political fiscal cycle was once thought to be a phenomenon of less developed economies, some recent studies find such a cycle in a large cross section of both developed and developing countries. Brender and Drazen re-examine these empirical results and show that they are not robust to the choice of countries and time periods. The authors show that the finding of a political fiscal cycle is driven by the experience of "new democracies," in which fiscal manipulation may "work" because of a lack of experience with electoral politics or a lack of the information that voters in more established democracies use. The strong fiscal cycle in those countries accounts for the finding of a fiscal cycle in larger samples that include them; once these countries are removed from the larger sample, the political fiscal cycle disappears. These findings also reconcile two contradictory views of pre-electoral manipulation: one arguing that it is a useful instrument for gaining voter support and a widespread empirical phenomenon, the other that voters punish rather than reward fiscal manipulation.
Aoki and Nikolov evaluate the performance of three kinds of rule-based monetary policies under central-bank learning about the parameter values of a simple New Keynesian model. The central bank and the private sector learn the slopes of the IS and Phillips curves by recursive least squares estimation, and form expectations and set policy based on their estimated model. The three policies the authors evaluate are: 1) the optimal non-inertial rule; 2) the optimal history-dependent rule; and 3) the Wicksellian rule. They show that the Wicksellian rule delivers the highest welfare, improving the inflation-output gap variability trade-off without introducing undesirable feedback from past policy mistakes caused by imprecise parameter estimates.
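The recursive least squares estimation that the agents in such models use can be sketched as follows. This is a generic illustration of the updating algorithm, not the authors' code; the data-generating line in the example (an intercept of 2 and a slope of 0.5) is invented purely for demonstration.

```python
import numpy as np

def rls_update(theta, P, x, y, forgetting=1.0):
    """One recursive least squares update.

    theta : current coefficient estimate, shape (k,)
    P     : current inverse-moment ("precision") matrix, shape (k, k)
    x     : new regressor vector, shape (k,)
    y     : new scalar observation
    """
    Px = P @ x
    gain = Px / (forgetting + x @ Px)      # Kalman-style gain vector
    err = y - x @ theta                    # one-step forecast error
    theta = theta + gain * err             # revise beliefs toward the data
    P = (P - np.outer(gain, Px)) / forgetting
    return theta, P

# Agents recover the slope of a line y = 2 + 0.5*x from noisy observations.
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = np.eye(2) * 1000.0                     # diffuse initial uncertainty
for _ in range(5000):
    x = np.array([1.0, rng.normal()])
    y = 2.0 + 0.5 * x[1] + 0.1 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
print(theta)  # close to [2.0, 0.5]
```

A forgetting factor below one would make the agents discount old data, a common device in the learning literature for tracking drifting parameters.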
Stock and Watson find first that, although there has not been a general increase in international synchronization among G-7 business cycles, there have been important changes, in particular the emergence of two groups -- one consisting of Euro-zone countries and the other of English-speaking countries -- within which correlations have increased and across which correlations have decreased. Further, cyclical movements in the United Kingdom became less correlated with Euro-zone countries and more correlated with North American countries in the period they study. Second, common international shocks have been smaller in the 1980s and 1990s than they were in the 1960s and 1970s. This declining volatility of common G-7 shocks is the source of much of the observed moderation in individual country business cycles. Moreover, this moderation of common G-7 shocks is responsible, in a mechanical sense, for the failure of business cycles to become more synchronous as one might expect given the large increase in trade over this period: had world shocks been as large in the 1980s and 1990s as they were in the 1960s and 1970s, then international cyclical correlations would have increased considerably. Third, the Japanese experience in many ways is exceptional. For the other G-7 countries, volatility generally decreased or at least stayed constant in the 1990s, but it increased in Japan in the 1990s. During the 1980s and 1990s, cyclical fluctuations in Japanese GDP almost became detached from the other G-7 economies, with domestic shocks explaining almost all of the cyclical movements in Japanese GDP. Fourth, however measured, persistence of disturbances has increased in Canada, France, and the United Kingdom. In those countries, a shock of a given magnitude would result in more cyclical volatility today than 30 years ago.
Francis and Ramey investigate the sources of historical fluctuations in annual U.S. and U.K. data extending back to the nineteenth century. They use long-run identifying restrictions to decompose shocks into technology shocks and other shocks. For the U.S. data, they investigate a variety of models with differing auxiliary assumptions. In most models, the impact of technology shocks on labor input in the pre-WWII period is the opposite of its impact in the post-WWII period. The U.K. data show more sample stability, with the short-run impact of technology on labor being negative. The decomposition also reveals important changes in the volatility of shocks over time.
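Long-run identifying restrictions of this kind, in the spirit of Blanchard and Quah, can be sketched as below for a bivariate VAR(1). The function name and the particular restriction imposed (only the first structural shock has a permanent effect on the first variable) are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np

def long_run_identify(A, Sigma):
    """Identify structural shocks via a long-run restriction.

    A     : reduced-form VAR(1) coefficient matrix, shape (2, 2)
    Sigma : covariance matrix of reduced-form residuals, shape (2, 2)
    Returns S with u_t = S @ eps_t, such that only the first
    structural shock (the "technology" shock) has a long-run
    effect on the first variable.
    """
    C1 = np.linalg.inv(np.eye(2) - A)   # long-run multiplier of the VAR
    LR = C1 @ Sigma @ C1.T              # long-run covariance of the data
    D = np.linalg.cholesky(LR)          # lower-triangular long-run impact
    S = np.linalg.solve(C1, D)          # impact matrix of structural shocks
    return S

# Hypothetical reduced-form estimates, for illustration only.
A = np.array([[0.5, 0.1], [0.0, 0.3]])
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
S = long_run_identify(A, Sigma)
```

Because the long-run impact matrix `C1 @ S` is lower triangular by construction, the second shock is barred from having any permanent effect on the first variable, which is how "technology" and "other" shocks are told apart.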
The presence of growth options in the value of the firm generates variability in firm value that is not driven by current cash flows. This variability diminishes the correlation between investment and Tobin's Q, while maintaining a positive correlation between investment and cash flow. By simulating the model, Abel and Eberly show that there is also a high-frequency negative relation between investment and cash-flow shocks that can reverse these effects. However, time aggregation restores the weak relationship between investment and Q, and a strong positive association between investment and cash flow, consistent with that found in empirical studies. Growth options also generate excess volatility of firm value relative to cash flows.
These papers will be published in a special issue of the European Economic Review. Many of them are also available at "Books in Progress" on the NBER's website.