Berman and Laitin model the choice of tactics by rebels, bearing in mind that a successful suicide attack imposes the ultimate cost on the attacker and the organization. They first ask what a suicide attacker would have to believe to be deemed rational. They then embed the attacker and other operatives in a club-good model that emphasizes the function of voluntary religious organizations as providers of benign local public goods. The sacrifices that these groups demand make clubs well suited for organizing suicide attacks, a tactic in which defection by operatives (including the attacker) endangers the entire organization. The model also analyzes the choice of suicide attacks as a tactic, predicting that suicide will be used when targets are well protected and when damage is great. Those predictions are consistent with the patterns described above. The model has testable implications for tactic choice of terrorists and for damage achieved by different types of terrorists, which the authors find to be consistent with the data.
Feldman and Slemrod explore the relationship among war, government financing, and citizens' willingness to voluntarily comply with tax and other obligations because of social identity. Their motivating idea is that the willingness to voluntarily comply with obligations to the government may be a function of the perceived military threat to a country, and the willingness to pay in turn affects the marginal efficiency cost of raising resources, both via taxes and conscription. A model of the interactions generates predictions about the effect of the external threat on military spending, non-military spending, and the share of military resources raised via conscription, as well as predictions concerning the effect of wars and external threats on the willingness to voluntarily pay taxes. The authors test these predictions empirically using cross-country data from 1970 to the present on government finances, the Correlates of War Militarized Interstate Disputes dataset, and data on attitudes toward tax evasion and military service from the World Values Survey.
Most studies of war finance have focused on how belligerent powers funded hostilities with their own resources. The collapse of the Third Republic in 1940 left Berlin in control of a nearly equally powerful industrial economy. The resources extracted from France by the Nazis represent perhaps the largest international transfer in history. Occhino, Oosterlinck, and White assess the welfare costs of the policies that the French chose to fund payments to Germany, and of alternative plans, with a neoclassical growth model that incorporates essential features of the occupied economy and the postwar stabilization. Although the mix of taxes, bonds, and seigniorage employed by Vichy resembles the methods chosen by belligerents, the French economy sharply contracted. Vichy's postwar debt overhang would have required substantial budget surpluses; but inflation, which erupted after Liberation, reduced the debt well below its steady-state level and redistributed the adjustment costs. The Marshall Plan played only a minor direct role, and international credits helped to substantially lower the nation's burden.
Fisman, Fisman, Galef, and Khurana estimate the value of personal ties to Richard Cheney through three distinct approaches that have been used recently to measure the value of political connections. Their proxies for personal ties are based on corporate board linkages that are prevalent in the network sociology literature. They measure the value of these ties using three event studies: 1) market reaction of connected companies to news of Cheney's heart attacks; 2) correlation of the value of connected companies with probability of Bush victory in 2000; and 3) correlation of the value of connected companies with the probability of war in Iraq. In all cases, the value of ties to Cheney is precisely estimated as zero. The authors interpret this as evidence that U.S. institutions are effective in controlling rent seeking through personal ties with high-level government officials.
A general problem facing estimates of the elasticity of labor supply to a profession is that the wage is "endogenous": when a profession is particularly pleasant, the wage tends to be low but the supply of labor high. In studying the supply of recruits to the military, Gelber solves this problem by instrumenting for the endogenous military wage -- he uses a statutory formula that usually governs increases in the wage. Using Department of Defense administrative data on all 3.5 million enlistment contracts signed by recruits over 16 recent years, he estimates that elasticities of labor supply with respect to wages are quite high. Ordinary least squares regressions sometimes show a negative or insignificant impact of the military wage on enlistments, but instrumental variables regressions show a positive and significant effect. The high elasticities imply that enlarging the military would be substantially less costly than previous estimates have suggested.
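The identification problem Gelber confronts can be illustrated with a small simulation: when an unobserved amenity ("pleasantness") raises enlistments but depresses the wage, OLS understates the supply elasticity, while two-stage least squares using an exogenous wage-setting rule as an instrument recovers it. The variable names and numbers below are hypothetical, a minimal sketch of the 2SLS logic rather than the DoD data or Gelber's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical setting: an unobserved amenity raises enlistments directly
# but pushes the wage down, biasing OLS toward zero.
pleasant = rng.normal(size=n)
formula = rng.normal(size=n)            # statutory wage formula: the instrument
log_wage = 0.5 * formula - 0.4 * pleasant + rng.normal(scale=0.1, size=n)
log_enlist = 1.5 * log_wage + 1.0 * pleasant + rng.normal(scale=0.1, size=n)

def two_sls(y, x, z):
    """Two-stage least squares with one endogenous regressor and one instrument."""
    Z = np.column_stack([np.ones_like(z), z])
    # First stage: project the endogenous wage on the instrument.
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Second stage: regress the outcome on the fitted values.
    Xh = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(Xh, y, rcond=None)[0][1]

ols = np.linalg.lstsq(np.column_stack([np.ones(n), log_wage]), log_enlist,
                      rcond=None)[0][1]
iv = two_sls(log_enlist, log_wage, formula)
print(f"OLS elasticity: {ols:.2f}  (biased downward by the amenity)")
print(f"IV elasticity:  {iv:.2f}  (close to the true 1.5)")
```

In this simulation the OLS slope is pulled well below the true elasticity of 1.5, mirroring the pattern in the summary: negative or insignificant OLS estimates, positive and significant IV estimates.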
Bellows and Miguel study the aftermath of the brutal 1991-2002 Sierra Leone civil war. One notable aspect of their project is the availability of extensive household data on conflict experiences and local institutions (broadly defined) for Sierra Leone. They first confirm that there are no lingering effects of war violence on local socioeconomic conditions, a mere three years after the end of the civil war, in line with the existing war impact studies. They find that measures of local community mobilization and collective action - including the number of village meetings and the voter registration rate - are significantly higher in areas that experienced more war violence, conditional on extensive prewar and geographic controls. In other words, if anything, areas where there was greater violence against civilians during the recent war have arguably better local outcomes. These findings speak to the remarkable resilience of ordinary Sierra Leoneans. The authors view these results as complementary to the other recent studies of war, none of which examines local institutional or political economy impacts. These findings echo the claims of other observers of Sierra Leone (including Keen 2005 and Ferme 2002) who also argue that the war increased political awareness and mobilization and generated far-reaching institutional changes.
Becerra, Bohórquez, Johnson, Restrepo, Spagat, Suárez, and Zarama report a remarkable universality in the frequency of violence arising in two high-profile ongoing wars and in global terrorism. Their results suggest that these quite different conflict arenas currently feature a common type of enemy; that is, the various insurgent forces are beginning to operate in a similar way regardless of their underlying ideologies, motivations, and the terrain in which they operate. The authors provide a theory to explain their main observations, one that treats the insurgent forces as a generic, self-organizing network, dynamically evolving through the continual coalescence and fragmentation of attack units.
Dertouzas documents research methods, findings, and policy conclusions from a project analyzing human resource management options for improving recruiting production. He details research designed to develop new insights to help guide future recruiter-management policies. The research involves econometric analyses of three large and rich datasets. The first analysis compares the career paths of enlisted personnel, including recruiters. The second analyzes individual recruiter characteristics and links those characteristics with their productivity, controlling for a variety of independent factors. Finally, the research focuses on station-level recruiting outcomes, paying close attention to the management options that can affect recruiter production and effort. These empirical analyses demonstrate that various types of human resource management policies can be very helpful in meeting the Army's ambitious recruiting requirements. For example, the findings have implications for human resource policies in the areas of selecting soldiers for recruiting duty, assigning recruiters to stations, missioning to promote equity across recruiters, missioning to increase recruiter productivity, using promotions to motivate and reward recruiters, and screening out recruiters who are under-producing. Although the gains from any individual policy appear to be modest, the cumulative benefits of implementing multiple policies could save the Army over $50 million in recruiting resources on an annual basis. This work will interest those involved in the day-to-day management of recruiting resources as well as researchers and analysts engaged in analyses of military enlistment behavior.
Loughran describes research using a sample of Army and Air Force reservists activated in 2001 and 2002 for the Global War on Terrorism. It combines information on their civilian earnings from Social Security Administration (SSA) data for 2001 with information on military earnings from Department of Defense (DoD) administrative files to estimate the effect of activation on their earnings. This measure of military earnings includes pays, allowances, and an approximation to the value of the federal tax preference accorded military allowances and military pay received while serving in a combat zone. The results on earnings and activation reported in this document are early and subject to a number of important caveats, but the estimates do imply less prevalent and severe earnings losses among activated reservists than do estimates derived from DoD survey data.
Congress has put forward several proposals to increase the generosity of the retirement benefits payable to reservists. The proposals have the potential to affect reserve retention behavior, yet also could create cross effects on retention in the active-duty force. Hosek and his colleagues, Beth Asch and Daniel Clendenning, developed a dynamic programming model of active and reserve retention, estimated it on actual data, and used it to simulate the effects of the proposals. The most generous proposal, which starts retirement benefits as soon as the individual leaves the reserves after 20 or more years of active and reserve service, increased mid-career retention in the actives but also increased the outflow from the actives to the reserves. Reserve retention increased prior to 20 years but decreased afterwards, by so much that expected years of service declined on net. None of the congressional proposals was found to be cost-effective.
Simon and Warner model first-term enlisted attrition as the outcome of a process of learning about true tastes for service. Attrition occurs when recruits learn that their true tastes for service are sufficiently lower than their forecasted tastes as to render their gain to staying negative. Preference shocks might arise from different sources, but in this model they arise when youth are ill informed about the actual on-the-job effort requirement and (optimistically) understate this requirement prior to entry. Larger mistakes in forecasting the effort requirement lead to higher early attrition, but a steeper decline in attrition relative to the better-informed groups. The authors' empirical analysis provides evidence supporting this view of the attrition process. More educated groups, males, and non-whites are estimated to have lower, flatter attrition profiles, a result consistent with the model. The model also explains the empirical finding of lower and flatter attrition profiles for individuals who entered and remained longer in the Delayed Entry Program (DEP). This last result has important implications for current military manpower policy. The length and lethality of the second Iraq war has strained the existing force, the Army in particular, which along with the Marine Corps has borne the brunt of the conflict. As public support for the mission in Iraq has declined, the Army has missed its recruiting targets in recent months. In response, the Army has reduced the time that newly signed recruits spend in the DEP in order to place them in service more quickly. In addition to reducing the pipeline of future manpower supply, the empirical results here suggest that this change will also entail higher attrition in service. The Army recognizes the problem and has adjusted basic training to reduce attrition. It remains to be seen whether this adjustment in training policies will reduce attrition over the longer term.
Berrebi and Klor use score matching techniques and event study analysis to elucidate the impact of terrorism across different economic sectors. Using the Israeli-Palestinian conflict as a case study, they differentiate between Israeli companies that belong to the defense, security, or anti-terrorism related industries and other companies. Their findings show that, whereas terrorism has a significant negative impact on non-defense-related companies, the overall effect of terrorism on defense and security-related companies is significantly positive. Similarly, using panel data on countries' defense expenditures and imports from Israel, they find that terror fatalities in Israel have a positive effect on Israeli exports of defense products. These results suggest that the expectation of future high levels of terrorism has important implications for resource allocation across industries.
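The event-study logic here can be sketched in a few lines: estimate a market model over non-event days, then compare abnormal returns on attack dates for defense versus non-defense stocks. The returns, dates, and effect sizes below are simulated stand-ins, not the Israeli data, and the market-model specification is only one common way to compute abnormal returns.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 250                                   # trading days (hypothetical sample)
market = rng.normal(0.0005, 0.01, T)
events = np.array([40, 90, 140, 190, 230])  # stand-in attack dates

def simulate(beta, event_effect):
    """Generate a stock return series that jumps by event_effect on event days."""
    r = beta * market + rng.normal(0, 0.01, T)
    r[events] += event_effect
    return r

defense = simulate(1.0, +0.02)            # defense stock gains on event days
other = simulate(1.0, -0.02)              # non-defense stock loses

def mean_abnormal_return(r, events):
    """Market-model abnormal return, averaged over event days."""
    est = np.setdiff1d(np.arange(T), events)          # estimation window
    X = np.column_stack([np.ones(est.size), market[est]])
    alpha, beta = np.linalg.lstsq(X, r[est], rcond=None)[0]
    ar = r[events] - (alpha + beta * market[events])  # abnormal returns
    return ar.mean()

print(f"defense mean AR:     {mean_abnormal_return(defense, events):+.3f}")
print(f"non-defense mean AR: {mean_abnormal_return(other, events):+.3f}")
```

With opposite-signed event effects built in, the defense series shows positive average abnormal returns on attack dates and the non-defense series negative ones, mirroring the paper's contrast between sectors.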
Lakdawalla and Talley analyze the normative role for civil liability in aligning terrorism precaution incentives when the perpetrators of terrorism are unreachable by courts or regulators. The authors consider the strategic interaction among targets, subsidiary victims, and terrorists within a sequential, game-theoretic model. Their model reveals that, while an "optimal" liability regime indeed exists, its features appear at odds with conventional legal templates. For example, it frequently prescribes damages payments from seemingly unlikely defendants, directing them to seemingly unlikely plaintiffs. The challenge of introducing such a regime using existing tort law doctrines, therefore, is likely to be prohibitive. Instead, the authors argue, efficient precaution incentives may be best provided by alternative policy mechanisms, such as a mutual public insurance pool for potential targets of terrorism, coupled with direct compensation to victims of terrorist attacks.
Kosová and Lafontaine analyze the survival and growth of franchised chains using an unbalanced panel data set that covers about 1000 franchised chains each year from 1980 to 2001. The empirical literature on firm survival and growth has focused almost exclusively on manufacturing. This analysis allows the authors to explore whether chain age and size have the same effect on the survival and growth of retail and service chains as firm and establishment age and size have been found to have on survival and growth in manufacturing. In addition, while the researchers focus on the effect of age and size as the prior literature has done, their large and long panel data set allows them to control for the first time for chain-specific effects as well as for other chain characteristics that might affect chain survival and growth. They find that controlling for chain-level unobserved heterogeneity is statistically warranted, and affects the conclusions they reach on the effect of chain age and size in their regressions. They also find that other chain characteristics affect the survival and growth of individual chains. Finally, their long panel allows them to examine a subsample of mature chains, for which they find that age and size no longer affect exit. However, they find that chain size continues to have a negative effect on chain growth, a result that implies that chains converge in size to chain-specific levels.
Cockburn and Wagner examine the effect of patenting on the survival prospects of 356 internet-related firms that made an initial public offering on the NASDAQ at the height of the stock market bubble of the late 1990s. By March 2005, almost two thirds of these firms had delisted from the exchange. Changes in the legal environment in the United States in the 1990s made it much easier to obtain patents on software, and ultimately, on business methods, although less than half of the firms in the sample obtained, or attempted to obtain, patents. For those that did, the authors hypothesize that patents conferred competitive advantages that translated into higher probability of survival, although they may also simply have been a signal of firm quality. Controlling for other determinants of firm survival, such as age, venture-capital backing, financial characteristics, and stock market conditions, patenting is positively associated with survival. Quite different processes appear to govern exit via acquisition compared to exit via delisting from the exchange because of business failure. Firms that applied for more patents were less likely to be acquired, although if they obtained unusually highly cited patents, they might be a more attractive acquisition target. These findings do not hold true for business method patents, which do not appear to confer a survival advantage.
Blanchflower and Wainwright find that despite the existence of various affirmative action programs designed to improve the position of women and minorities in public construction, little has changed in the last 25 years. They show that where race-conscious affirmative action programs exist, they appear to generate significant improvements: when these programs are removed or replaced with race-neutral programs, the utilization of minorities and women in public construction declines rapidly. They also show that the programs have not helped minorities to become self-employed or to raise their earnings over the period 1979-2004, using data from the Current Population Survey and the Census, but have improved the position of white females. There has been a growth in incorporated self-employment rates of white women in construction such that currently their rate is significantly higher than that of white men. The data are suggestive of the possibility that some of these companies are "fronts" which are actually run by their white male spouses or sons to take advantage of the affirmative action programs.
Schoar uses information on Chapter 11 filings for almost 5000 private companies across five district courts in the United States between 1989 and 2003. For each case, she codes the entire docket, in particular all of the decisions that the judge made during a Chapter 11 process. She first establishes that while there are some significant differences across districts in the types of firms that file for Chapter 11, within districts, cases appear to be assigned randomly to judges. She then estimates judge-specific fixed effects to analyze whether judges differ systematically in their Chapter 11 rulings. She finds very strong and economically significant differences across judges in their propensity to grant or deny specific motions. Some judges appear to rule persistently more favorably towards allowing the use of cash collateral, lifting the automatic stay, or conversion of cases into other chapters, such as Chapter 7. Next, she uses the estimated judge fixed effects as instruments, exploiting exogenous variation across judges in the propensity to grant a specific motion. She shows that the use of cash collateral and the extension of the exclusivity period increase a firm's likelihood of re-filing for bankruptcy. Finally, based on the judge fixed effects, she also creates an aggregate index to measure the pro-debtor (pro-creditor) friendliness of the judges. She provides suggestive evidence that a pro-management bias leads to increased rates of re-filing and lower post-bankruptcy credit ratings.
Hurst and Lusardi (2004) recently challenged the long-standing belief that liquidity constraints are important causal determinants of entry into self-employment. They demonstrated that the oft-cited positive relationship between entry rates and assets is actually flat: entry rates are unchanging as assets increase from the first to the 95th percentile of the asset distribution, but rise drastically after that point. They also applied a new instrument, unanticipated changes in house prices, for wealth in the entry equation, and showed that instrumented wealth is not a significant determinant of entry. Fairlie and Krashinsky reinterpret these findings: first, they demonstrate that bifurcating the sample into workers who enter self-employment after job loss and those who do not reveals steadily increasing entry rates as assets increase in both subsamples. They argue that these two groups merit a separate analysis, because a careful examination of the entrepreneurial choice model of Evans and Jovanovic (1989) reveals that the two groups face different incentives, and thus have different solutions to the entrepreneurial decision. Second, they use microdata from matched Current Population Surveys (1993-2004) to demonstrate that unanticipated housing appreciation measured at the MSA-level is a significantly positive determinant of entry into self-employment. In addition, they perform a duration analysis to demonstrate that pre-entry assets are an important determinant of entrepreneurial longevity.
Jovanovic and Szentes model the market for venture capital. VCs have the expertise to assess the profitability of projects, and have liquidity to finance them. The scarcity of VCs enables them to internalize their social value, so that the competitive equilibrium is socially optimal. This optimality obtains on an open set of parameter values. The scarcity of VCs also leads to an equilibrium return on venture capital higher than the market rate, but the preliminary estimates here show this excess return to be negligible. The ability to earn higher returns makes VCs less patient when waiting for a project to succeed; this explains why companies backed by venture capitalists reach IPOs earlier than other start-ups and why they are worth more at IPO.
NBER Director John Lipsky to the IMF
NBER Director-at-Large John Lipsky, who was elected to the Board in 1998 and became a member of the Executive Committee in 2002, will become first deputy managing director of the International Monetary Fund on September 1. Lipsky succeeds Anne O. Krueger, an NBER Research Associate and international economist, in that position. Lipsky is currently vice chairman of J.P. Morgan Chase & Company, but worked at the IMF earlier in his career, from 1974 to 1984. In 1984 he joined Salomon Brothers, where he spent 13 years. He then moved to Chase Manhattan Bank, serving as chief economist for three years. He was appointed chief economist of J.P. Morgan Chase when the two banks merged.
NBER Researcher to Head Philadelphia Fed
NBER Research Associate Charles I. Plosser, a professor of economics and former dean of the William E. Simon Graduate School of Business Administration, University of Rochester, has been named president of the Federal Reserve Bank of Philadelphia. He takes office on August 1. Plosser had been a member of the NBER's Program on Economic Fluctuations and Growth.
Over the past century, the labor force participation rate of women has increased dramatically. Hellerstein and Sandler examine one potential ramification of this, namely whether the transmission of occupation-specific skills between fathers and daughters has increased. They develop a model of intergenerational human capital investment in which increased labor force participation by women gives fathers more incentives to invest in daughters' skills that are specific to the fathers' occupations. As a result, daughters are more likely to enter the labor market and to take up their fathers' occupations. Testing whether the transmission of occupation-specific skills between fathers and daughters has increased is confounded by the fact that occupational upgrading of women alone will generate an increased probability over time that women work in their fathers' occupations. The authors show that, under basic assumptions of assortative mating, a comparison of the rates of change over time in the probability that a woman enters her father's occupation relative to her father-in-law's occupation can be used to test whether there has been increased transmission of occupation-specific human capital. Using data for the birth cohorts of 1909-77 containing information on women's occupations and the occupations of their fathers and fathers-in-law, Hellerstein and Sandler demonstrate an increase in occupation-specific transmission between fathers and daughters. They show that this is a phenomenon unique to women, as it should be if it is a response to rising female labor force participation rates. The magnitude of the shift in women working in their fathers' occupations that results from increased transmission is large - about 20 percent of the total increase in the probability a woman enters her father's occupation over the sample period - and this is an estimate that they argue is likely a lower bound.
Heckman, Stixrud, and Urzua establish that a low-dimensional vector of cognitive and noncognitive skills explains a variety of labor market and behavioral outcomes. For many dimensions of social performance, cognitive and noncognitive skills are equally important. Their analysis addresses the problems of measurement error, imperfect proxies, and reverse causality that plague conventional studies of cognitive and noncognitive skills that regress earnings (and other outcomes) on proxies for skills. Noncognitive skills strongly influence schooling decisions, and also affect wages given schooling decisions. Schooling, employment, work experience, and choice of occupation are affected by latent noncognitive and cognitive skills. These authors study a variety of correlated risky behaviors, such as teenage pregnancy and marriage, smoking, marijuana use, and participation in illegal activities. They find that the same low-dimensional vector of abilities that explains schooling choices, wages, employment, work experience, and choice of occupation explains these behavioral outcomes.
Ahee and Malmendier argue that individual biases inducing overpayment are exacerbated in auctions. If consumers are heterogeneous in their ability to identify the lowest-price item of a given quality, then the auction mechanism will systematically select as winners those consumers whose estimate is most biased upward. Using a novel dataset on eBay auctions of a popular board game, the authors find that buyers neglect lower prices once they have started bidding. In 51 percent of all auctions, the price is higher than the "buy-it-now" price at which the same good is available for immediate purchase from the same website. However, only 12 percent of bidders systematically overbid. The authors also find that prices are more likely to be above the buy-it-now price in longer auctions, auctions with more bids, and if the seller's item description explicitly mentions the (higher) retail price of the manufacturer. Experience does not diminish the suboptimal bidding behavior. Instead, high experience is correlated with more distortion, such as higher bidding in auctions where the manufacturer's price is mentioned. The latter result suggests that overbidding reflects individual biases rather than search cost or other standard explanations for suboptimal purchase decisions.
Weinberg studies how proximity and vintage are related to innovation, using evidence from the human capital revolution in labor economics. He finds a strong effect of geography on the probability of making a contribution and on the nature of the contribution. Contributors to the human capital paradigm are significantly more likely to have studied at the University of Chicago or Columbia University and to have been in graduate school in the early years of the human capital revolution, earning their doctorates during the mid-1960s. These results also indicate that a small number of contributors played a large role in the development of human capital, especially at the beginning.
Theory predicts that mandated employment protections may reduce productivity by distorting production choices. Firms facing (non-Coasean) worker dismissal costs will curtail hiring below efficient levels and retain unproductive workers, both of which should affect productivity. Autor, Kerr, and Kugler use the adoption of wrongful-discharge protections by U.S. state courts over the last three decades to evaluate the link between dismissal costs and productivity. Drawing on establishment-level data from the Annual Survey of Manufacturers and the Longitudinal Business Database, they find that wrongful-discharge protections significantly reduce employment flows. Moreover, analysis of plant-level data provides evidence of capital deepening and a decline in total factor productivity following the introduction of wrongful-discharge protections. This last result is potentially quite important, suggesting that mandated employment protections reduce productive efficiency, as theory would suggest. However, the analysis also presents some puzzles including, most significantly, evidence of strong employment growth following adoption of dismissal protections. In light of these puzzles, the authors read their findings as suggestive but tentative.
Published macroeconomic data traditionally exclude most intangible investment from measured GDP. This situation is beginning to change, but the estimates here suggest that as much as $800 billion is still excluded from U.S. published data (as of 2003), and that this leads to the exclusion of more than $3 trillion of business intangible capital stock. To assess the importance of this omission, co-authors Corrado, Hulten, and Sichel add intangible capital to the standard sources-of-growth framework used by the BLS, and find that the inclusion of their list of intangible assets makes a significant difference in the observed patterns of U.S. economic growth. The rate of change of output per worker increases more rapidly when intangibles are counted as capital, and capital deepening becomes the unambiguously dominant source of growth in labor productivity. The role of multifactor productivity is correspondingly diminished, and labor's income share is found to have decreased significantly over the last 50 years.
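The mechanics of the sources-of-growth framework can be shown with stylized arithmetic: labor productivity growth is split into capital deepening (income-share-weighted growth in capital per worker) plus a multifactor productivity (MFP) residual, and adding a fast-growing intangible stock shifts weight from the residual to deepening. The growth rates and income shares below are illustrative placeholders, not the Corrado-Hulten-Sichel estimates.

```python
def decompose(g_lp, capital_shares, g_capital_per_worker):
    """Split labor productivity growth into capital deepening and an MFP residual."""
    deepening = sum(s * g for s, g in zip(capital_shares, g_capital_per_worker))
    return deepening, g_lp - deepening

# Tangibles only: one capital stock with a 30% income share.
deep0, mfp0 = decompose(g_lp=0.022, capital_shares=[0.30],
                        g_capital_per_worker=[0.035])

# Counting intangibles as capital: measured output (hence g_lp) rises,
# and a fast-growing intangible stock earns part of the income share.
deep1, mfp1 = decompose(g_lp=0.025, capital_shares=[0.28, 0.12],
                        g_capital_per_worker=[0.035, 0.060])

print(f"tangibles only  : deepening {deep0:.4f}, MFP {mfp0:.4f}")
print(f"with intangibles: deepening {deep1:.4f}, MFP {mfp1:.4f}")
```

In this stylized example, deepening overtakes the MFP residual once intangibles are capitalized, which is the qualitative pattern the summary describes.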
Eslava, Haltiwanger, Kugler, and Kugler analyze employment and capital adjustments using a panel of plants from Colombia. They allow for nonlinear adjustment of employment to reflect not only adjustment costs of labor but also adjustment costs of capital, and vice versa. Using data from the Annual Manufacturing Survey, which include plant-level prices, they generate measures of plant-level productivity, demand shocks, and cost shocks, and use them to measure desired factor levels. They then estimate adjustment functions for capital and labor as a function of the gap between desired and actual factor levels. As in other countries, they find non-linear adjustments in employment and capital in response to market fundamentals. In addition, they find that employment and capital adjustments reinforce each other, in that capital shortages reduce hiring and labor shortages reduce investment. Moreover, they find that the market-oriented reforms introduced in Colombia after 1990 increased employment adjustments, especially on the job destruction margin, while reducing capital adjustments. Finally, they find that while completely eliminating frictions from factor adjustments would yield a dramatic increase in aggregate productivity through improved allocative efficiency, the reforms introduced in Colombia generated relatively modest improvements.
Using a matched employer-employee data set of manufacturing plants in three sub-Saharan countries, Van Biesebroeck compares the marginal productivity of different categories of workers with the wages they earn. In each country, he observes approximately 135 firms and an average of 5.5 employees per firm. Under certain conditions, the wage premiums for worker characteristics should equal the productivity benefits associated with them. He finds that equality holds strongly in Zimbabwe (the most developed country in the sample), but not at all for Tanzania (the least developed country). The results for Kenya are intermediate. Differences between wage and productivity premiums are most pronounced for characteristics that are clearly related to human capital, such as schooling, training, experience, and tenure. Moreover, where the wage premium differs from the productivity benefit, general human capital tends to receive a wage return that exceeds the productivity return, and the reverse holds for more specific human capital investments. Schooling tends to be over-rewarded, even though most of the productivity benefit comes from job training. Wages tend to rise with experience, even though productivity is mostly increasing in tenure. Sampling errors, nonlinear effects, and non-wage benefits are rejected as explanations for the gap between wage and productivity effects. Localized labor markets and imperfect substitutability of different worker-types provide a partial explanation.
Most industries go through a "shakeout" phase during which the number of producers in the industry declines. Industry output generally continues to rise, though, which implies a reallocation of capacity from exiting firms to incumbents and new entrants. Thus, shakeouts seem to be classic creative-destruction episodes. Shakeouts of firms tend to occur sooner in industries where technological progress is more rapid, a pattern that existing models do not explain. The relationship does emerge, however, in a vintage-capital model in which shakeouts of firms accompany the replacement of capital, and in which a shakeout is the first replacement echo of the capital created when the industry is born. Jovanovic and Tse fit the model, with some success, to the Gort-Klepper data.
Do adult children influence the care that elderly parents provide for each other? Pezzin, Pollak, and Schone develop two models in which the anticipated behavior of adult children provides incentives for elderly parents to increase care for their disabled spouses. The "demonstration effect" assumes that children learn from a parent's example that family caregiving is appropriate behavior. For the "punishment effect," if the nondisabled spouse fails to provide spousal care, then children may respond by not providing future care for the nondisabled spouse when necessary. Joint children act as a commitment mechanism, increasing the probability that elderly spouses will provide care; stepchildren may provide weaker incentives for spousal care. Using data from the Health and Retirement Study, the authors find some evidence that spouses provide more care when they have children with strong parental attachment.
Bleakley considers the malaria-eradication campaigns in the United States (circa 1920), and in Brazil, Colombia, and Mexico (circa 1955), with a specific goal of measuring how much childhood exposure to malaria depresses labor productivity. These eradication campaigns happened because of advances in medical and public-health knowledge, which mitigates concerns about reverse causality of the timing of eradication efforts. Bleakley collects data from regional malaria eradication programs and collates them with publicly available census data. Malarious areas saw large drops in their malaria incidence following the campaign. In both absolute terms and relative to those in non-malarious areas, the cohorts born after eradication had higher income as adults than the preceding generation. Similar increases in literacy and the returns to schooling also occur. The results for years of schooling are mixed, though.
Troesken and Clay confirm that deprivation early in life can have lingering physiological effects. In particular, their results suggest that crowded housing conditions in early life facilitate the spread of tuberculosis, which in turn, increases the risk of cancer and stroke later in life. In the typical city, eradicating tuberculosis in 1900 would have reduced the death rates from cancer and stroke in 1915 by 32 percent. Similarly, drinking impure water in early life raises the likelihood that one will be infected with typhoid fever, which in turn, increases the risk of heart and kidney disease later in life. In the typical city, eradicating typhoid fever in 1900 would have reduced the death rate from heart disease by 21 percent, and the death rate from kidney disease by 23 percent. These results are obtained after the authors include controls for the contemporaneous disease environment and lagged values of the dependent variable and the overall disease environment.
Lee investigates the patterns of socioeconomic differences in wartime morbidity and mortality of black Union Army soldiers, and compares them with white recruits. Light-skinned soldiers, former slaves who had been engaged in non-field occupations, men from large plantations, and enlistees from urban areas were less likely to contract diseases and/or to die from disease while in service than, respectively, dark-skinned soldiers, field hands, men from small farms, and enlistees from rural areas. Patterns of disease-specific mortality and timing of death suggest that differences in the development of immunity against diseases and in nutritional status prior to enlistment are responsible for the observed mortality differentials. The patterns of wartime mortality of black and white soldiers are generally similar, but the relative effects of the two factors were somewhat different by race. It appears that the health of white recruits was more strongly influenced by the disease environment they were exposed to prior to enlistment. For black soldiers, on the other hand, socioeconomic status, a proxy for nutritional status and general economic wellbeing, was perhaps a more powerful determinant of health. Lee suggests that the larger occupational differences in wartime mortality among blacks could reflect the differences in health and living conditions of blacks and whites prior to enlistment. The stronger health effect of prior residence in urban areas among whites could be explained by the differences in prior exposure to disease between blacks and whites.
Three of the most important recent facts in global macroeconomics - the sustained rise in the U.S. current account deficit, the stubborn decline in long-run real rates, and the rise in the share of U.S. assets in global portfolios - appear as anomalies from the perspective of conventional wisdom and models. Caballero, Farhi, and Gourinchas provide a model that rationalizes these facts as an equilibrium outcome of two observed forces: 1) potential growth differentials among different regions of the world and 2) heterogeneity in these regions' capacity to generate financial assets from real investments. In extensions of the basic model, they also generate exchange rate and gross flows patterns that are broadly consistent with the recent trends observed in these variables. Contrary to the conventional wisdom, in the absence of a large change in the two forces, the model does not augur any catastrophic event. More generally, the framework is flexible enough to shed light on a range of scenarios in a global equilibrium environment.
Rose and Spiegel analyze the causes and consequences of offshore financial centers (OFCs). Since OFCs are likely to facilitate tax evasion and money laundering, they encourage bad behavior in source countries. Nevertheless, OFCs may also have unintended positive consequences for their neighbors, since they act as a competitive fringe for the domestic banking sector. The authors derive and simulate a model of a home-country monopoly bank facing a representative competitive OFC that offers tax advantages attained by moving assets offshore at a cost that increases with the distance between the OFC and the source country. The model predicts that proximity to an OFC is likely to have pro-competitive implications for the domestic banking sector, although the overall effect on welfare is ambiguous. Rose and Spiegel test and confirm these predictions empirically: OFC proximity is associated with a more competitive domestic banking system and greater overall financial depth.
Ranciere, Tornell, and Westermann document the fact that countries that have experienced occasional financial crises have, on average, grown faster than countries with stable financial conditions. The authors measure the incidence of crisis using the skewness of credit growth, and find that it has a robust negative effect on GDP growth. This link coexists with the negative link between variance and growth typically found in the literature. To explain the link between crises and growth, the authors present a model in which contract enforceability problems generate financial constraints and low growth. Systemic risk-taking relaxes borrowing constraints and increases investment. This leads to higher long-run growth, but also to a greater incidence of crises. The authors find that the negative link between skewness and growth emerges under similar restrictions in the model and in the data.
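The crisis-incidence measure described above is concrete enough to sketch: the skewness of a country's credit-growth series turns sharply negative when growth is usually moderate but occasionally collapses. A minimal illustration in Python, with invented data (not the authors' dataset):

```python
# Hypothetical sketch: skewness of credit growth as a crisis-incidence proxy.
def skewness(xs):
    """Sample skewness: third central moment over the cubed standard deviation."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# A crisis-prone path: mostly moderate growth with one rare sharp contraction.
credit_growth = [0.08, 0.10, 0.09, 0.12, 0.07, -0.35, 0.11, 0.09]
print(round(skewness(credit_growth), 2))  # negative: rare sharp contractions
```

A symmetric series yields skewness near zero, so in this framing only the countries with occasional abrupt credit contractions register as crisis-prone.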
It has been observed that more open countries experience higher output growth volatility. DiGiovanni and Levchenko use an industry-level panel dataset of manufacturing production and trade to analyze the mechanisms through which trade can affect the volatility of production. They find that sectors with higher trade are more volatile and that trade leads to increased specialization. These two forces act to increase overall volatility. They also find that sectors that are more open to trade are less correlated with the rest of the economy, an effect that acts to reduce aggregate volatility. The point estimates indicate that each of the three effects has an appreciable impact on aggregate volatility. Added together they imply that a single standard deviation change in trade openness is associated with an increase in aggregate volatility of about 15 percent of the mean volatility observed in the data. The results are also used to provide estimates of the welfare cost of increased volatility under several sets of assumptions. The authors then propose a summary measure of the riskiness of a country's pattern of export specialization, and analyze its features across countries and over time. There is a great deal of variation in countries' risk content of exports, but it does not have a simple relationship to the level of income or other country characteristics.
Standard theory shows that sterilized foreign exchange interventions do not affect equilibrium prices and quantities, and that domestic and foreign currency-denominated bonds are perfect substitutes. Kumhof and Van Nieuwerburgh show that when fiscal policy is not sufficiently flexible in response to spending shocks, exchange rates must adjust to restore budget balance. This exchange rate adjustment generates a capital gain or loss for holders of domestic currency denominated bonds and causes perfect substitutability to break down. Because of imperfect asset substitutability, uncovered interest rate parity no longer holds. Government balance sheet operations can be used as an independent policy instrument to target interest rates. Sterilized foreign exchange interventions should be most effective in developing countries, where fiscal volatility is large and where the fraction of domestic currency denominated government liabilities is small.
Openness to trade is one factor that has been identified as determining whether a country is prone to sudden stops in capital inflow, currency crashes, or severe recessions. Some believe that openness raises vulnerability to foreign shocks, while others believe that it makes adjustment to crises less painful. Several authors have offered empirical evidence that having a large tradable sector reduces the contraction necessary to adjust to a given cut-off in funding. This would help explain lower vulnerability to crises in Asia than in Latin America. Such studies may, however, be subject to the problem that trade is endogenous. Cavallo and Frankel use the gravity instrument for trade openness, which is constructed from geographical determinants of bilateral trade. They find that openness indeed makes countries less vulnerable, both to severe sudden stops and currency crashes, and that the relationship is even stronger when correcting for the endogeneity of trade.
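The identification strategy above can be sketched in miniature. With a single instrument z (openness predicted from geographic determinants of bilateral trade) for a single endogenous regressor x (actual openness), the instrumental-variables slope for outcome y reduces to cov(z, y) / cov(z, x). A toy illustration with invented numbers, not the authors' data:

```python
# Hypothetical sketch of the gravity-instrument idea with one instrument
# and one regressor; all numbers below are invented for illustration.
def cov(a, b):
    """Population covariance of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

openness     = [0.3, 0.5, 0.8, 0.4, 0.9]       # x: actual trade/GDP
gravity_pred = [0.25, 0.45, 0.85, 0.35, 0.95]  # z: predicted from geography
crisis_depth = [0.9, 0.7, 0.3, 0.8, 0.2]       # y: severity of output loss

beta_iv = cov(gravity_pred, crisis_depth) / cov(gravity_pred, openness)
print(round(beta_iv, 3))  # negative: more openness, milder crises
```

Because geography is plausibly unrelated to crisis severity except through trade, the slope estimated this way sidesteps the endogeneity of observed openness.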
Before the stock market crash of 1987, the Black-Scholes model implied that volatilities of S&P 500 index options were relatively constant. Since the crash, though, deep out-of-the money S&P 500 put options have become "expensive" relative to the Black-Scholes benchmark. Many researchers have argued that such prices cannot be justified in a general equilibrium setting if the representative agent has "standard preferences." However, Benzoni, Goldstein, and Collin-Dufresne demonstrate that the "volatility smirk" can be rationalized if the agent is endowed with Epstein-Zin preferences and if the aggregate dividend and consumption processes are driven by a persistent stochastic growth variable that can jump. They identify a realistic calibration of the model that simultaneously matches the empirical properties of dividends, the equity premium, the prices of both at-the-money and deep out-of-the-money puts, and the level of the risk-free rate. A more challenging question (that apparently has not been previously investigated) is whether one can explain within a standard preference framework the stark regime change in the volatility smirk that has existed since the 1987 market crash. To this end, the authors extend their model to a Bayesian setting in which the agents update their beliefs about the average jump size in the event of a jump. Such beliefs only update at crash dates, and hence can explain why the volatility smirk has not diminished over the last 18 years. The authors find that the model can capture the shape of the implied volatility curve both pre- and post-crash while maintaining reasonable estimates for expected returns, price-dividend ratios, and risk-free rates.
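The "expensiveness" of a put relative to the Black-Scholes benchmark is conventionally expressed as its implied volatility: the sigma that makes the Black-Scholes formula match the observed price, recovered numerically because the formula cannot be inverted in closed form. A minimal sketch (bisection on the put-price formula; all inputs are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S, K, T, r, sigma):
    """Black-Scholes European put price."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=3.0, tol=1e-8):
    """Back out the sigma matching an observed put price (bisection;
    the put price is monotonically increasing in sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_put(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check on a deep out-of-the-money put priced at sigma = 0.2.
p = bs_put(100, 80, 0.25, 0.05, 0.20)
print(round(implied_vol(p, 100, 80, 0.25, 0.05), 4))  # prints 0.2
```

The post-1987 "smirk" is the observation that repeating this inversion across strikes gives deep out-of-the-money puts a markedly higher implied sigma than at-the-money options, rather than the flat line Black-Scholes assumes.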
Brandt, Cochrane, and Santa-Clara (2004) pointed out that the implicit stochastic discount factors computed using prices, on the one hand, and consumption growth, on the other, have very different implications for their cross-country correlation. They leave this as an unresolved puzzle. Colacito and Croce explain it by combining Epstein and Zin (1989) preferences with a model of predictable returns, and by positing a highly correlated long-run component. They also assume that the intertemporal elasticity of substitution is larger than one. This setup brings the stochastic discount factors computed using prices and quantities close together, while keeping the volatility of the depreciation rate on the order of 12 percent and the cross-country correlation of consumption growth around 30 percent.
Jagannathan, Malakhov, and Novikov empirically demonstrate that both hot and cold hands among hedge fund managers tend to persist. To measure performance, they use statistical model-selection methods for identifying style benchmarks for a given hedge fund, and they allow for the possibility that hedge fund net asset values may be based on stale prices for illiquid assets. They are able to eliminate the backfill bias by deleting all of the backfill observations in their dataset. They also take into account the self-selection bias introduced by the fact that both successful and unsuccessful hedge funds stop reporting information to the database provider. The former stop accepting new money and the latter get liquidated. The authors find statistically as well as economically significant persistence in the performance of funds relative to their style benchmarks. It appears that half of the superior or inferior performance during a three-year interval will spill over into the following three-year interval.
Panageas and Yu develop a theoretical model in order to understand comovements between asset returns and consumption over longer horizons. They develop an intertemporal general equilibrium model featuring two types of shocks: "small," frequent, and disembodied shocks to productivity, and "large" technological innovations, which are embodied in new vintages of the capital stock. The latter affect the economy with significant lags, because firms need to make irreversible investments in the new types of capital and there is an option value to waiting. The model produces endogenous cycles, countercyclical variation in risk premia, and only a very modest degree of predictability in consumption and dividend growth, as observed in the data. The authors then use their model as a laboratory to show that, in their simulated data, the unconditional consumption Capital Asset Pricing Model performs badly, while its "long-horizon" version performs significantly better.
Andersen and Benzoni investigate whether bonds can hedge volatility risk in the U.S. Treasury market, as predicted by most "affine" term structure models. To this end, they use high-frequency data to construct powerful and model-free empirical measures of the quadratic yield variation for a cross-section of fixed-maturity zero-coupon bonds ("realized yield volatility") at daily, weekly, and monthly horizons. They find that the yield curve fails to span yield volatility, as the systematic volatility factors appear largely unrelated to the cross-section of yields. They conclude that a broad class of affine diffusive, quadratic diffusive, and affine jump-diffusive models is incapable of accommodating the observed yield volatility dynamics at daily, weekly, and monthly horizons. Hence, yield volatility risk per se cannot be hedged by taking positions in the Treasury bond market. The authors also advocate using these empirical yield volatility measures more broadly as a basis for specification testing and (parametric) model selection within the term structure literature.
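The "realized yield volatility" construction is model-free in a simple sense: the quadratic variation of a yield over a period is estimated by summing squared high-frequency yield changes within it. A minimal sketch, with invented intraday observations:

```python
# Hypothetical sketch of a realized-variance estimator: the quadratic
# variation of a yield series is approximated by the sum of squared
# changes between consecutive high-frequency observations.
def realized_variance(yields):
    """Sum of squared changes between consecutive observations."""
    return sum((b - a) ** 2 for a, b in zip(yields, yields[1:]))

intraday_yields = [4.50, 4.52, 4.49, 4.51, 4.48, 4.50]  # e.g. 5-minute marks
rv = realized_variance(intraday_yields)
realized_vol = rv ** 0.5  # realized volatility for the day
print(round(realized_vol, 4))
```

No parametric model of the yield process is needed, which is what lets such measures serve as a benchmark against which affine and quadratic term structure models can be tested.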
Barber, Zhu, and Odean study the trading behavior of individual investors using the Trade and Quotes (TAQ) and Institute for the Study of Security Markets (ISSM) transaction data for the period 1983 to 2001. They document three results: First, order imbalance based on buyer- and seller-initiated small trades from the TAQ/ISSM data correlates well with the order imbalance based on trades of individual investors from brokerage firm data. This indicates that trade size is a reasonable proxy for the trading of individual investors. Second, order imbalance based on TAQ/ISSM data indicates strong herding by individual investors. Individual investors predominantly buy (sell) the same stocks as each other contemporaneously. Furthermore, they predominantly buy (sell) the same stocks in one week (month) that they did the previous week (month). Third, when measured over one year, the imbalance between purchases and sales of each stock by individual investors forecasts cross-sectional stock returns the next year. Stocks heavily bought by individuals one year underperform stocks heavily sold by 4.4 percentage points in the following year. The spread in returns of stocks bought and stocks sold is greater for small stocks and stocks heavily traded by individual investors. Among stocks heavily traded by individual investors, the spread in returns between stocks bought and stocks sold is 13.5 percentage points the following year. Over shorter periods, such as a week or a month, a different pattern emerges. Stocks heavily bought by individual investors one week earn strong returns in the subsequent week, while stocks heavily sold one week earn poor returns in the subsequent week. This pattern persists for a total of three to four weeks and then reverses for the subsequent several weeks. In addition to examining the ability of small trades to forecast returns, the authors look at the predictive value of large trades.
In striking contrast to their small trade results, they find that stocks heavily purchased with large trades one week earn poor returns in the subsequent week, while stocks heavily sold one week earn strong returns in the subsequent week.
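The small-trade order-imbalance proxy described above can be sketched simply: classify trades as buyer- or seller-initiated, keep only those below a size cutoff, and net buys against sells. A toy illustration (the cutoff, trade records, and dollar-value weighting are invented for illustration, not the authors' exact construction):

```python
# Hypothetical sketch of a small-trade order imbalance as a proxy for
# individual investors' net demand. 'B'/'S' marks buyer-/seller-initiated
# trades; values are in dollars; the cutoff is an assumption.
SMALL_TRADE_CUTOFF = 5000  # assumed dollar cutoff for a "small" trade

def order_imbalance(trades):
    """(small buys - small sells) / (small buys + small sells), by value."""
    buys = sum(v for side, v in trades if side == 'B' and v <= SMALL_TRADE_CUTOFF)
    sells = sum(v for side, v in trades if side == 'S' and v <= SMALL_TRADE_CUTOFF)
    return (buys - sells) / (buys + sells)

trades = [('B', 2000), ('S', 1000), ('B', 3000), ('S', 500), ('B', 250000)]
print(order_imbalance(trades))  # the large 250k trade is excluded
```

Filtering on trade size is what lets transaction tapes like TAQ/ISSM, which do not identify the trader, stand in for direct brokerage records of individual investors.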
Landier, Nair, and Wulf document the role of geographic dispersion on corporate decisionmaking. They find that geographically dispersed firms are less employee-friendly. Also, using division-level data, they find that employee dismissals are less common in divisions located close to corporate headquarters. Finally, it turns out that firms are reluctant to divest in-state divisions. To explain these findings, the authors consider two mechanisms. First, they investigate whether headquarter proximity to divisions is related to internal information flows. They find that firms are geographically concentrated when information is more difficult to transfer over long distances (soft information industries). Additionally, the protection of proximate employees is stronger in such soft-information industries. Second, they investigate how headquarter proximity to employees affects managerial alignment with shareholder objectives. They document that the protection of proximate employees only holds when the headquarters are located in less-populated counties, suggesting concern for such employees. Moreover, stock markets respond favorably to divestitures of close divisions, especially for these smaller-county firms. These findings suggest that social factors work alongside informational considerations in making geographic dispersion an important factor in corporate decision-making.
Kedia, Panchapagesan, and Uysal examine the impact of geographical proximity on the acquisition decisions of U.S. public firms over the period 1990-2003. Transactions in which the acquirer and target firms are located within 100 kilometers of each other are classified as local transactions. The authors find that acquirer returns in local transactions are more than twice those in non-local transactions. The higher returns to local acquirers are, at least partially, attributable to information advantages arising from geographical proximity. These information advantages facilitate the acquisition of targets that, on average, create higher overall returns. However, bidders use their information advantages to earn a higher share of the surplus created.
Lemmon, Roberts, and Zender examine the evolution of the cross-sectional distribution of capital structure and find it to be remarkably stable over time: firms with high (low) leverage remain relatively high (low) levered for over 20 years. Additionally, this relative ranking is observed for both public and private firms, and is largely unaffected by the process of going public. These persistent differences in leverage across firms are associated with the presence of an unobserved firm-specific effect that is responsible for the majority of variation in capital structure. Over 90 percent of the explained variation in leverage is captured by firm fixed effects, whereas previously identified determinants (for example, size, market-to-book, industry) are responsible for less than 10 percent. These findings show that firms use net security issuances to maintain their leverage ratios in relatively confined regions around their long-run means, consistent with a dynamic rebalancing of capital structure. Importantly, the results imply that the primary determinants of cross-sectional variation in corporate capital structures are largely time invariant, which significantly reduces the set of candidate explanations to those based on factors that remain relatively stable over long periods of time.
Using an information asymmetry index based on measures of adverse selection developed by the market microstructure literature, Bharath, Pasquariello, and Wu test whether information asymmetry is the sole determinant of capital structure decisions, as suggested by the pecking order theory. Their tests rely exclusively on measures of the market's assessment of adverse selection risk, rather than on ex-ante firm characteristics. They find that information asymmetry does affect the capital structure decisions of U.S. firms over the period 1973-2002, especially when firms' financing needs are low and when firms are financially constrained. They also find a significant degree of intertemporal variability in firms' degree of information asymmetry, as well as in its impact on firms' debt issuance decisions. These findings, based on the information asymmetry index, are robust to sorting firms based on size and insider trading activity, two popular alternative proxies for the severity of adverse selection. Overall, this evidence explains why the pecking order theory is only partially successful in explaining firms' capital structure decisions. It also suggests that the theory finds support when its basic assumptions hold in the data, as should reasonably be expected of any theory.
Benmelech and Moskowitz study the political economy of financial regulation by examining the determinants and effects of U.S. state usury laws during the eighteenth and nineteenth centuries. They argue that regulation is the outcome of private interests using the coercive power of the state to extract rents from other groups. They find that the strictness of usury laws coexists with other exclusionary policies, such as suffrage restrictions and the absence of general incorporation or free banking laws, which likewise respond less to competitive pressures for repeal. Furthermore, the same determinants of financial regulation that favor one group and limit access for others are associated with lower future economic growth rates, highlighting the endogeneity of financial development and growth.
Garmaise studies the effects of non-competition agreements by analyzing time-series and cross-sectional variation in the enforceability of these contracts across U.S. states. He finds that increased enforceability reduces executive compensation and shifts its form toward greater use of salary. He also shows that tougher non-competition enforcement reduces research and development spending and capital expenditures per employee. Non-competition agreements promote executive stability and board participation, but higher-quality managers apparently shun firms in high-enforcement jurisdictions. These results have implications for theories of executive compensation and firm organization.
Kaplan, Sensoy, and Strömberg study how firm characteristics evolve from early business plan to initial public offering to public company for 49 venture capital financed companies. The average time elapsed is almost six years. They describe the financial performance, business idea, point(s) of differentiation, non-human capital assets, growth strategy, customers, competitors, alliances, top management, ownership structure, and the board of directors. Their analysis focuses on the nature and stability of those firm attributes. Firm business lines remain remarkably stable from business plan through public company. Within those business lines, non-human capital aspects of the businesses appear more stable than human capital aspects. In the cross-section, firms with more alienable assets have substantially more human capital turnover.
Djankov, La Porta, Lopez-de-Silanes, and Shleifer present a new measure of legal protection of minority shareholders against expropriation by corporate insiders: the anti-self-dealing index. Assembled with the help of Lex Mundi law firms, the index is calculated for 72 countries based on legal rules prevailing in 2003, and focuses on private enforcement mechanisms, such as disclosure, approval, and litigation, governing a specific self-dealing transaction. This theoretically-grounded index predicts a variety of stock market outcomes, and generally works better than the commonly used index of anti-director rights.
Fama and French (2001a) show that the propensity to pay dividends declines significantly in the 1990s, the disappearing dividends puzzle. Baker and Wurgler (2004a, 2004b) suggest that these appearing and disappearing dividends are an outcome of firms "catering" to transient fads for dividend-paying stocks. Hoberg and Prabhala empirically examine the disappearing dividends puzzle and its catering explanation through the lens of risk. They report two main findings: 1) risk is a significant determinant of the propensity to pay dividends and explains up to 40 percent of the disappearing dividends puzzle; 2) catering is insignificant once they account for risk. Risk is also related to payout policies in general: it explains the decision to increase dividends and to repurchase shares. These findings affirm theories and field evidence on the role of risk in dividend policy, and suggest that the 1990s increase in volatility noted by Campbell, Lettau, Malkiel, and Xu (2001) has corporate finance implications.
Why do firms decide to offshore certain parts of their production process? What qualifies certain countries as particularly attractive locations for offshoring? Antràs, Garicano, and Rossi-Hansberg address these questions with a theory of international production hierarchies in which teams arise endogenously to make efficient use of agents' knowledge. Their theory highlights the role of host-country management skills (middle management) in bringing about the emergence of international offshoring. By shielding top management in the source country from the routine problems faced by host-country workers, the presence of middle managers improves the efficiency of the transmission of knowledge across countries. The model further predicts that the positive effect of middle management skills on offshoring is weaker the more advanced are communication technologies in the host country. The authors provide evidence consistent with this prediction.
Fitzgerald addresses the question of whether both goods and asset market frictions are necessary to explain the failure of consumption risk sharing across countries. She presents a multi-country model with Armington specialization. There are iceberg costs of shipping goods across countries. In asset markets, contracts are imperfectly enforceable. Both frictions separately limit the extent to which countries can pool risk. The model suggests a test for the presence of each of the two types of friction that exploits data on bilateral imports. Fitzgerald implements this test using a sample of developed and developing countries. She finds that both trade costs and asset market imperfections are necessary in order to explain the failure of perfect consumption risk sharing. The rejection of complete markets is weaker for developed than developing countries. At the same time, financial autarky is also rejected, indicating that some risk sharing is possible through asset markets.
Balat, Brambilla, and Porto advance a hypothesis to explain the small estimated impacts of trade barriers on poverty, especially in rural Africa. They study the case of Uganda and claim that high marketing costs prevent the realization of the gains from trade. Their basic hypothesis is that the availability of markets for agricultural export crops leads to greater participation in export cropping and that this, in turn, leads to lower poverty. They use data from the Uganda National Household Survey to test it. They first establish that farmers living in villages with fewer outlets for sales of agricultural exports are likely to be poorer than farmers residing in market-endowed villages. Further, they show that market availability leads to increased household participation in export cropping (coffee, tea, cotton, fruits) and that households engaged in export cropping are less likely to be poor than subsistence-based households. They conclude that the presence of marketing costs affects the way that trade lowers poverty by hindering farmers from engaging in export cropping. In addition, these effects are non-linear: the poverty impacts of greater market availability are much stronger in villages with low market density than in villages with medium or high market density. This uncovers the role of market access and price competition among buyers and intermediaries as key building blocks in the link between export opportunities and the poor.
Starting with Romer and Rivera-Batiz-Romer, economists have been able to model how trade enhances growth through the creation and import of new varieties. In this framework, international trade increases economic output through two channels. First, trade raises productivity because producers gain access to new imported varieties. Second, increases in the number of varieties drive down the cost of innovation and result in ever more variety creation. Using highly disaggregated trade data -- for example Gabon's imports of Gambian raw, unshelled, groundnuts -- Broda, Greenfield, and Weinstein structurally estimate the impact that new imports have had in approximately 4000 markets per country. They then move from groundnuts to globalization by building an exact total factor productivity index that aggregates these micro gains to obtain an estimate of the impact of trade on productivity growth around the world. They find that in the typical country in the world, new imported varieties contribute 0.13 percentage points per year to total factor productivity growth, or 12 percent of their productivity growth. Individual country experiences vary substantially, with trade explaining 5 percent of the productivity growth in the typical developed country but about a quarter of productivity growth in the typical developing country. They also find that the creation of new varieties is correlated with R and D activities across countries in ways consistent with semi-endogenous growth models proposed by Jones.
Do acquiring companies profit from acquisitions, or do acquiring CEOs destroy shareholder value? Answering this question empirically is difficult since the hypothetical counterfactual is hard to determine. While negative stock reactions to the announcement of mergers are consistent with value-destroying mergers, they are also consistent with overvaluation of the acquiror at the time of the announcement. Similarly, studies of long-term returns to acquirors are affected by slowly declining overvaluation. Malmendier and Moretti study bidding contests to address this identification issue. They construct a novel dataset on all mergers with overlapping bids of at least two potential acquirors between 1983 and 2004. They then compare adjusted abnormal returns of all candidates both before and after a merger fight. The key identifying assumption is that the returns and other corporate outcomes of losing bidders are a valid counterfactual for the winner, after employing the usual controls and matching criteria. The authors find that stock returns of bidders are not significantly different before the merger fight, but diverge significantly after one bidder has completed the merger. Winners significantly underperform losers over a five-year horizon.
Many asset price bubbles occur during periods of excitement about new technologies. Hong, Scheinkman, and Xiong focus on the role of advisors and the communication process with investors in explaining this stylized fact. Advisors are well-intentioned and want to maximize the welfare of their advisees (like a parent and child). But only some of them understand the new technology (the tech-savvys); others do not and can only make a downward-biased recommendation (the old-fogies). While smart investors recognize the heterogeneity in advisors, naive ones mistakenly take whatever is said at face value. Tech-savvys inflate their forecasts to signal that they are not old-fogies because more accurate information about their type improves the welfare of investors in the future. A bubble arises for a wide range of parameters and its size is maximized when there is a mix of smart and naive investors in the economy.
Classical models predict that the division of stock returns into dividends and capital appreciation does not affect investor consumption patterns, while mental accounting and other economic frictions predict that investors are more likely to consume from stock returns in the form of dividends. Using two microdata sets, Baker, Nagel, and Wurgler find that investors are indeed far more likely to consume from dividends than capital gains. In the Consumer Expenditure Survey, household consumption increases with dividend income, after controlling for total wealth, total portfolio returns, and other sources of income. In a sample of household investment accounts data from a brokerage, net withdrawals from the accounts increase one-for-one with ordinary dividends of moderate size, after controlling for total portfolio returns, and also increase with mutual fund and special dividends.
Cohen and Frazzini find evidence of return predictability across economically linked firms. They test the hypothesis that, in the presence of investors subject to attention constraints, stock prices do not promptly incorporate news about economically related firms, generating return predictability across assets. They use a dataset of firms' principal customers to identify a set of economically related firms, and show that stock prices do not incorporate news involving related firms, generating predictable subsequent price moves. A long/short equity strategy based on this effect yields monthly alphas of over 150 basis points, or over 18 percent per year.
As previous agency models have shown, fund managers with career concerns have an incentive to imitate the recent trading strategy of other managers. Dasgupta, Prat, and Verardo embed this rational conformist tendency in a stylized financial market with limited arbitrage. Equilibrium prices incorporate a reputational premium or discount, which is a monotonic function of past trade between career-driven traders and the rest of the market. Their prediction is tested with quarterly data on U.S. institutional holdings from 1983 to 2004. They find that stocks that have been persistently bought (sold) by institutions in the past 3 to 5 quarters underperform (overperform) the rest of the market in the next 12 to 30 months. These results are of similar magnitude to, but distinct from, other known asset pricing anomalies. The findings challenge the mainstream view of the roles played by individuals and institutions in generating asset pricing anomalies.
One of the most striking portfolio puzzles is the "disposition effect": the tendency of individuals to sell stocks in their portfolios that have risen in value, rather than fallen in value, since purchase. Perhaps the most prominent explanation for this puzzle is based on prospect theory. Despite its prominence, this hypothesis has received little formal scrutiny. Barberis and Xiong take up this task, and analyze the trading behavior of investors with prospect theory preferences. Surprisingly, they find that, in its simplest implementation, prospect theory often predicts the opposite of the disposition effect. They provide intuition for this result, and identify the conditions under which the disposition effect holds or fails. They also discuss the implications of their results for other disposition-type effects that have been documented in settings such as the housing market, futures trading, and executive stock options.
Feenstra and Spencer explore the relationship between proximity of buyers and sellers and the organizational form of outsourcing. Outsourcing can be "contractual" - in which suppliers undertake specific investments - or involve "generic" market transactions. Proximity expands the variety of products sourced through contracts abroad rather than at home, but does not change the range of generic imports. A higher-quality foreign workforce raises the variety of contractual trade, but at the expense of generics. The authors confirm these predictions using data for ordinary versus processing exports from Chinese provinces to destination markets and the predictions of an extended model that allows for multinational production.
Corporate organization varies within a country and across countries with country size. Larger countries have larger firms with flatter, more decentralized corporate hierarchies than smaller countries. Firms in larger countries change their corporate organization more slowly than firms in smaller countries. Furthermore, corporate diversity within a country is correlated with the pattern of heterogeneity among firms in size and productivity. Marin and Verdier develop a theory to explain these stylized facts and link these features to the trade environment that countries and firms face. They introduce heterogeneous firms with internal hierarchies into a Krugman (1980) model of trade. The model simultaneously determines firms' organizational choices and heterogeneity across firms in size and productivity. They show that international trade and the toughness of competition in international markets induce a power struggle in firms, eventually leading to decentralized corporate hierarchies. They show further that trade triggers inter-firm reallocations towards more productive firms in which CEOs have power. Based on unique data from 660 Austrian and German corporations, they offer econometric evidence consistent with the model's predictions.
Nocke and Yeaple develop a theory of multiproduct firms and endogenous firm scope that can explain a well-known empirical puzzle: larger firms appear to be less efficient in that they have lower values of Tobin's Q. The authors extend this theory to study the effects of trade liberalization and market integration on the size distribution of firms. They show that a symmetric bilateral trade liberalization leads to a less skewed size distribution. The opposite result obtains in the case of a unilateral trade liberalization in the liberalizing country. In this model, trade liberalization affects not only the distribution of observed productivities but also productivity at the firm level. In the empirical section, the authors show that the key predictions are consistent with the data.
Giertz examines alternative methodologies for measuring responses to the 1990 and 1993 federal tax increases. The methodologies build on those employed by Gruber and Saez (2002), Carroll (1998), Auten and Carroll (1999), and Feldstein (1995). Internal Revenue Service tax return data for the project are from the Statistics of Income, which heavily oversamples high-income filers. Special attention is paid to the importance of sample income restrictions and methodology. Estimates are broken down by income group to measure how responses to tax changes vary by income. In general, estimates are quite sensitive to a number of different factors. Using an approach similar to Carroll's yields elasticity of taxable income (ETI) estimates as high as 0.54 and as low as 0.03, depending on the income threshold for inclusion in the sample. Gruber and Saez's preferred specification yields estimates for the 1990s of between 0.20 and 0.30. Yet another approach compares behavior in a year before a tax change to behavior in a year after the tax change. That approach yields estimated ETIs ranging from 0 to 0.71. The results suggest tremendous variation across income groups, with people at the top of the income distribution showing the greatest responsiveness. In fact, the estimates suggest that the ETI could be as high as 1.2 for those at the very top of the income distribution. The major conclusion, however, is that isolating the true taxable income responses to tax changes is complicated by a myriad of other factors, and thus little confidence should be placed in any single estimate. Additionally, focusing on particular components of taxable income might yield more insight.
Looney and Singhal use anticipated changes in tax rates associated with changes in family composition to estimate intertemporal labor supply elasticities and elasticities of taxable income with respect to the net-of-tax wage rate. A number of provisions of the tax code are tied explicitly to child age and dependent status. Changes in the ages of children can thus affect marginal tax rates through phase-in or phase-out provisions of tax credits or by shifting individuals across tax brackets. The authors identify the response of labor and taxable income to these tax changes by comparing families who experienced a tax-rate change to families who had a similar change in dependents but no resulting tax-rate change. A primary advantage of this approach is that the changes are anticipated and therefore should not cause re-evaluations of lifetime income. Consequently, the estimates of substitution effects should not be confounded by life-cycle income effects. The empirical design also allows for comparison of similar families and can be used to estimate elasticities across the income distribution. In particular, the authors provide estimates for low and middle income families. Using data from the Survey of Income and Program Participation (SIPP), they estimate an intertemporal elasticity of family labor earnings close to one for families earning between $30,000 and $75,000. The estimates for families in the EITC phase-out range are lower but still substantial. Estimates from the IRS-NBER individual tax panel are consistent with the SIPP estimates. Tests using alternate control groups and simulated "placebo" tax schedules support the identifying assumptions. The high-end estimates suggest substantial efficiency costs of taxation.
Heim and Meyer estimate a structural model of employment, hours, and program participation choices of single women over the 1984-96 period. During the 1980s and 1990s, tax and welfare policy dramatically altered the labor supply and program participation incentives of single mothers. The authors use this setting to explore identification in structural labor supply models. Through the judicious use of special samples (specific states, years, women with certain numbers of children), control variables, and separate coefficients for different types of income, they isolate the different sources of variation in the after-tax reward to work. They explore the role of the intensive hours choice versus the extensive work/nonwork decision, the point-in-time shape of the tax schedule for a given demographic group versus changes over time, the tax treatment of children, and the role of functional form. They also provide substantive results on effects of the Earned Income Tax Credit (EITC) and welfare programs. Studies analyzing the effects of the EITC using difference-in-difference methods have found that hours per year among those working increased in response to the EITC expansions. This change occurred even though both the income effect of the larger credits and the substitution effect of the higher phase-out rates implied a decline in hours. They address these surprising EITC results as well as the effects of welfare budget set changes using a joint model of labor supply and program participation.
Despite the considerable attention paid to the theory of tax incidence, there are surprisingly few estimates of the pass-through rate of sales taxes on retail prices. Doyle and Samphantharak estimate the effect of a suspension and subsequent reinstatement of the gasoline sales tax in Illinois and Indiana on retail prices. Earlier laws set the timing of the reinstatements, providing plausibly exogenous changes in the tax rates. Using a unique dataset of daily gasoline prices at the station level, the authors find that retail gas prices drop by 3 percent following the elimination of the 5 percent sales tax, and increase by 4 percent following the reinstatements, compared to neighboring states. They also find that the tax reinstatements are associated with higher prices up to an hour's drive into neighboring states, which sheds some light on the size of the geographic market for gasoline.
Shackelford and his co-authors examine the impact on asset prices of a reduction in the long-term capital gains tax rate using an equilibrium approach that considers both buyers' and sellers' responses. They demonstrate that the equilibrium impact of capital gains taxes reflects both the capitalization effect (capital gains taxes decrease demand) and the lock-in effect (capital gains taxes decrease supply). Depending on time periods and stock characteristics, either effect may dominate. Using the Taxpayer Relief Act of 1997 as their event, they find evidence supporting a dominant capitalization effect in the week following news that sharply increased the probability of a reduction in the capital gains tax rate and a dominant lock-in effect in the week after the rate reduction became effective. Non-dividend paying stocks (whose shareholders only face capital gains taxes) experience higher average returns during the week the capitalization effect dominates and stocks with large embedded capital gains and high individual ownership exhibit lower average returns during the week the lock-in effect dominates. They also find that the tax cut increases the trading volume in non-dividend paying stocks during the dominant capitalization week and in stocks with large embedded capital gains and high individual ownership during the dominant lock-in week.
Work requirements in means-tested transfer programs have grown in importance in the United States and in some other countries. The theoretical literature that considers their possible optimality generally operates within a traditional welfarist framework where some function of the utility of the poor is maximized. Here Moffitt argues that society instead has preferences over the actual work allocations of welfare recipients and that the resulting paternalistic social welfare function is more consistent with the historical evidence. With this social welfare function, optimality of work requirements is possible but depends on the accuracy of the screening mechanism which assigns work requirements to some benefit recipients and not others. Numerical simulations show that the accuracy must be high for such optimality to occur. The simulations also show that earnings subsidies can be justified with the type of social welfare function used here.
Finkelstein investigates the effects of market-wide changes in health insurance by examining the single largest change in health insurance coverage in American history: the introduction of Medicare in 1965. She estimates that the impact of Medicare on hospital spending is over six times larger than what the evidence from individual-level changes in health insurance would have predicted. This disproportionately larger effect may arise if market-wide changes in demand alter the incentives of hospitals to incur the fixed costs of entering the market or of adopting new practice styles. She presents some evidence of these types of effects. A back of the envelope calculation based on the estimated impact of Medicare suggests that the overall spread of health insurance between 1950 and 1990 may be able to explain about half of the increase in real per capita health spending over this time period.
It is well known that unemployment benefits raise unemployment durations. This result has traditionally been interpreted as a substitution effect caused by a distortion in the price of leisure relative to consumption, leading to moral hazard. Chetty questions this interpretation by showing that unemployment benefits can also affect durations through an income effect for agents with limited liquidity. The empirical relevance of liquidity constraints and income effects is evaluated in two ways. First, he divides households into groups that are likely to be constrained and unconstrained based on proxies such as asset holdings. He finds that increases in unemployment benefits have small effects on durations in the unconstrained groups but large effects in the constrained groups. Second, he finds that lump-sum severance payments granted at the time of job loss significantly increase durations among constrained households. These results suggest that unemployment benefits raise durations primarily because of an income effect induced by liquidity constraints rather than moral hazard from distorted incentives.
Battaglini and Coate present a dynamic political economy theory of public spending, taxation, and debt. Policy choices are made by a legislature consisting of representatives elected by geographically-defined districts. The legislature can raise revenues via a distortionary income tax and by borrowing. These revenues can be used to finance a national public good and district-specific transfers (interpreted as pork-barrel spending). The value of the public good is stochastic, reflecting shocks such as wars or natural disasters. In equilibrium, policymaking cycles between two distinct regimes: "business-as-usual" in which legislators bargain over the allocation of pork, and "responsible-policymaking" in which policies maximize the collective good. Transitions between the two regimes are brought about by shocks in the value of the public good. In the long run, equilibrium tax rates are too high and too volatile, public good provision is too low and debt levels are too high. In some environments, a balanced budget requirement can improve citizen welfare.
Conventional hedonic techniques for estimating the value of local amenities rely on the assumption that households move freely among locations. Bayer, Keohane, and Timmins show that when moving is costly, the variation in housing prices and wages across locations may no longer reflect the value of differences in local amenities. They develop an alternative discrete-choice approach that considers the household location decision directly, and apply it to the case of air quality in U.S. metro areas in 1990 and 2000. Because air pollution is likely to be correlated with unobservable local characteristics such as economic activity, they instrument for air quality using the contribution of distant sources to local pollution - excluding emissions from local sources, which are most likely to be correlated with local conditions. Their model yields an estimated elasticity of willingness to pay with respect to air quality of 0.34 to 0.42. These estimates imply that the median household would pay $149 to $185 (in constant 1982-84 dollars) for a one-unit reduction in average ambient concentrations of particulate matter. These estimates are three times greater than the marginal willingness to pay estimated by a conventional hedonic model using the same data. The results are robust to a range of covariates, instrumenting strategies, and functional form assumptions. The findings also confirm the importance of instrumenting for local air pollution.
A large literature shows that state and local laws requiring smoke-free workplaces are associated with improved worker outcomes (lower secondhand smoke exposure and own smoking rates). Carpenter provides new quasi-experimental evidence on the effects of workplace smoking laws by using the differential timing of adoption of over 100 local smoking by-laws in Ontario, Canada over the period 1997-2004. He is able to control for demographic characteristics, year fixed effects, and county fixed effects. Because he observes the respondent's report of the smoking policy at her worksite, he can test directly for compliance. Although the results indicate that local by-laws increase workplace bans in the aggregate, Carpenter finds that the effects are driven entirely by blue collar workers. Among blue collar workers, local by-laws significantly reduced the fraction of worksites without any smoking restrictions (that is, where smoking is allowed anywhere at work), by over half. These local policies also improved health outcomes: adoption of a local by-law significantly reduced secondhand smoke exposure among blue collar workers, by 25-30 percent, and workplace smoking laws did reduce smoking. For all of the outcomes, Carpenter finds plausibly smaller and insignificant estimates for white collar and sales/service workers, the vast majority of whom worked in places with privately initiated smoking bans well before local by-laws were adopted. Overall, these findings confirm that workplace smoking bans do reduce smoking; they document the underlying mechanisms through which local smoking by-laws improve health outcomes; and they show that the effects of these laws are strongly heterogeneous with respect to occupation.
Vernon and Santerre compare the likely consumer benefits of higher quality with the potentially higher production costs that result from increased not-for-profit activity in a nursing home services market area. They compare consumer benefits and costs by observing empirically how an increased market penetration of not-for-profit facilities affects the utilization of private-pay nursing home care. Increased (decreased) utilization of nursing home care reflects that the consumer benefits associated with additional not-for-profit nursing homes are greater (less) than consumer costs. The empirical results indicate that, from a consumer's perspective, too few not-for-profit nursing homes exist in the typical market area of the United States. The policy implication is that more quality of care per dollar might be obtained by attracting a greater percentage of not-for-profit nursing homes into most market areas.
Dobkin and Puller analyze the monthly patterns of adverse outcomes attributable to the consumption of illegal drugs by recipients of government transfer payments. They find evidence that certain subpopulations on government cash aid significantly increase their consumption of drugs when their checks arrive at the beginning of the month and, as a result, experience adverse events including arrest, hospitalization, and death. Using data from California, they find that the overall rate of drug related hospital admissions increases abruptly at the beginning of the month, with admissions increasing 25 percent during the first five days of the month. This cycle is driven largely by recipients of Supplemental Security Income (SSI). SSI recipients also experience an abrupt 22 percent increase in within-hospital mortality after receiving their checks on the first of the month. The authors also document pronounced monthly cycles in drug related crimes. On the first of the month, arrests for drug possession and sale increase by 20 percent and prostitution arrests decline by 16 percent. These findings suggest that "full wallets" adversely affect some aid recipients and that policymakers should explore alternate disbursement regimes, such as a staggered disbursement schedule or in-kind support, that have the potential to reduce the rate of adverse events.
It is widely believed that education improves health, and empirical evidence substantiating that the relationship is causal has accumulated in recent years. A pinnacle in this progression is arguably Lleras-Muney's 2005 analysis of state compulsory school law changes in the United States, which were found to improve educational attainment and consequently to reduce mortality. Almond and Mazumder revisit these results, noting that they are not robust to state time trends, even when the Census sample is tripled and a coding error rectified. They use a new dataset with greater detail on health outcomes and statistical power, yielding two primary findings: 1) they replicate Lleras-Muney's results for aggregate measures of health; and 2) the pattern of effects for specific health conditions appears to depart from theoretical predictions of how education should affect health. They also find that state differences in vaccination rates against smallpox during the period of compulsory school law reform may account for the improvement in health and its association with educational attainment.
During the 1980s and 1990s, the lengths of postpartum hospital stays declined for both vaginal and cesarean births. Health professionals and policymakers expressed concern that shorter hospital stays might jeopardize the health of both mothers and infants. The federal government and states responded by passing laws requiring that insurance carriers provide coverage for longer postpartum stays. Evans and Heng use a restricted-use dataset of all births in California over a six-year period to examine the effect of these early discharge laws. They demonstrate that early discharge laws considerably reduced the fraction of newborns and mothers who were discharged early. They also find that an additional day in the hospital reduced the probability of readmission by about one percentage point for vaginal deliveries with complications and for c-sections of all types. The former result is statistically significant at conventional levels but the latter result is only significant at a p-value of around 0.10. There was no statistically significant change in 28-day newborn readmission rates for babies whose mothers had uncomplicated vaginal deliveries. Finally, although the statutes did not cover Medicaid patients and patients with no insurance, their postpartum length of stay was affected by the changes in the law as well.
Does drug treatment for depression with selective serotonin reuptake inhibitors (SSRIs) increase or decrease the risk of completed suicide? The question is important in part because of the substantial social costs associated with severe depression and suicide; by plausible clinical and behavioral arguments, SSRIs could have either positive or negative effects on suicide mortality. Randomized clinical trials on this topic have not been very informative because of small samples and other problems. Ludwig, Marcotte, and Norberg use data from 27 countries for up to twenty years to estimate the association between SSRI sales and suicide mortality using only the variation across countries in the timing of when SSRIs were first sold that can be explained by differences in the speed with which countries approve new drugs for sale more generally. This source of variation is plausibly unrelated to unmeasured mental health conditions or other factors that may influence both SSRI sales and suicide outcomes. The authors find that an increase in SSRI sales of 1 pill per capita (about a 13 percent increase over 1999 sales levels) is associated with a decline in suicide mortality of around 3-4 percent. These estimates imply a cost per statistical life saved of around $66,000, far below most other government interventions to improve health outcomes.
Policymakers increasingly rely on emissions trading programs to address the environmental problems caused by air pollution. If polluting firms in an emissions trading program face different economic regulations and investment incentives in their respective industries, then emissions markets may fail to minimize the total cost of achieving pollution reductions. Fowlie analyzes an emissions trading program that was introduced to reduce smog-causing pollution from large stationary sources (primarily electricity generators) in 19 eastern states. She develops and estimates a model of a plant's environmental compliance decision. Using variation in state-level electricity-industry restructuring activity, she identifies the effect of economic regulation on pollution permit market outcomes. She finds first that plants in states that have restructured electricity markets are less likely to adopt more capital intensive compliance options. Second, this economic regulation effect, together with a failure of the permit market to account for spatial variation in marginal damages from pollution, has resulted in increased health damages. Had permits been defined in terms of units of damages instead of units of emissions, more of the mandated emissions reductions would have occurred in restructured electricity markets, thereby avoiding on the order of hundreds of premature deaths per year.
Pfaff and his co-authors study how effectively information alone induces people to incur the cost of avoiding a health risk. Arsenic contamination of the groundwater in Bangladesh provides an unfortunate natural experiment. The authors find that the response to specific information about the safety of one's well is large and rapid: having an unsafe well raises by 0.5 the probability that the individual switches to another well within one year. The estimate of the impact of information is unbiased, because arsenic levels are uncorrelated with individual characteristics. The evidence suggests that a media campaign communicates general information about arsenic as effectively as a more expensive door-to-door effort does.
Tarui and Polasky study dynamic environmental regulation with endogenous choice of emissions abatement technology by regulated firms and exogenous learning about environmental damages from emissions by a regulator. Investments in abatement technology by one firm that lower abatement costs for the firm may also lower abatement costs of other firms (technology spillovers). There are two issues facing environmental regulators: setting regulation to achieve optimal abatement given available information; and setting regulations to achieve optimal investment given possible strategic investment and technology spillovers. The authors compare taxes, standards, and marketable permits under flexibility, in which the policy is updated upon learning new information, versus under commitment, in which the policy is not updated. Flexible policy allows regulation to reflect the most up-to-date information. However, under flexible policy, firms can invest strategically to influence future regulation. The authors find that an optimal solution for both investment and abatement decisions can be achieved under a flexible marketable permit scheme in which the permit allocation to a firm is increasing in the firm's investment. No other policy scheme, taxes, auctioned permits, or standards, under either flexibility or commitment, will guarantee achieving an optimal solution. These results run counter to prior literature that finds that price-based mechanisms are superior to quantity-based mechanisms, or that such comparisons depend on conditions.
Weitzman (1974) showed that prices are preferred to quantities when marginal benefits are relatively flat compared to marginal costs. Newell and Pizer extend this comparison to indexed policies, where quantities are proportional to an index, such as output. They find that policy preferences hinge on additional parameters describing the first and second moments of the index and the ex post optimal quantity level. When the ratio of these variables' coefficients of variation divided by their correlation is less than 2, indexed quantities are preferred to fixed quantities. A slightly more complex condition determines when indexed quantities are preferred to prices. Applied to the case of climate change, the authors find that quantities indexed to GDP are preferred to fixed quantities for about half of the 19 largest emitters, including the United States and China, while prices dominate for all other countries.
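The paper's headline condition lends itself to a quick numerical check. The sketch below reads the rule as the index's coefficient of variation over that of the ex post optimal quantity, divided by their correlation; that reading, the function name, and all numbers are invented for illustration and are not taken from Newell and Pizer:

```python
# Hedged sketch of the preference rule as summarized above: indexed
# quantities beat fixed quantities when the ratio of the coefficients
# of variation, divided by their correlation, is below 2. Which CV
# sits in the numerator is an assumption of this sketch.

def prefers_indexed_quantity(cv_index, cv_optimal_q, correlation):
    """True when (CV of index / CV of optimal quantity) / correlation < 2."""
    return (cv_index / cv_optimal_q) / correlation < 2

# Hypothetical emitter: GDP fluctuates little relative to the optimal
# cap and tracks it closely, so indexing to GDP looks attractive.
print(prefers_indexed_quantity(0.03, 0.04, 0.9))   # (0.75 / 0.9) < 2 -> True

# Hypothetical emitter with a volatile index weakly correlated with
# the optimal cap: fixed quantities are preferred instead.
print(prefers_indexed_quantity(0.10, 0.02, 0.5))   # (5.0 / 0.5) = 10 -> False
```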
Many communities are concerned about the reuse of old industrial land and believe that environmental liability is a hindrance to redevelopment. However, with land price adjustments, liability might not impede redevelopment. Existing literature has found price reductions in response to liability, but few studies have looked for an effect on redevelopment. Sigman studies variations in state liability rules - specifically, strict liability and joint and several liability - that affect the level and distribution of expected private cleanup costs. She explores the effects of this variation on industrial land prices and vacancy rates in a panel of cities across the United States from 1989 through 2000. In the estimated equations, joint and several liability reduces land prices and increases vacancy rates in central cities. Neither a price nor a quantity effect is estimated for strict liability. The results suggest that liability is at least partly capitalized, but still deters redevelopment.
In standard sticky price models, frequent and large price changes imply that the aggregate price level responds quickly to nominal shocks. Mackowiak and Wiederholt present a model in which price setting firms optimally decide what to pay attention to, subject to a constraint on information flow. When idiosyncratic conditions are more variable or more important than aggregate conditions, then firms pay more attention to idiosyncratic conditions than to aggregate conditions. When the authors calibrate the model to match the large average absolute size of price changes observed in the data, prices react fast and by large amounts to idiosyncratic shocks, but prices react only slowly and by small amounts to nominal shocks. Nominal shocks have persistent real effects. The authors use their model to investigate how the optimal allocation of attention and the dynamics of prices depend on the firms' environment.
Angeletos and Pavan analyze equilibrium and welfare for a tractable class of economies with externalities, strategic complementarity or substitutability, and incomplete information. They first characterize the equilibrium use of information and show how strategic payoff effects can heighten the sensitivity of equilibrium actions to noise. Then they characterize the efficient use of information, which allows them to address whether such heightened sensitivity is socially undesirable. They show how the efficient use of information trades off volatility for dispersion, and how this relates to the socially optimal degree of coordination. Finally, they show how the relation between equilibrium and efficient use of information is instrumental in understanding the social value of information. They conclude with a few applications, including production externalities, Keynesian frictions, inefficient fluctuations, efficient market competition, and large Cournot and Bertrand games.
Svensson and Williams examine optimal and other monetary policies in a linear-quadratic setup with a relatively general form of model uncertainty: so-called Markov jump-linear-quadratic systems extended to include forward-looking variables. The form of model uncertainty that their framework encompasses includes: simple i.i.d. model deviations; serially correlated model deviations; estimable regime-switching models; more complex structural uncertainty about very different models, for instance, backward- and forward-looking models; time-varying central-bank judgment about the state of model uncertainty; and so forth. They provide an algorithm for finding the optimal policy, as well as solutions for arbitrary policy functions. This allows them to compute and plot consistent distribution forecasts - fan charts - of target variables and instruments. Their methods hence extend certainty equivalence and "mean forecast targeting" to more general certainty non-equivalence and "distribution forecast targeting."
Inflation band targeting is a simpler form of an inflation contract that is widely used in practice and is more politically tenable than alternative strategies, such as appointment of a conservative central banker, or an optimal pecuniary inflation contract. Mishkin and Westelius provide the first theoretical treatment (that they know of) of how inflation target bands can be designed to mitigate the time-inconsistency problem. Their paper analyzes inflation target ranges in the context of a Barro-Gordon (1983) type model, but has a more realistic setting, in that the time-inconsistency problem stems not from the preferences of the central bank, as in Barro-Gordon, but instead from political pressures from the government. They demonstrate that inflation target bands, or a range, can achieve many of the benefits of these other strategies, providing a possible reason why this strategy has been used by so many central banks. Their theoretical model also enables them to outline how an inflation targeting range should be designed optimally and how it should change when there are changes in the nature of shocks to the economy.
Gali and Monacelli lay out a tractable model for fiscal and monetary policy analysis in a currency union, and analyze its implications for the optimal design of such policies. Monetary policy is conducted by a common central bank, which sets the interest rate for the union as a whole. Fiscal policy is implemented at the country level, through the choice of government spending level. The model incorporates country-specific shocks and nominal rigidities. Under these assumptions, the optimal monetary policy requires that inflation be stabilized at the union level. On the other hand, the relinquishment of an independent monetary policy, coupled with nominal price rigidities, generates a stabilization role for fiscal policy, one beyond the efficient provision of public goods. Interestingly, the stabilizing role for fiscal policy is shown to be desirable, not only from the viewpoint of each individual country, but also from that of the union as a whole. In addition, this paper offers some insights on two aspects of policy design in currency unions: the conditions for equilibrium determinacy and the effects of exogenous government spending variations.
Midrigan uses a large set of scanner price data collected in retail stores to document that: 1) although the average magnitude of price changes is large, a substantial number of price changes are small in absolute value; 2) the distribution of non-zero price changes has fat tails; and 3) stores tend to adjust prices of goods in narrow product categories simultaneously. He extends the standard menu costs model to a multi-product setting in which firms face economies of scale in the technology of adjusting prices. The model, because of its ability to replicate this additional set of microeconomic facts, can generate aggregate fluctuations much larger than those in standard menu costs economies.
In the last and current decade, the Wake County school district has reassigned numerous students to schools, moving up to 5 percent of the student population in any given year. Before 2000, the explicit goal was balancing schools' racial composition; after 2000, it was balancing schools' income composition. Throughout, finding space for the area's rapidly expanding student population was the most important concern. The reassignments generate a very large number of natural experiments in which students experience new peers in the classroom. As a matter of policy, exposure to an "experiment" should have been and actually appears to have been random, conditional on a student's fixed characteristics such as race and income. Using panel data on students before and after they experience policy-induced changes in peers, Hoxby and Weingarth explore which models of peer effects explain the data. Their results reject the models in which a peer has a homogeneous effect that does not depend on the student's own characteristics. They find support for models in which a student benefits from peers who are somewhat higher achieving than himself but not very different. A student benefits least from peers who are very different (in either positive or negative ways) and peers who create an unfocused (bimodal or "schizophrenic") classroom. These results also indicate that, once the effects of peers' achievement and peers' race are properly accounted for, a student's own ethnicity, income, and parental education have no, or at most very slight, effects.
Burke and Sass analyze a unique micro-level panel dataset encompassing all public school students in grades 3-10 in the state of Florida for each of the years 1999/2000 to 2003/4. The authors are able to directly link each student and teacher to a specific classroom and thus can identify each member of a student's classroom peer group. The ability to track individual students through multiple classrooms over time, and multiple classes for each teacher, enables the authors to control for many sources of spurious peer effects including fixed individual student characteristics and fixed teacher inputs, and allows them to compare the strength of peer effects across different groupings of peers and across grade levels. They are also able to compare the effects of fixed versus time-varying peer characteristics. The authors find mixed results on the importance of peers in the linear-in-means model, and resolve some of these apparent conflicts by considering non-linear specifications of peer effects. Their results suggest that some grouping by ability may create Pareto improvements over uniformly mixed classrooms. In general, they find that contemporaneous behavior wields stronger influence than peers' fixed characteristics.
The view that the returns to investments in public education are highest for early childhood interventions primarily stems from several influential randomized trials - Abecedarian, Perry, and the Early Training Project - that point to super-normal returns to preschool interventions. Anderson implements a unified statistical framework to present a de novo analysis of these experiments, focusing on core issues that have received little attention in previous analyses: treatment effect heterogeneity by gender, over-rejection of the null hypothesis due to multiple inference, and robustness of the findings to attrition and deviations from the experimental protocol. The primary finding of this reanalysis is that girls garnered substantial short- and long-term benefits from the interventions, particularly in the domain of total years of education. However, there were no significant long-term benefits for boys. These conclusions change little when allowance is made for attrition and possible violations of random assignment.
Battistin and Sianesi study the impact of misreported treatment status on the estimation of causal treatment effects. Although the bias of matching-type estimators computed from misclassified data cannot, in general, be signed, the authors show that the bias is most likely to be downward if misclassification does not depend on variables entering the selection-on-observables assumption, or if it only depends on such variables via the propensity score index. They extend the framework to multiple treatments and then provide semi-parametric bounds on the returns to a number of educational qualifications in the United Kingdom. By using the unique nature of their data, they assess the plausibility that the two biases - from measurement error and from omitted variables - cancel each other out.
Can changes in teacher pay encourage more able individuals to enter the teaching profession? So far, studies of the impact of pay on the aptitude distribution of teachers have provided mixed evidence on the extent to which altering teacher salaries represents a feasible solution to the teacher quality problem. Using a unique dataset of test scores for every individual admitted into an Australian university between 1989 and 2003, Leigh explores how changes in average pay or pay dispersion affect the decision to enter teacher education courses in the eight states and territories that make up Australia. A 1 percent rise in the salary of a starting teacher boosts the average aptitude of students entering teacher education courses by 0.6 percentile ranks, with the effect being strongest for those at the median. This result is robust to instrumenting for teacher pay using uniform salary schedules for public schools. Leigh also finds some evidence that pay dispersion in the non-teaching sector affects the aptitude of potential teachers.
Krashinsky uses a unique policy change in Canada's most populous province, Ontario, to provide direct evidence on the effect of reducing the length of high school on student performance in university. In 1999, the Ontario government eliminated the fifth year of education from its high schools, and mandated a new four-year program. This policy change created two cohorts of students who graduated from high school together and entered university with different amounts of high school education, thus making it possible to identify the effect of one extra year of high school education on university academic performance. Using several different econometric approaches on original survey data, Krashinsky demonstrates that students who receive one less year of high school education perform significantly worse than their counterparts in all subjects, even after accounting for the age difference between cohorts. Overall, both in terms of individual courses and grade point average, four-year graduates perform 5 percentage points, or approximately one-half of a letter grade, lower than undergraduates with one more year of high school education.
Although the theoretical case for universal pre-primary education is strong, the empirical foundation is weak. Berlinski, Galiani, and Gertler contribute to the empirical case by investigating the effect of a large expansion of universal pre-primary education on subsequent primary school performance in Argentina. They estimate that one year of pre-primary school increases average third grade test scores by 8 percent of the mean, or by 23 percent of the standard deviation, of the distribution of test scores. They also find that pre-primary school attendance positively affects students' self-control in the third grade as measured by behaviors such as attention, effort, class participation, and discipline.
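The two ways of stating the same effect pin down the ratio of the score distribution's standard deviation to its mean. A back-of-the-envelope check, in which the 50-point mean score is purely hypothetical and not from the paper:

```python
# The summary reports one effect two ways: 8 percent of the mean and
# 23 percent of the standard deviation. Equating them implies a fixed
# coefficient of variation (SD / mean) for the test-score distribution.
mean_share = 0.08
sd_share = 0.23
implied_cv = mean_share / sd_share          # SD / mean, about 0.348

# With a hypothetical mean score of 50 points (an assumption here):
mean_score = 50.0
effect_points = mean_share * mean_score     # 4.0 points gained
implied_sd = effect_points / sd_share       # about 17.4 points
print(round(implied_cv, 3), effect_points, round(implied_sd, 1))
```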
The average age at which children enter kindergarten has climbed steadily in the past 25 years. Using data from the Early Childhood Longitudinal Study and the National Educational Longitudinal Study of 1988, Elder and Lubotsky examine the effect of age at entrance to kindergarten on achievement test performance, grade retention, and diagnoses of learning disabilities. State kindergarten cutoff dates generate two sources of plausibly exogenous variation in the age at which children enter kindergarten, which these authors use to estimate instrumental variables models of the effect of entrance age on outcomes. Consistent with recent work in the United States and other countries, they find that children who are older when they enter kindergarten have higher reading and math test scores. Older children are also less likely to be held back later and less likely to be diagnosed with a learning disability. Next the authors ascertain why entrance age affects achievement. They find evidence of substantial heterogeneity in the impact of entrance age, with richer children benefiting considerably more than poorer children. This is consistent with the idea that wealthy parents are more willing or more able to develop their children's human capital prior to the start of kindergarten. Although older children in kindergarten tend to be taller, the authors find no evidence of heterogeneity in this correlation. They conclude that older children tend to have more cognitive skills and better preparation prior to entering kindergarten, and that physical maturity is not the primary cause of the entrance age effect. Finally, school entrance cutoffs also influence the average age of classes, and the authors are able to separately identify the influence of an individual's own age at entry from the influence of his or her classmates' age. 
They find some evidence for both positive and negative peer effects: conditional on a child's own age, older classmates increase test scores, but also increase the probability of being retained in grade or being diagnosed with a learning disability.
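The identification strategy, using the cutoff-implied entrance age as an instrument for actual entrance age, can be illustrated with a toy simulation. Everything below (variable names, coefficients, the confounder story) is invented to show the mechanics and is not the authors' specification or data:

```python
import random

random.seed(0)
n = 10_000

# z: entrance age implied by the state cutoff date (the instrument).
z = [random.uniform(4.75, 5.75) for _ in range(n)]
# u: unobserved factor (e.g., parental investment) that shifts both
# actual entrance age and test scores, biasing a naive regression.
u = [random.gauss(0, 1) for _ in range(n)]
# x: actual entrance age, moved by the cutoff and by the confounder.
x = [zi + 0.3 * ui + random.gauss(0, 0.1) for zi, ui in zip(z, u)]
# y: test score with a true entrance-age effect of 2.0 per year.
y = [2.0 * xi - 1.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)  # naive slope, distorted by u
beta_iv = cov(z, y) / cov(z, x)   # IV (Wald) estimate, close to 2.0
print(round(beta_ols, 2), round(beta_iv, 2))
```

With these made-up parameters, the naive slope lands far from the true effect of 2.0 while the IV estimate recovers it, which is the point of using cutoff-induced variation.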
The use of experimental designs has enabled researchers to identify social interactions or neighborhood effects on individual behavior. However, a remaining obstacle in the literature has been the inability to distinguish between peer effects that are determined by a person's reference group behavior (endogenous peer effects) and effects that are generated as a result of specific background characteristics of the groups themselves (contextual peer effects). Bobonis and Finan identify and estimate endogenous peer effects on children's school participation decisions using evidence from the Progresa program. Under Progresa, payments were provided to poor mothers conditional upon school enrollment of their children. Because program eligibility was randomly assigned, the authors use this exogenous variation in school participation to identify peer effects on the school enrollment of ineligible children residing in the same communities. They find that peers have considerable influence on the enrollment decision of program-ineligible children, and these effects are nonlinear and concentrated among children from relatively poorer households. These findings imply that educational policies aimed at encouraging enrollment can produce large social multiplier effects.
A growing literature documents how the availability of oral contraception affected the outcomes of young women in the 1960s and 1970s, but the effects of oral contraception on women's fertility remain disputed. In this paper, Ananat and Hungerman examine whether increased access to the pill affected women's fertility and whether the pill substituted for other fertility technologies. Using census data, the authors document that access to the pill led to falling short-term fertility rates for young women. They then use two different datasets to examine the impact of legal oral contraception on unwanted pregnancies, and in particular on abortions, in order to identify whether these fertility technologies were viewed to some extent as substitutable by young women. In both datasets, the authors find that access to oral contraception lowers the number of abortions for young women.
Fort assesses the causal effects of education on the timing of first births, allowing for heterogeneity in the effects while controlling for self-selection of women into education. Identification relies on exogenous variation in schooling induced by a mandatory school reform rolled out nationwide in Italy in the early 1960s. Findings based on Census data (Italy, 1981) suggest that a large fraction of the women affected by the reform postpone the time of the first birth but catch up with this fertility delay before turning 26. There is some indication that the fertility behavior of these women is different from that of the average woman in the population.
Gender inequality in South Asia is an important policy issue; gender imbalances in mortality have been of particular concern. Policymakers often argue that increasing the level of development and access to health care are crucial to addressing this inequality. Oster analyzes the relationship between access to child health investments and gender inequality in those investments in India. The first part of her paper explores the proximate causes of the gender imbalance in mortality in India. She finds that a large share of the gender imbalance (about 30 percent) can be explained by differential access to vaccination. The second part of the paper estimates the effect of changes in access to vaccination on gender inequality. Oster argues that the direction of these effects is not obvious. A simple model of (gender-biased) parental investments, and empirical work using variation in access to vaccination, both suggest that initial increases in vaccination availability from low levels will increase gender inequality; further increases will then decrease inequality. This non-monotonic pattern is also reflected in differences in mortality. This result may shed light on the contrast between the cross-sectional and time-series evidence on gender and development.
Low-income students and students whose parents have not attended college typically are less likely than middle- and upper-income students to complete high school and attend college, and are thus less likely to reap the benefits of attending college. By providing information on the types of high school courses students should take to prepare for college and on the financial aid available to pay for college, the Talent Search program seeks to address substantial informational hurdles. Using a large amount of administrative data compiled in Florida, Indiana, and Texas for one complete cohort of students, Constantine and Seftor were able to use complex propensity score matching models to identify nonparticipating students who were most similar to Talent Search participants. They find that Talent Search participants were more likely than comparison students to apply for federal financial aid and enroll in public postsecondary institutions in all three states. These findings suggest that assisting low-income students who have college aspirations to overcome information barriers, an important objective of the Talent Search program, may be effective in helping these students achieve their aspirations.
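A minimal sketch of the nearest-neighbor flavor of propensity score matching, the general technique the summary names (the scores below are hypothetical predicted participation probabilities, and the function is an illustration, not Constantine and Seftor's model):

```python
def match_nearest(treated_scores, control_scores):
    """Pair each treated unit with the control whose propensity score
    is closest; returns (treated_index, control_index) pairs."""
    matches = []
    for i, ps in enumerate(treated_scores):
        j = min(range(len(control_scores)),
                key=lambda k: abs(control_scores[k] - ps))
        matches.append((i, j))
    return matches

# Hypothetical propensity scores for three Talent Search participants
# and a pool of four observably similar nonparticipants.
participants = [0.82, 0.40, 0.65]
nonparticipants = [0.10, 0.43, 0.79, 0.60]
print(match_nearest(participants, nonparticipants))  # [(0, 2), (1, 1), (2, 3)]
```

Outcome differences (e.g., rates of applying for federal financial aid) would then be compared within the matched pairs.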
Teaching is the central task that colleges and universities perform for students. Differences in teaching quality, across instructors and institutions, thus may play a key role in determining students' academic experiences, interests, and successful transition into the labor force. Yet research about the importance of teacher quality focuses almost exclusively on the primary or secondary level. Hoffmann and Oreopoulos use administrative data from a large Canadian university between 1996 and 2005, matched to course instructors and instructors' teaching evaluations. Their main approach is to use the fact that many entering first-year students end up with different instructors because of scheduling conflicts or year-to-year replacements. They estimate the overall influence of postsecondary instructors on course dropout, enrollment, and grade outcomes by estimating the variance of the value added that instructors contribute to various outcome measures of academic achievement. They also examine more direct effects by estimating the consequences of entering a class with an instructor who ranks high or low in subjective teaching evaluations. Their main finding is that the variance among first-year students taking the same courses but with different instructors is small, but not trivial: a two standard deviation switch in instructor quality would be expected to lower the likelihood of dropping the course by 1.5 percentage points, and increase the number of course subjects enrolled in the following year by less than 0.05 of its standard deviation. They also find little evidence that student evaluations of teacher quality significantly relate to student achievement.
Adams and Clemmons present new evidence on research and teaching productivity in universities. Their findings are based on a panel that covers 1981-99 and includes 102 top U.S. schools. Faculty size grows at 0.6 percent per year compared with growth of 4.9 percent in the industrial science and engineering workforce. Measured by papers and citations per researcher, productivity grows at 1.4-6.7 percent per year and productivity and its rate of growth are higher in private than public universities. Measured by baccalaureate and graduate degrees per teacher, teaching productivity grows at 0.8-1.1 percent per year and growth is faster in public than private universities. A decomposition analysis shows that growth in research productivity within universities exceeds overall growth. This is because research shares grow more rapidly in universities whose productivity grows less rapidly. Likewise, the research share of public universities increases even though productivity grows less rapidly in public universities. Together, these findings imply that allocative efficiency of U.S. higher education may have declined during the late twentieth century. Regression analysis of individual universities finds that R and D stock, endowment, and post-doctoral students increase research productivity, that the effect of nonfederal R and D stock is less, and that research is subject to decreasing returns. Since the nonfederal R and D share grows and is much higher in public universities, this could account for some of the rising allocative inefficiency. The evidence for decreasing returns in research also suggests limits on scale that restrict the ability of more efficient institutions to expand. Finally, the data strongly hint at growing financial pressures on U.S. public universities.
School accountability -- the practice of evaluating schools on the basis of the observed performance of students and rewarding and punishing schools according to these evaluations -- is ubiquitous in the world today, with nations on every continent experimenting with such policies. While there has been considerable research attention paid to the effects of school accountability plans on the standardized test scores of average students or low-performing students, as well as evidence concerning the incentives embedded within school accountability plans, there has been no published research to date investigating the effects of these plans on the high end of the academic distribution: those students who would surely have attained proficiency in the absence of school accountability plans. Figlio, Donovan, and Rush seek to fill this void; they exploit data from a state that changed the basis of its accountability system in 1999. This change directly influenced a large number of schools that immediately either transitioned from being threatened with sanctions to not being threatened at all, or vice versa. Using this identification strategy, they can measure the impact on students of the school they attend either becoming threatened or becoming less threatened. In order to implement this identification strategy, the authors use a remarkable dataset from a large selective public university in the state in question. They observe that school accountability plans have the potential to substantially affect high-achieving students' performance and study habits in college. They observe that accountability systems, whether they are based on a low-level test of basic skills or based on a higher-level standards-based assessment, tend to lead to increased cramming behavior and poorer study habits in college. However, the two types of accountability systems apparently lead to very different outcomes in college, as measured by course grades. 
An accountability system based on a low-level test of basic skills is associated with unambiguously worse performance by students in college. On the other hand, an accountability system based on a rigorous standards-based assessment apparently results in significantly improved mathematics performance in college, as well as improved performance in other technical mathematics-based courses such as chemistry, economics, engineering and physics that were not directly covered by the accountability system, with no ill effects on performance in less technical courses. These results indicate that the design of accountability systems is critically important in determining the degree to which high-performing students obtain skills that help them succeed in college.
Time to completion of the baccalaureate degree has increased markedly among college graduates in the United States over the last two decades. Between the cohorts graduating from high school in 1972 and 1992, the percent of graduates receiving a degree within 4 years dropped from 57.6 percent to 44 percent. Among the reasons that students may extend their collegiate experiences beyond the four-year norm are: the need for academic remediation, which lengthens the course of study; inability to finance full-time attendance, requiring part-time enrollment and employment; or simply a desire to extend the consumption experience of collegiate life. The consequences of extended time-to-degree may include individual loss of earnings, as well as the social cost of potentially reduced economic growth as the supply of college-educated workers is limited by tradeoffs between school and work. Bound, Lovenheim, and Turner find that the increase in time to degree is localized among graduates of non-selective public colleges and universities. They find no evidence that changes in student characteristics, including pre-collegiate achievement or parental characteristics, explain the observed increase in time to degree. The increase in time to degree has been the largest within states that have the largest increases in cohort size, consistent with dilution in resources per student at public colleges. As proximate causes, the authors find evidence of increased hours employed and greater propensity to transfer among institutions.
Two recent U.S. Supreme Court rulings have affirmed the constitutionality of certain types of racial preferences in college admissions. The main premise behind the Supreme Court's decisions was that affirmative action benefits not only the minority students targeted by the policy, but majority students as well. The purported benefit to majority students is rooted in an increased likelihood of inter-racial contact associated with increased minority representation on campus. However, affirmative action itself may not foster these relationships. Arcidiacono, Khan, and Vigdor show that inter-racial relationships are more likely to form among students with comparable test scores. Their simulations suggest that less aggressive admissions policies would actually increase inter-racial contact by reducing the disparities between majority and minority student characteristics.
As private insurers and the government attempt to constrain elderly medical spending in the coming years, a first-order consideration is the price sensitivity of the medical consumption of the elderly. For the non-elderly, the famous RAND Health Insurance Experiment (HIE) addressed the question of the sensitivity of medical consumption to its price, but RAND did not include the elderly in its HIE. The purpose of Gruber, Chandra, and McKnight's paper is to remedy this deficiency by studying a major set of copayment changes in a modern, managed care environment. The California Public Employees' Retirement System (CalPERS) enacted a series of substantial copayment increases for both active employees and retirees, first for the state's PPO plans, and then for its HMO plans. The result was a staggered set of copayment changes that allow the authors to carefully evaluate the impact on the medical care utilization of the elderly. To evaluate these policy changes, the authors have compiled a comprehensive database of all medical claims for those enrolled continuously in several of the CalPERS plans. They find that both physician office visits and prescription drug utilization are price sensitive among the elderly, although the elasticities are modest, as with the RAND HIE. Unlike the HIE, however, this paper finds significant "offset" effects in terms of increased hospital utilization in response to the combination of higher copayments for physicians and prescription drugs. The most chronically ill individuals are equally responsive to copay increases in terms of their reduced use of physician care or prescription drugs, but there are much larger offset effects for these populations, so that there is little net gain from higher copayments for that group. This suggests that copayment increases targeted by health status would be part of an optimal health insurance arrangement for the elderly.
The welfare implications of variations in how physicians treat patients depend on whether patients have different optimal treatments and are treated by physicians who are likely to provide those optimal treatments. Epstein, Ketcham, and Nicholson examine the extent to which expectant mothers direct themselves, or are directed to, particular physicians based on their preferences for physicians' treatment styles and the patients' health conditions. The authors capitalize on the largely random assignment of weekend patients to physicians because of call schedules, and on the weekday concentration of induced deliveries and scheduled c-sections, both of which give a patient the opportunity to choose her physician. Using Florida and New York discharge data from 1999 to 2004 linked to information on physician practices, the authors find that one-third of the variation in treatment styles across physicians is attributable to patient-physician matching on unobserved characteristics, which implies that a considerable part of the variation in medical treatment rates may enhance welfare. In one-third of the group practices, certain physicians specialize in treating weekday patients with relatively high observed risk and others with relatively low observed risk, and there is more than twice as much variation across physicians within a practice in patients' observed health than one would expect if weekday patients were randomly assigned.
Given the rapid growth in health care spending that is often attributed to technological change, many private and public institutions are grappling with how best to assess and adopt new health care technologies. The leading technology adoption criteria proposed in theory and used in practice involve so-called "cost-effectiveness" measures. However, little is known about the dynamic efficiency implications of such criteria, in particular how they influence the R and D investments that make technologies available in the first place. Philipson and Jena argue that such criteria implicitly concern maximizing consumer surplus, which is often consistent with maximizing static efficiency after an innovation has been developed. Dynamic efficiency, however, concerns aligning the social costs and benefits of R and D and is therefore determined by how much of the social surplus from the new technology is appropriated as producer surplus. The authors analyze the relationship between cost-effectiveness measures and the degree of surplus appropriation by innovators driving dynamic efficiency. They illustrate how to estimate the two for the new HIV/AIDS therapies that entered the market after the late 1980s and find that only 5 percent of the social surplus is appropriated by innovators. They show how this finding can be generalized to other existing cost-effectiveness estimates by deriving how those estimates identify innovator appropriation for a set of studies of over 200 drugs. They find that these studies implicitly support a low degree of appropriation as well. Despite the high annual cost of drugs to patients, very low shares of social surplus may go to innovators, which may imply that cost-effectiveness is too high in a dynamic efficiency sense.
Dafny and Dranove evaluate the possibility that a failure to exploit regulatory loopholes could motivate corporate takeovers. They use the U.S. hospital industry in 1985-96 as a case study. A 1988 change in Medicare rules widened a pre-existing loophole in the Medicare payment system, presenting hospitals with an opportunity to increase operating margins by 5 or more percentage points simply by "upcoding" patients to more lucrative codes. The authors find that "room to upcode" is a statistically and economically significant predictor of for-profit but not of not-for-profit acquisitions in the period immediately following this policy change. They also find that hospitals acquired by for-profit systems subsequently upcoded more than a sample of similar hospitals that were not acquired, as identified by propensity scores. These results suggest that firms that do not fully exploit regulatory loopholes are vulnerable to takeover.
Insurance for prescription drugs is characterized by two regimes: flat copayments and variable coinsurance. Dor and Encinosa develop a simple model to show that patient compliance is lower under coinsurance because of uncertainty in cost sharing. Empirically, the authors derive comparable models for compliance behavior in the two regimes. Using claims data from nine large firms, they focus on diabetes, a common chronic condition that leads to severe complications when inappropriately treated. In the coinsurance model, an increase in the coinsurance rate from 20 to 75 percent increases the share of persons who never comply by nearly 10 percent and reduces the share of fully compliant persons by almost 25 percent. In the copayment model, an increase in the copayment from $6 to $10 results in a 6.2 percent increase in the share of never-compliers, and a concomitant 9 percent reduction in the share of full compliers. Similar results hold when the level of cost-sharing is held constant across regimes. While non-compliance reduces expenditures on prescription drugs, it may also lead to increases in indirect medical costs attributable to avertable complications. Using available aggregate estimates of the cost of diabetic complications, the authors calculate that the $6-$10 increase in copayment would have the direct effect of reducing national drug spending for diabetes by $125 million. However, the increase in non-compliance rates is expected to increase the rate of diabetic complications, resulting in an additional $360 million in treatment costs. These results suggest that both private and public payers may be able to reduce overall medical costs by switching from coinsurance to copayments in prescription drug plans.
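The net cost arithmetic implied by these aggregate figures can be made explicit. A minimal sketch using only the two dollar amounts quoted above:

```python
# Dollar figures quoted in the summary for the $6-to-$10 copayment increase.
drug_saving = 125_000_000         # direct reduction in national diabetes drug spending
complication_cost = 360_000_000   # added treatment costs from increased non-compliance

# Net effect on overall medical spending: costs rise on balance.
net_increase = complication_cost - drug_saving
print(f"Net increase in medical costs: ${net_increase / 1e6:.0f} million")  # $235 million
```

On these numbers, the copayment increase raises overall medical spending by $235 million net, which is why higher cost sharing can backfire for a chronic condition like diabetes.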
Mortality rates have fallen dramatically over time, starting in a few countries in the eighteenth century, and continuing to fall today. In just the past century, life expectancy has increased by over 30 years. At the same time, mortality rates remain much higher in poor countries, with a difference in life expectancy between rich and poor countries today of about 30 years. This difference persists despite the remarkable progress in health improvement in the last half century, at least until the HIV/AIDS pandemic. In both the time-series and the cross-section data, there is a strong correlation between income per capita and mortality rates, a correlation that also exists within countries, where richer, better-educated people live longer. Cutler, Deaton, and Lleras-Muney review the determinants of these patterns: over history, over countries, and across groups within countries. While there is no consensus about the causal mechanisms, they tentatively identify the application of scientific advance and technical progress (some of which is induced by income and facilitated by education) as the ultimate determinant of health. Such an explanation allows a consistent interpretation of the historical, cross-country, and within-country evidence. They downplay direct causal mechanisms running from income to health.
Spolaore and Wacziarg study the barriers to the diffusion of development across countries over the very long run. They find that genetic distance, a measure associated with the amount of time elapsed since two populations' last common ancestors, bears a statistically and economically significant correlation with pairwise income differences, even after controlling for various measures of geographical isolation, and other cultural, climatic, and historical differences. These results hold not only for contemporary income differences but also for income differences measured since 1500, and for income differences within Europe. Similar patterns of coefficients exist for the proximate determinants of income differences, particularly for differences in human capital and institutions. This paper discusses the economic mechanisms that are consistent with these facts. It presents a framework in which differences in human characteristics transmitted across generations -- including culturally transmitted characteristics -- can affect income differences by creating barriers to the diffusion of innovations, even when they have no direct effect on productivity. The empirical evidence over time and space is consistent with this "barriers" interpretation.
What does genetic distance between populations measure? And, is it a good proxy for culture as well as a valid instrument for disentangling the causal relationship between culture and economic outcomes? Giuliano, Spilimbergo, and Tonon examine how economists may interpret the correlation between genetic distance and cultural and economic variables. They argue that currently used measures of genetic distance are a poor proxy for cultural differences. Rather, genetic distance, being determined among other things by geographical barriers, reflects transport costs between countries. To demonstrate this point, the authors construct a new measure of geographic distance within Europe that takes into account the existence of major geographical barriers. They show that this measure explains both genetic distance and trade between European countries.
Artificial states are those in which political borders do not coincide with a division of nationalities desired by the people on the ground. Alesina, Easterly, and Matuszeski propose and compute for all countries in the world two new measures of the degree to which states are artificial. One is based on measuring how borders split ethnic groups into two separate adjacent countries. The other measures how straight land borders are, under the assumption that straight land borders are more likely to be artificial. The authors then show that these two measures are highly correlated with several measures of political and economic success.
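The border-straightness idea can be illustrated with a toy measure: the ratio of the straight-line distance between a border's endpoints to its traced length, which equals 1 for a perfectly straight border. This is only an illustrative proxy, not the measure Alesina, Easterly, and Matuszeski actually compute, and the coordinates below are invented:

```python
import math

def straightness(border):
    """Ratio of endpoint-to-endpoint distance to total border length.

    border: list of (x, y) points tracing a land border. A value near 1
    means the border is close to a straight line and, under the paper's
    assumption, more likely to be artificial.
    """
    chord = math.dist(border[0], border[-1])
    length = sum(math.dist(p, q) for p, q in zip(border, border[1:]))
    return chord / length

ruler = [(0, 0), (1, 0), (2, 0), (3, 0)]   # border drawn with a ruler
river = [(0, 0), (1, 1), (2, -1), (3, 0)]  # border that follows terrain
print(straightness(ruler), straightness(river))
```

The ruler-drawn border scores exactly 1.0, while the meandering one scores well below it, matching the intuition that squiggly borders track features on the ground.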
Benmelech and Moskowitz study the political economy of financial regulation by examining the determinants and effects of U.S. state usury laws during the 18th and 19th centuries. They argue that regulation is the outcome of private interests using the coercive power of the state to extract rents from other groups. They find that strictness of usury laws coexists with other exclusionary policies, such as suffrage laws and the lack of general incorporation or free banking laws, which likewise respond less to competitive pressures for repeal. Furthermore, the same determinants of financial regulation that favor one group and limit access to others are associated with lower future economic growth rates, highlighting the endogeneity of financial development and growth.
Across countries, education and democracy are highly correlated. Glaeser, Shleifer, and Ponzetto empirically motivate and then model a causal mechanism that explains this correlation. In their model, schooling teaches people to interact with others and raises the benefits of civic participation, including voting and organizing. In the battle between democracy and dictatorship, democracy has a wide potential base of support but offers only weak incentives to its defenders. Dictatorship provides stronger incentives, but to a narrower base. As education raises the benefits of civic participation, it also raises the support for more democratic regimes relative to dictatorships. This increases the likelihood of democratic revolutions against dictatorships, and reduces that of successful anti-democratic coups.
Why is underdevelopment so persistent? One explanation is that poor countries do not have institutions that can support growth. Because institutions (both good and bad) are persistent, underdevelopment is persistent. An alternative view is that underdevelopment comes from poor education. Neither explanation is fully satisfactory, the first because it does not explain why poor economic institutions persist even in fairly democratic but poor societies, and the second because it does not explain why poor education is so persistent. Rajan and Zingales try to reconcile these two views by arguing that the underlying cause of underdevelopment is the initial distribution of factor endowments. Under certain circumstances, this leads to self-interested constituencies that, in equilibrium, perpetuate the status quo. In other words, poor education policy might well be the proximate cause of underdevelopment, but the deeper (and more long lasting) cause is the initial conditions (like the distribution of education) that determine political constituencies, their power, and their incentives. Although the initial conditions may well be a legacy of the colonial past, and may well create a perverse political equilibrium of stagnation, persistence does not require the presence of coercive political institutions. The authors present some suggestive empirical evidence. On the one hand, such an analysis offers hope that the destiny of societies is not preordained by the institutions they inherited through historical accident. On the other hand, it suggests that we need to understand better how to alter factor endowments when societies may not have the internal will to do so.
Analysis of high-frequency data shows that yields on Treasury notes are highly volatile around FOMC announcements, even though the average effects of fed funds target rate surprises on such yields are fairly modest. Fleming and Piazzesi partially resolve this puzzle by showing that yield changes seem to depend not only on the surprises themselves, but also on the shape of the yield curve at the time of announcement. They also show that the reaction of yields to FOMC announcements is sluggish, but that much of this sluggishness can be attributed to the few inter-meeting moves. Market liquidity around FOMC announcements behaves in a manner generally consistent with that found for other announcements, although the richness of FOMC announcement release practices induces differences in the market-adjustment process.
Earlier literature on capital structure has only touched on a one-way causal relation between liquidity and leverage (that is, liquidity affects leverage). Frieder and Martell use a two-stage least squares analysis to explore the notion that these variables are jointly determined. Consistent with the idea that debt forces managers to make better investment decisions, they find that as leverage increases, equity bid-ask spreads decrease. Using the fitted values from the first-stage regression, the second-stage results further imply that as liquidity decreases, leverage increases. This is consistent with the notion that managers rely on debt financing when the cost of equity financing increases. While controlling for the endogenous relationship between spreads and leverage greatly reduces the impact of spreads on leverage, the results here suggest that a one-standard-deviation increase in spreads results in a 3 percent increase in leverage. Not only do these results add to the understanding of the complex relationship between capital structure and liquidity, but they also shed light on the determinants of leverage and bid-ask spreads.
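The two-stage logic can be sketched on synthetic data. The instrument, coefficients, and variable names below are illustrative, not the authors' specification: an instrument shifts spreads but affects leverage only through them, so regressing leverage on the first-stage fitted values recovers the structural effect that naive OLS misses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Illustrative data-generating process: z is an instrument that moves
# bid-ask spreads but affects leverage only through them, while an
# unobserved factor u drives both variables (the endogeneity problem).
z = rng.normal(size=n)
u = rng.normal(size=n)
spread = 0.5 * z + u + rng.normal(size=n)
leverage = 0.3 * spread - u + rng.normal(size=n)   # true effect is 0.3

def ols(y, x):
    """Intercept and slope from a simple OLS regression of y on x."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

_, b_ols = ols(leverage, spread)        # naive OLS, biased by u

# First stage: project the endogenous spread on the instrument.
a0, a1 = ols(spread, z)
spread_hat = a0 + a1 * z

# Second stage: regress leverage on the fitted (exogenous) spread.
_, b_2sls = ols(leverage, spread_hat)
print(f"OLS: {b_ols:.2f}  2SLS: {b_2sls:.2f}")
```

Here the naive slope is pulled well below the true 0.3 by the common factor, while the instrumented estimate recovers it. Because Frieder and Martell treat spreads and leverage as jointly determined, their actual system gives each equation its own first stage.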
Goettler, Parlour, and Rajan model endogenous information acquisition in a limit-order market for a single financial asset. The asset has a common value and each trader has a private value for it. Traders randomly arrive at the market, after choosing whether to purchase information about the common value. They may either post prices or accept posted prices. If a trader's order has not executed, then he randomly re-enters the market, and may change his previous order. The model is thus a dynamic stochastic game with asymmetric information. The authors numerically solve for the equilibrium of the trading game, and characterize equilibria with endogenous information acquisition. Agents with the lowest intrinsic benefit from trade have the highest value for information and also tend to supply liquidity. As a result, market observables, such as bid and ask quotes, in addition to transaction prices, are informative about the common value of the asset. Asymmetric information creates a volatility multiplier (prices are more volatile than the fundamental value of the asset) that is especially severe when the fundamental volatility is high. In the latter case, the time to execution of each type of agent increases, and there is a change in the composition of trader types in the market at any given time.
Gupta, Singh, and Zebedee examine whether banks price expected liquidity in syndicated loan spreads. Using extensive data on U.S. term loans, they show that banks have the ability to discern the expected liquidity of a loan at the time of origination. More importantly, they show that loans with higher expected liquidity have significantly lower spreads at origination, after controlling for other determinants of loan spreads such as borrower, loan, syndicate, and macroeconomic variables. Therefore, they identify a new factor (expected liquidity) being priced in syndicated term loans, which, in the aggregate, results in an annual saving of $1.5 billion to the borrowing firms in their sample. Thus, for the first time in the literature, they document a link between the secondary market liquidity of an asset and its pricing in the primary market.
Using eleven years of NYSE specialist data, Hendershott and Seasholes examine daily inventory/asset price dynamics. The unique length and breadth of their sample enables the first longer-horizon testing of market-making inventory models. They confirm such models' predictions: that specialists' positions are negatively correlated with past price changes and positively correlated with subsequent changes. A portfolio that is long in stocks with the highest inventory positions and short in stocks with the lowest inventory positions has returns of 0.10 percent and 0.33 percent over the next one and five days, respectively. These findings empirically validate the causal mechanism (liquidity supplier inventory) that underlies models linking liquidity provision and asset prices. Inventories complement past returns when predicting return reversals. A portfolio long on high-inventory/low-return stocks and short on low-inventory/high-return stocks yields 1.05 percent over the following five days. Order imbalances calculated from signing trades relative to quotes also predict reversals and are complementary to inventories and past returns. Finally, specialist inventories can be used to predict return continuations over a one-day horizon.
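The sorting exercise behind those long-short portfolio figures can be sketched as follows; the numbers are synthetic and chosen only to reproduce the predicted sign, not the authors' estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stocks = 200

# Synthetic cross-section: subsequent returns load positively on today's
# specialist inventory, as inventory models predict (liquidity suppliers
# are compensated for absorbing positions).
inventory = rng.normal(size=n_stocks)
next_ret = 0.003 * inventory + 0.01 * rng.normal(size=n_stocks)

# Go long the decile of stocks with the highest inventory positions and
# short the decile with the lowest, then measure the portfolio return.
order = np.argsort(inventory)
decile = n_stocks // 10
short_leg = next_ret[order[:decile]].mean()
long_leg = next_ret[order[-decile:]].mean()
long_short = long_leg - short_leg
print(f"Long-short next-day return: {long_short:.2%}")
```

Under this data-generating process the long-short return is positive, mirroring the paper's finding that high-inventory stocks outperform low-inventory stocks over the following days.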
Recent research has shown that default risk explains only part of the total yield spread on risky corporate bonds relative to their riskless benchmarks. One candidate for the unexplained portion of the spread is a premium for the illiquidity in the corporate bond market. Using the portfolio holdings database of the largest custodian in the market, Nashikkar and Subrahmanyam relate the liquidity of corporate bonds, as measured by their ease of market access, to the non-default component of their corporate bond yields. They estimate the ease of access of a bond using a recently developed measure called latent liquidity, which weights the turnover of funds holding the bond by their proportional holdings of the bond. They use the credit default swap (CDS) prices of the bond issuer to control for the credit risk of a bond. At an aggregate level, they find a contemporaneous relationship between aggregate latent liquidity and the average non-default component in corporate bond prices. Additionally, for individual bonds, they find that bonds with higher latent liquidity have a lower non-default component of their yield spread. Bonds that are held by funds exhibiting greater buying activity command lower spreads (are more expensive), while the opposite is true for those that exhibit greater selling activity. Also, the liquidity in the CDS market has an impact on bond pricing, over and above bond-specific liquidity effects.
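As described, latent liquidity is a holdings-weighted average of the turnover of the funds holding a bond. A minimal sketch of that weighting, with hypothetical fund names and figures:

```python
def latent_liquidity(holdings, turnover):
    """Holdings-weighted average turnover across the funds holding a bond.

    holdings: dict mapping fund -> par amount of the bond held
    turnover: dict mapping fund -> that fund's portfolio turnover rate
    """
    total = sum(holdings.values())
    return sum(h / total * turnover[f] for f, h in holdings.items())

# Hypothetical example: a bond held mostly by a high-turnover fund scores
# as easier to access than one locked up in buy-and-hold accounts.
holdings = {"fund_a": 60.0, "fund_b": 40.0}
turnover = {"fund_a": 2.0, "fund_b": 0.5}    # portfolio turns per year
print(latent_liquidity(holdings, turnover))  # 0.6*2.0 + 0.4*0.5 = 1.4
```

A higher score indicates the bond sits with active traders and is easier to access in the market, which is the sense in which the authors find such bonds carry a lower non-default yield component.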