Recent Posts

Blog: How to lie with (FDA) statistics

May 2, 2014 by Fred D. Ledley

The FDA approved 27 new drugs in 2013. Is this a downward trend? Is it an upward trend? Does it suggest that the pharmaceutical industry is failing, or that genomics is finally paying dividends? We revisit Darrell Huff’s 1954 classic How to Lie with Statistics for insights into these pressing questions.

The headline of a recent Associated Press release read “The Big Story, New drug approvals from FDA declined in 2013.” (1) Indeed, the FDA approved only 27 New Molecular Entities (NMEs) in 2013, compared to 39 in 2012. This discouraging news was widely reported in mainstream and business media.

The AP release quotes FDA officials as saying that “the tally of innovative medications approved last year is in line with the historical trend.” The official FDA report was more circumspect, stating that “CDER approved 27 NMEs in 2013, which is similar to average totals of other years from this time period. For instance from 2004 through 2012, CDER has averaged about 26 NME approvals per year. In 2012, CDER approved 39 NMEs, but this was an unusually high number….” (2).

The Associated Press repeated the FDA’s analysis, but then stated that “Experts attribute the recent uptick to a combination of factors: a stable, well-funded FDA and a newly established research model among drug makers…” A commentator on the website of the National Venture Capital Association, extolling the importance of regulatory reforms championed by the Association, echoed this view, writing: “…we believe that the upward trend in drug approvals over the past several years has increased investor confidence in the FDA's review and approval process” (3).

Reading the conflicting interpretations of the FDA data, I was reminded of Darrell Huff’s classic book How to Lie with Statistics (W.W. Norton, 1954) and his statement that “The secret language of statistics, so appealing in a fact-minded culture, is employed to sensationalize, inflate, confuse, and oversimplify.” Huff’s book illustrated the subjectivity that can be introduced by selective data analysis, graphical representation, and the association of correlations with trends or causation.

For the record, the FDA has reported historical data (4) on the number of new drugs (New Chemical Entities and Biologicals) approved each year, as shown in the attached figures. The top panel shows the raw data with three possible “trends” that could be inferred from selectively chosen linear regressions. The first, 1970-2013, might be interpreted as representing the “post-thalidomide” era of drug development. The second, 1996-2013, might be interpreted as representing the post-PDUFA (Prescription Drug User Fee Act) era of the FDA. The third, 2005-2013 (or the 2004-2013 time frame described by the FDA), might be used to assess recent trends. In the tradition of How to Lie with Statistics, each of these arbitrary analyses provides the basis for an empirical, though not statistically significant, narrative suggesting that drug development trends are either positive (green) or negative (red).

Using standard regression models to evaluate long-term trends is even more confusing. The bottom panel shows the application of various analytical formulations: linear regression (purple); 5-, 10-, 12-, 15-, and 20-year moving averages (blue); and 4th- and 5th-order polynomial curve fits (orange). The 10-year moving average is shown in bold, with +/- SD of the residuals shaded. It is apparent that analysis of the drug approval data alone does not conveniently support either a “declining” or an “upward” narrative.
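Huff’s point is easy to reproduce. The sketch below, using synthetic counts (not the actual FDA approval data), shows how the fitted slope of a linear regression depends entirely on the window chosen, and how a moving average damps a single unusually high year like 2012:

```python
# Sketch: how window choice changes a fitted "trend".
# The counts below are synthetic illustrations, NOT actual FDA approval data.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2014)
# Flat long-run mean (~26/year) with noise, plus one exaggerated spike
# standing in for an unusually high year such as 2012.
approvals = rng.poisson(lam=26, size=years.size).astype(float)
approvals[years == 2012] += 13

def slope(y0, y1):
    """Least-squares slope (approvals per year) over the window [y0, y1]."""
    mask = (years >= y0) & (years <= y1)
    return np.polyfit(years[mask], approvals[mask], deg=1)[0]

# Three arbitrary windows, echoing the ones discussed above.
for window in [(1970, 2013), (1996, 2013), (2005, 2013)]:
    print(window, round(slope(*window), 3))

# A 10-year moving average smooths out single-year spikes.
kernel = np.ones(10) / 10
smoothed = np.convolve(approvals, kernel, mode="valid")
```

Because the underlying series here is flat by construction, none of the fitted slopes is meaningful, yet each window yields a number that could anchor a “rising” or “falling” narrative.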

Can anything be learned from the FDA data?

These data are seemingly consistent with Jürgen Drews’ 1995 prediction (5) that the number of new pharmaceutical products then in development was not sufficient to replace the number coming off patent and sustain continued growth of the biopharmaceutical industry. While each of the regression models in the bottom panel appears to show growth in the number of NMEs approved through the 1970s and into the early 1990s, this trend seems to be absent after 1995.

Less defensible is the statement in the AP release that “FDA drug approvals are watched closely by analysts as both a barometer of industry innovation and the federal government's efficiency in reviewing new therapies.” On the surface, it seems obvious to make a causal connection between innovation in industry, the efficiency of the review process, and the number of NMEs that result. Indeed, Drews’ concern about the inadequacy of the product pipeline is often referred to as the “innovation gap.”

Yet, as Huff warned, “flaws in assumptions of causality are not always so easy to spot, especially when the relationship seems to make a lot of sense or when it pleases a popular prejudice.” Is the assumption of causality in this instance flawed? We think it might be.

It is recognized that not all technological innovations have an immediate, sustaining effect on product development. Many technological innovations are incompatible with the capabilities and conventions of established industries. Such innovations may initially be disruptive to product development processes and markets until businesses adapt to new technological requirements and commercial opportunities. Disruptive innovations may require substantial periods of time to mature before they contribute to the development of competitive products.

In this context, short-term changes in the number of NMEs reflect only incremental or sustaining innovations, not the type of radical and disruptive innovations embodied in genomics and other “omic” technologies that have emerged in recent decades. In fact, a 2001 report titled The Fruits of Genomics predicted that genomic innovations in the pharmaceutical industry would initially have a negative impact on the timelines and cost of drug development (6). Its model predicted that it would not be until after 2010 that these innovations would begin to produce increasing numbers of NMEs.

Is there evidence to support the prediction that this “newly established research model” is finally producing increasing numbers of NMEs? I would suggest that you first read “How to Lie with Statistics,” then come to your own conclusions.

Fred Ledley is Director of the Center for Integration of Science and Industry at Bentley University, and Professor of Natural & Applied Science and Management.

1 -

2 -

3 -

4 -

5 - Drews, Jürgen, and Stefan Ryser. "Innovation deficit in the pharmaceutical industry." Drug Information Journal 30.1 (1996): 97-108.

6 - Lehman Brothers. "The fruits of genomics." New York: Lehman Brothers, 2001.

Blog: Why does society support science? And how to meet the expectations?

May 5, 2014 by Fred D. Ledley

Public support for science is related less to the wonder of scientific discovery than to the expectation that scientific and technological advances will lead to new products, jobs, and economic growth. Recent evidence suggests that these outcomes are not certain. Fred Ledley argues that the public is often promised the benefits of scientific discoveries without adequate consideration of the business challenges inherent in translating science for public benefit.

In his Presidential Address to the American Association for the Advancement of Science (AAAS), William Press asked “why society is willing to support an endeavor as abstract and altruistic as basic science research and an enterprise as large and practical as the research and development (R&D) enterprise as a whole.” His answer was that public support for science is related less to the wonder of scientific discovery than to the fact that “Discovery leads to technology and invention, which leads to new products, jobs, and industries.”

This formulation of the value of science and technology has deep roots in the history of human civilization, which measures epochs of human cultural evolution by advances in metallurgy and energetics. It also has deep roots in the scientific enterprise itself, which has often been associated with advances in armament, agriculture, alchemy, and artisan trades. The contemporary formulation of the relationship between scientific discovery and the value it creates was formalized by two seminal works of the mid 20th century.

The first was a report titled “Science, the Endless Frontier,” prepared by Vannevar Bush for President Franklin Roosevelt in 1945. That report made explicit the relationship between basic science and economic goals, stating: “One of our hopes is that after the war there will be full employment. To reach that goal the full creative and productive energies of the American people must be released. To create more jobs we must make new and better and cheaper products. We want plenty of new, vigorous enterprises. But new products and processes are not born full-grown. They are founded on new principles and new conceptions which in turn result from basic scientific research. Basic scientific research is scientific capital.” Bush’s implicit argument was that, since the frontiers of knowledge were endless, so too were the benefits that would accrue to the American public by supporting the scientific enterprise.

The second was the work of economist Robert Solow, who examined the contributions of economic capital and labor capital to an “aggregate production function.” Solow recognized that economic and labor inputs alone could not account for the non-equilibrium condition of sustained economic growth. He postulated that the residuals in economic growth that could not be accounted for by economic or labor capital were due to technical change that increased the contribution, or productivity, of labor. Examining economic growth between 1909 and 1949, Solow concluded that economic growth had resulted from a doubling of worker productivity, with 87.5% of this increase attributable to “technical change” and the remaining 12.5% attributable to “increased use of capital.” In this formulation, “scientific capital,” which contributed to technical change, was formally recognized as a driver of economic growth.
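Solow’s decomposition can be stated compactly. In the standard Cobb-Douglas rendering used in textbooks (a simplification of Solow’s own notation), output Y depends on technology A, capital K, and labor L, and the growth rate of output splits into three terms:

```latex
Y = A\,K^{\alpha}L^{1-\alpha}
\quad\Longrightarrow\quad
\frac{\dot{Y}}{Y} \;=\; \frac{\dot{A}}{A} \;+\; \alpha\,\frac{\dot{K}}{K} \;+\; (1-\alpha)\,\frac{\dot{L}}{L}
```

The term \(\dot{A}/A\), the “Solow residual,” is the portion of output growth left over after accounting for measured growth in capital and labor; it is this residual that Solow attributed to technical change.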
Press invokes Solow’s work to argue that the long-term exponential growth of US GDP per capita since the late 19th century (Figure 1, from: WH Press, Science 2013;342:817-822) reflects the impact of technical change on the economy. Futurist Ray Kurzweil has argued that technology has advanced exponentially throughout the 20th century. He has shown, for example, that the speed of automated computation has increased exponentially since 1900, progressing through “five paradigms” of technologies, the most recent of which involves exponential advances in integrated circuits, often described as Moore’s Law. Kurzweil envisions a future of continuing, exponential change driven by advances in genetics/biotechnology, nanotechnology, and robotics/artificial intelligence, collectively described as “GNR.”

In a seminal 2012 paper from the National Bureau of Economic Research titled “Is US Economic Growth Over? Faltering Innovation Confronts the Six Headwinds,” economist Robert Gordon presents a less optimistic view. Gordon’s analysis of recent economic trends suggests that the technologies Kurzweil heralds have not produced the same economic growth as earlier innovations of the 19th and 20th centuries, such as steam power, railroads, electricity, petroleum, or chemistry. While it is tempting to discount these observations as a temporary aberration associated with the Great Recession, as Press does in his address to the AAAS, a careful analysis of GDP data since 1980 suggests a troubling trend. Figure 2 shows US GDP per capita, as well as the annual changes in US GDP per capita, from 1980-2012.
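The diagnostic behind such a figure is simple: a series growing exponentially at a constant rate has a constant year-over-year percentage change (its logarithm is linear in time), so a sustained decline in annual growth rates signals sub-exponential growth. A minimal sketch of this check, using synthetic series rather than the actual GDP data:

```python
# Sketch: distinguishing exponential growth from slowing growth via
# year-over-year growth rates. Series are synthetic, NOT actual GDP data.
import numpy as np

years = np.arange(1980, 2013)
t = years - years[0]

exponential = 100 * 1.02 ** t                            # constant 2% growth
slowing = 100 * (1 + 0.03 / (1 + 0.05 * t)).cumprod()    # growth rate decays

def annual_growth(series):
    """Year-over-year fractional growth rates of a time series."""
    return np.diff(series) / series[:-1]

g_exp = annual_growth(exponential)
g_slow = annual_growth(slowing)

# For the exponential series the growth rate is flat; for the slowing
# series a line fitted to the growth rates has a negative slope.
trend = np.polyfit(years[1:], g_slow, deg=1)[0]
```

Both series rise over the whole period, yet only the first is exponential; the distinction lives in the growth rates, not the levels, which is why the annual-change panel of the figure is the informative one.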
While GDP per capita has continued to grow over the past thirty years, episodic recessions notwithstanding, the rate of annual growth exhibits a long-term downward trend, suggesting that this growth has not been exponential.

Recent decades have witnessed many extraordinary achievements in biotechnology, nanotechnology, computers, communications, and artificial intelligence, some of which are shown in Figure 2. In fact, our research, and that of others, has demonstrated that many sectors of science and technology, ranging from computer and communications technologies to biotechnology and nanotechnology, have experienced exponential progress throughout this period. If advances in science and technology constitute the dominant drivers of economic growth, as posited by Solow, then something is seriously amiss.

Gordon ascribes the slowing of economic growth in recent decades to a number of demographic and economic “headwinds.” These include the ageing of the baby boom generation, the higher levels of education required to exploit new technologies, rising inequality, globalization, the constraints of energy and environmental resources, and patterns of borrowing and savings. In the face of these headwinds, Gordon writes that “…innovation does not have the same potential to create growth in the future as in the past…”

The classical path by which scientific and technological innovations create economic growth involves two distinct enterprises: a scientific enterprise focused on basic research, and a commercial enterprise responsible for translating these advances into products, services, jobs, and economic value. Bush noted in The Endless Frontier: “Basic research leads to new knowledge. It provides scientific capital.
It creates the fund from which the practical applications of knowledge must be drawn.” The differentiated roles of the scientific enterprise and the commercial enterprise were further codified by the Bayh-Dole Act of 1980, which normalized the transfer of this scientific capital from the scientific enterprise to commercial enterprises whose business models combine it with the labor and economic capital required to develop successful products and services. In this context, the demographic and economic headwinds Gordon identifies are not categorical impediments to value creation from scientific and technological innovation; they are simply elements of the business environment faced by commercial enterprises engaged in translational science.

The fact that recent economic growth has lagged, even as science and technology have continued to advance exponentially, suggests that the prevailing business models for translational science are not working in the current business environment. A similar conclusion can be drawn from the limited progress that has been made in improving health outcomes for many common diseases, healthcare productivity, alternative energy production, sustainable resource utilization, food security, water management, transportation, and education, despite exponential scientific and technological advances in these sectors.

Gordon’s observations cannot be regarded as a temporary aberration; nor should they be a cause for undue pessimism. Rather, they should be seen as a clarion call for new, innovative business models that are attuned not only to the extraordinary potential of emerging science and technology, but also to current and future business environments and the headwinds they will face.
Press concluded his Presidential Address thus: “Our message is that science is a single, unified, long-term enterprise in which basic science discoveries, and research accomplishments of applied science and engineering, are things to be admired in their own right that also, often unpredictably, lead to better jobs and better lives, new products and new industries.” It is critical to recognize that the unpredictability of translational science arises as much from the complexity of business and the business environment as from the complexity of science and its applications. Too often, the public is promised the benefits of scientific discoveries without adequate consideration of the business challenges inherent in translational science. Innovations that better integrate science and business in a long-term enterprise may improve the predictability and productivity of translational science and ensure that the public receives the return it expects from society’s support for basic science.

Fred Ledley is Director of the Center for the Integration of Science and Industry at Bentley University and also Professor of Natural & Applied Science and Management.

Sources:

Press, William H. "What's So Special About Science (And How Much Should We Spend on It?)." Science 342.6160 (2013): 817-822.

Solow, Robert M. "Technical change and the aggregate production function." The Review of Economics and Statistics 39.3 (1957): 312-320.

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. Penguin, 2005.

Gordon, Robert J. Is US Economic Growth Over? Faltering Innovation Confronts the Six Headwinds. No. w18315. National Bureau of Economic Research, 2012.

Research: What was the value of Human Genome Science?

Jan 22, 2014 by Fred D. Ledley and Laura McNamee

When Human Genome Sciences (HGS) was acquired by GlaxoSmithKline (GSK) in 2012, its “fair value” was $3.6B, less than the $3.9B in capital invested in the company. Does this truly reflect the value of a product (Benlysta™) with billion-dollar potential, HGS’ product pipeline, and more than 600 patents? A recent paper from the Center for Integration of Science and Industry reviews the history and valuation of HGS and argues that there is a need for greater alignment between the milestones of translational science and measures of corporate value.

Research: Can newly-public biotech succeed at translational science? by Laura McNamee and Fred Ledley

Jan 22, 2014 by Fred D. Ledley and Laura McNamee

Early-stage biotech companies play a critical role in the entrepreneurial ecosystem that is expected to develop commercial products from nascent scientific discoveries. Recent research from the Center for Integration of Science and Industry suggests that companies in the IPO “class of 2000” were ineffective in developing therapeutic products and asks whether the business models of newly-public biotech companies are up to the task.