Once upon a time, what we now call scientific research was undertaken by a) those with sufficient time and personal wealth, b) those who convinced private patrons that their work was interesting/useful/showy enough to be supported and c) those who could make money through patents or the sale of related goods or labour. Things haven’t, of course, changed all that much, except that, since the late 19th century, government and business have taken over from private interests as the major sponsors of scientific research.
Another fundamental shift that took place alongside this development was the rise and rise of the argument that ‘basic’ or ‘pure’ research should be supported by government because it would, in the long term, provide as-yet-unknown benefits to mankind or, at least, the nation. Then, as now, this argument was made by pointing to examples of research that led to serendipitous discoveries or new applications. Today’s favourite example seems to be the internet, replacing yesterday’s favourite, non-stick frying pans. In 1830 Charles Babbage pointed to some slightly less familiar examples of the tardy application of fundamental principles, such as Stevinus’s hydrostatic paradox or Joseph Black’s discovery of latent heat.
His point was that the applications could make money for their inventors, but the “abstract truths” uncovered by these men of “powerful genius” did not. Because the individuals capable of furnishing “abstract principles” were less common than those capable of applying them, it was essential that they be supported: “unless the government directly interfere, the contriver of a thaumatrope may derive profit from his ingenuity, whilst he who unravels the laws of light and vision, on which multitudes of phenomena depend, shall descend unrewarded to the tomb” [Reflections on the Decline of Science in England (1830), p. 19].
In the 1830s Babbage’s call for government funding for “pure” science, unconnected to teaching positions or to potential applications, was something of a voice in the wilderness. In an age of small government and voluntarism (sound familiar?) most simply did not see support of speculative science as the business of the state. Most suspected that, if money were made available, it would go not to the most deserving but to those in political favour. Babbage was, in this view, strangely naive: on the one hand he attacked the corruption of Royal Society patronage (his chief target), while on the other he claimed that men of science would not be sacrificing their independence by taking money from government.
However, the nature of government and the shape of society underwent such dramatic change over the next half century that the call for government funding of pure science became a commonplace, certainly among the Scientific Naturalists such as John Tyndall. But, despite the fact that government was now involved in the support of science and of people’s lives in a way that would have been inconceivable at the beginning of the century, “the notion of ‘abstract’ scientific inquiry as a career, supported by the State, was no more than a vision” before the 20th century. This quote is from Steven Shapin’s The Scientific Life [p. 43], which is fascinating for charting how this change took place. Many remained firmly opposed: George Airy, Astronomer Royal from 1835 to 1881, always maintained that any work for which the public paid should be demonstrably utilitarian.
Airy was, however, speaking for an earlier generation, and by the first decades of the 20th century the idea that funding should be available for what is variously called speculative, disinterested, fundamental, basic or pure research (on some of these terms see Sabine Clarke’s recent article in Isis) had become uncontroversial, although debates about the extent to which government, rather than business or other interests, should provide those funds have continued. Likewise, there is no real agreement over the process by which pure science research becomes usable (two interesting pieces on this are David Edgerton’s ‘The “linear model” did not exist’ and this article, ‘In defence of the linear model’). In addition, there has been plenty of discussion over whether science can or should be undertaken without consideration of its potential applications or ends.
In these days, when the ‘Impact’ of research must be itemised and economic cases have to be made, it is unlikely that anyone can get away without thinking of ends. However, there are moral as well as pragmatic reasons to give genuine consideration to as many potential uses of ‘pure’ research as possible. Some have claimed that freedom from considering future applications is necessary for good science, and others absolve themselves, or science more generally, of responsibility for negative outcomes. An interesting critique of this rhetoric is Milton Leitenberg, “The Classical Scientific Ethic and Strategic-Weapons Development” (Impact of Science on Society 21 (1971), 123-36, sadly not available online), which essentially blames the classic scientific ethic of disinterestedness for the development of weapons research. On which note I must quote Tom Lehrer, who, as so often, encapsulates the point with a neat rhyme: ‘“Once the rockets are up, who cares where they come down? / That’s not my department,” says Wernher von Braun’…