In 2001, Naomi Oreskes was still on the side of light. The paper “Philosophical Issues in Model Assessment”, co-authored with Kenneth Belitz, argued that the major climate models could not be validated and that “major national and international agencies” used the term misleadingly.
From the Introduction:
The inherent uncertainties of models have been widely recognised, and it is now commonly acknowledged that the term ‘validation’ is an unfortunate one, because its root — valid — implies a legitimacy that we are not justified in asserting … But old habits die hard and the term persists. In formal documents of major national and international agencies that sponsor modelling efforts, and in the work of many modellers, ‘validation’ is still widely used in ways that assert or imply assurance that the model accurately reflects the underlying natural processes, and therefore provides a reliable basis for decision-making. This usage is misleading and should be changed. Models cannot be validated.
When scientific research is conducted to support specific policies or ideologies, the customers of this research are not interested in knowledge, but in confirmation of their beliefs and positions.
If predictions are unreliable, then why do scientists make them? When models are built in aid of public policy, scientists may feel that they have to make predictions to serve the agencies and constituencies that support them.
A little bit of history:
A good example is global climate change. A great deal of climate research is dedicated to the construction of complex General Circulation Models (GCMs) with the goal of predicting the effects of increased atmospheric carbon dioxide. This is in part the result of a policy decision. In the early 1990s, the first Bush administration in the United States defined the primary problem of climate change to be scientific uncertainty, and made research its central policy response (Brunner 1991; Brunner and Ascher 1992; Rayner 2000). Eliminating or at least greatly reducing uncertainty was considered a prerequisite for action, and this led to greatly increased funding for climate research, particularly climate modelling. Scientists did not object; the idea of knowledge as a basis for action is eminently reasonable. And what scientist would protest better funding?
The paper is not limited to climate models and is worth reading for its survey of relevant biases, its sane advice to modelers to resist “the demand for predictions that are likely to be misleading or simply wrong”, and its references.