The now-infamous Naomi Oreskes eviscerated climate models in 1994. But she did not stop there. In her 1998 paper Evaluation (Not Validation) of Quantitative Models, she documented a wider pattern of computer models being either misrepresented or deliberately built to fit predetermined agendas. The paper did not focus on climate change studies, but it exposed the fraud behind The Limits to Growth (1972) and the political pressure on scientists from the EPA.
On The Limits to Growth (emphasis mine):
Why did the world modelers make what is in retrospect such an obvious mistake? One reason is revealed by the post hoc comments of Aurelio Peccei, one of the founders of the Club of Rome. The goal of the world model, Peccei explained in 1977, was to “put a message across,” to build a vehicle to move the hearts and minds of men (59,21). The answer was predetermined by the belief systems of the modelers. They believed that natural resources were being taxed beyond the earth’s capacity and their goal was to alert people to this state of affairs. The result was established before the model was ever built. In their sequel, Beyond the Limits, Meadows et al. (60) explicitly state that their goal is not to pose questions about economic systems, not to use their model in a question-driven framework, but to demonstrate the necessity of social change. “The ideas of limits, sustainability [and] sufficiency,” they write, “are guides to a new world.”
On the EPA, regulatory demands, and how environmentalists were corrupting the science:
The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability.
The following quote knocks down climate models:
Hodges and Dewar (29), in a report for the RAND Corporation on computer models used by the military to evaluate the efficacy of weapons systems in battlefield scenarios, make the distinction between two kinds of models: those that can be validated and those that cannot. … Oreskes et al. and Oreskes (32,33), in a discussion of computer models in the earth sciences, note that the criteria outlined above — measurability, accessibility, and temporal and spatial invariance — are precisely those features typically lacking in the natural systems that scientists are increasingly exploring with computer models.
The paper notes a well-known difference between how scientists understand their models and how their political customers present them to the public:
Most scientists are aware of the limitations of their models, yet this private understanding contrasts with the public use of affirmative language to describe model results. … The conspicuous absence of negative language in the scientific literature of validation should give us pause, for it raises the following question relevant to both scientific and regulatory perspectives: Is the computer model a vehicle to prove what we think we already know or is it an honest attempt to find answers that are not predetermined? Put this way, it becomes clear that the goal of scientists working in a regulatory context should be not validation but evaluation, and where necessary, modification and even rejection.