Computer models and the limitations in safety design
Published: 01 January, 2009
Alan N Beard, Civil Engineering Section, School of the Built Environment, Heriot-Watt University (Edinburgh), outlines some of the worrying factors surrounding the use of computer models for fire safety design.
A colleague commented to me at a recent conference on fire safety modelling that “any fool can press a button”. The point was that, given the packages available today, it is generally not difficult to run a computer-based model and obtain results. However, the implication was that it is very difficult to employ a model in such a way that it makes a genuinely valuable contribution to real-world decision-making, rather than leading to inaccuracy and inappropriate interpretation.
Different users may produce quite different results, even when applying the same probabilistic model to the same case. In a European study of probabilistic modelling in the oil and gas industry, it was found that risk estimates produced by different users differed by “several orders of magnitude”. Since an order of magnitude denotes a factor of 10, this implies factors of around one hundred or one thousand.
A similar point may be made about deterministic models: different users may produce very different results when applying the same model to the same case. (See the April 2008 Industrial Fire Journal, where a “Round Robin” exercise was described.) In a different exercise, in which different users applied different deterministic models to the same case, very different results were again found. Further, a published comparison shows significant differences found by the same user in applying two different CFD-based models to the same case. This illustrates the kinds of problems that exist in using models as part of fire safety decision-making; inter alia, the knowledge and experience of the user become crucial.
Error sources in fire modelling
Although the following relates to deterministic modelling, the general categories of error apply to probabilistic modelling as well. For example, suppose we wish to estimate the temperature at a given position, or an average temperature over a given volume, at a given time, of the hot gases in a smoke layer resulting from a fire. Our estimate will be an approximation to the “real” temperature. Sources of error are considered below. For further discussion see the references at the end of this article and the further sources cited therein.
(a) Lack of reality of the theoretical and numerical assumptions
The theoretical and numerical assumptions in a model are only an approximation to the real world. For example, a control volume model may assume one of a number of different models for entrainment into the fire plume. A CFD-based model may assume one of a number of turbulence models. As well as variability in the qualitative assumptions made, there will usually be scope, because of uncertainty in a particular application, to use different parameter values (for example, for turbulence or convective heat transfer) while still claiming that the numbers used are “reasonable”.
(b) Lack of fidelity of the numerical solution procedures
Systems of equations need to be solved numerically rather than analytically, except in the case of a simple model with relatively simple equations. (“Analytical” refers to a theoretical, general solution being found to a problem, without using a computer, leading to exact results.) Different numerical solution techniques may produce different results. An example is provided by the grids used in CFD-based models to form the cells. Results usually depend upon the resolution of the grid, and the differences between the results from a coarse grid and those from a fine grid may be significant. Further, results from a CFD-based model will in general depend on the boundary conditions assumed as well as the time step, in addition to other factors. (It might be mentioned here, also, that deterministic chaos has been found in the Newton-Raphson technique, a method which is commonly used.)
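As a simple illustration of grid dependence, consider the following sketch on a model problem (not output from any fire model): the one-dimensional equation u'' = -π²sin(πx) with u(0) = u(1) = 0 has the exact solution u = sin(πx), so the true value at the mid-point is 1. A finite-difference solution on a coarse grid overshoots this by about 5%, and refining the grid removes most of the error:

```python
import math

def solve_poisson(n):
    """Solve u'' = -pi^2 sin(pi x), u(0)=u(1)=0, on n equal intervals
    by central finite differences; return the solution at x = 0.5 (n even)."""
    h = 1.0 / n
    m = n - 1                     # number of interior nodes
    # Tridiagonal system: (-u[i-1] + 2 u[i] - u[i+1]) / h^2 = f(x_i)
    b = [2.0 / h**2] * m          # main diagonal
    a = -1.0 / h**2               # constant off-diagonal
    d = [math.pi**2 * math.sin(math.pi * (i + 1) * h) for i in range(m)]
    # Thomas algorithm: forward elimination ...
    for i in range(1, m):
        w = a / b[i - 1]
        b[i] -= w * a
        d[i] -= w * d[i - 1]
    # ... and back substitution
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - a * u[i + 1]) / b[i]
    return u[n // 2 - 1]          # interior node at x = 0.5

coarse = solve_poisson(4)    # about 1.053: roughly 5% error on a coarse grid
fine = solve_poisson(64)     # about 1.0002: refinement removes most of the error
```

The point is not the particular equation but the behaviour: the same model, the same solution technique, and only the grid changed, yet the coarse-grid answer differs noticeably from the converged one.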
(c) Direct mistakes in software
It is possible that the software will not be an accurate representation of the model and its numerical solution procedures. One estimate suggests there may be around eight errors per thousand lines of computer source code and, even for safety-critical applications, around four errors per thousand lines of code.
Errors may thus result from mistakes in the software itself. There is also the possibility that the physical system being modelled may enter a condition for which the software is not suitable; this relates back to the lack of realism of the theoretical and numerical assumptions made in the model.
Examples of software error are very hard to find because of the intrinsic nature of the error and because of a lack of willingness to be open about such matters by those who have specific commercial or other interests in a case. Some examples, though, have come into the public domain. One case relates to a code which was intended for a commercial nuclear reactor in Canada.
The nuclear regulatory body demanded independent examination, and researchers from McMaster University found serious errors. It is very important that the source codes for computer programs used as part of fire safety decision-making be open to examination by the public in general, and the scientific community in particular, without large sums of money being demanded; commercial considerations should not be allowed to stand in the way of this. As a general rule, it must be assumed that there will be errors in any complex piece of software. Procedures need to be produced to try to reduce the likelihood of software errors existing in models which may be used as part of fire safety decision-making, and examination of software error should be part of the independent assessment of a model.
(d) Faults in computer hardware
It is usually assumed that hardware is very reliable and the possibility of a computer making a mistake because of its hardware has been generally ignored, at least by those who are not in the field of computer science.
It is the case, though, that a fault may exist in hardware as a result of: (d1) Mistakes in the design of micro-processors; (d2) Faults in the manufacture of micro-processors; (d3) A combination of (d1) and (d2). A specific case is the fault in the Pentium processor which came to light during 1994. In this case the manufacturer “made no public announcement when it first detected the error”. It was effectively left up to users to find the fault, by chance, and attempt to contact other users.
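The Pentium flaw could be exposed with a single division. The following sketch uses one of the operand pairs widely reported at the time as triggering the faulty divide unit; on sound hardware the residual below is essentially zero, whereas the affected processor was reported to return a quotient wrong in about the fifth significant digit, leaving a residual of roughly 256:

```python
# One of the operand pairs reported to trigger the 1994 Pentium FDIV flaw.
x, y = 4195835.0, 3145727.0

# Mathematically, x - (x / y) * y is exactly zero. On an affected chip the
# division hardware returned an inaccurate quotient, so the residual was
# far from zero; on correct hardware only tiny rounding error remains.
residual = x - (x / y) * y
```

The wider lesson is that a hardware fault of this kind produces silently wrong numbers, not a crash, so it can pass unnoticed through an entire modelling exercise.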
(e) Mistakes in application
A model user may make an error while entering input into a model, or in the analysis of output, due to: (e1) Misunderstanding of the model or its numerical solution techniques; (e2) Misunderstanding of the design of the software; (e3) A slip in inserting input or reading output.
Specific information is hard to find. One study has suggested that, in very general terms, an error rate of the order of 1 in 1,000 applies to relatively simple operations such as misreading a number or pressing a wrong computer key.
Further, there may be “effective errors” in using a model because of poor documentation. For example, an ability may be implied for a model which it does not possess.
This is a crucial point; model documentation must state clearly and explicitly the conditions for which the software is suitable or unsuitable. In one particular case, for example, a CFD model was, in reality, suitable only for smooth walls. However, the documentation did not mention this, implying that the model could also be applied in the case of rough walls.
Comparison with experimental results
A theoretical prediction for, say, a gas temperature cannot be directly compared with a temperature in a real fire; the same applies to other variables. This means it is necessary to compare with experimental results. This is more difficult than might be thought, although there is a humorous saying: “nobody believes a theory except the theoretician; everybody believes an experiment, except the experimenter”.
It is necessary to assume a questioning attitude to experimental results as well as to theoretical predictions. False inferences drawn from the results of an experiment may be associated with:
• Lack of control of the conditions of an experiment. For example, ambient humidities may vary from day to day or month to month, leading to different temperatures for ostensibly identical experiments. Figure 1 shows temperatures measured in two tests which were intended to be identical. The particular case is not relevant here; however, the results of Figure 1 were not a product of ‘bad science’: they were the result of work in a well-respected laboratory conducted by well-known and respected scientists. A second example: in full-scale bedroom tests carried out by Factory Mutual Research Corporation in 1973/74, test 1 produced full room involvement (FRI) at 17.5 minutes while test 2 produced FRI at 7 minutes after ignition. Variability of this kind does exist in well-conducted experimental work. It shows that there is a need for experimental tests to be repeated in conditions as near identical as possible (ie tests should be replicated) and for distributions of experimental results to be produced for each given case. This raises problems: replication of experimental tests is expensive, and there is a consequent lack of willingness to carry it out. It is essential that it be done, though, and it strongly suggests the need for collaboration at an international level aimed at producing accepted data sets and distributions of results from replicated tests. A result from a single experimental test may well not lie at the mean of a distribution of replicated test results for the same case. Further, there is a need for large-scale experimental tests in addition to smaller-scale tests.
• Design of the experiment. For example, different sets of thermo-couple positions will lead to different sets of raw data.
• Direct error in measurement. For example, error associated with a thermo-couple reading; amongst other things, difficulty in controlling tip emissivity may affect measured gas temperatures.
• Raw data processing algorithms. Such algorithms are used to obtain the final “results” from an experiment from raw data, eg assumptions associated with finding an average temperature from a set of raw data.
• In comparing theoretical predictions with experimental results there are three generic types of comparison: a priori, blind and open; see [2] and [5]. There is a need for far more a priori comparisons.
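The two FMRC test results quoted above make the point about single tests numerically. A short calculation, using just the two FRI times given in the text, shows that each individual test lies more than 40% away from the mean of the pair, so no single test can be assumed to sit near the centre of the underlying distribution:

```python
# Full-room-involvement times (minutes) from the two nominally identical
# FMRC bedroom tests cited in the text.
fri_times = [17.5, 7.0]

mean_fri = sum(fri_times) / len(fri_times)   # 12.25 minutes

# Relative deviation of each single test from the mean of the pair:
# both tests lie roughly 43% away from the mean.
deviations = [abs(t - mean_fri) / mean_fri for t in fri_times]
```

With only two replicates the mean itself is of course poorly determined; the calculation simply illustrates why distributions of replicated results, rather than single test values, are needed for model comparison.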
At a presentation to the European Parliament Transport Committee in December 2008, the author raised these kinds of issues and made recommendations which attempt to deal with some of the problems. These are to be considered by the Parliament. The presentation was centred on a report by the author, Assessment of the Safety of Tunnels, which is available on the web-site of the European Parliament, under the rubric of “Science and Technology Options Assessment” (STOA). Although the report is nominally about tunnel safety, many of the issues and recommendations were of a generic nature, covering all cases; as indicated in this article. It is hoped that the European Parliament will take action to address some of the problems involved in conducting risk assessments.
References
[1] Hawker, C.R., ‘Offshore Safety Cases - BG E&P’s Experience’, proceedings of the conference Safety Cases - Can We Prove the Next Ten Years will be Safer than the Last Ten?, London, 1995. Organised by IBC Technical Services Ltd and DNV (UK).
[2] Beard, A.N., ‘Reliability of Computer Models in Fire Safety Design’, Industrial Fire Journal, 71, pp 39-40, April 2008.
[3] Hostikka, S. & Keski-Rahkonen, O., Results of CIB W14 Round Robin for Code Assessment, Technical Research Centre of Finland, Espoo, Finland, 1998.
[4] Tuovinen, H., Holmstedt, G. & Bengtson, S., ‘Sensitivity Calculations of Tunnel Fires using CFD’, Fire Technology, 32 (2), pp 99-119, 1996.
[5] Beard, A.N., ‘Problems with Using Models for Fire Safety’, chapter 14 of The Handbook of Tunnel Fire Safety, eds A. Beard & R. Carvel, Thomas Telford, 2005.
[6] Beard, A.N., ‘Limitations of Computer Models’, Fire Safety Journal, 18, pp 375-391, 1992.