Phases 1 and 2 of the project are now complete and the documentation from the project is available on these pages. What follows is an overview of the project, including the test case specifications used in the project.
The Fire Modelling Standards/Benchmark (FMSB) project marks the first step in the development of a set of standards/benchmarks that can be applied to fire field models. The project is led by the Fire Safety Engineering Group (FSEG) at the University of Greenwich and funded by the Home Office Fire Research and Development Group (FRDG). It is not the intent of the current phase of this project to define definitively the entire range of standards/benchmarks, but rather to suggest and demonstrate the principle behind the proposed standards and to propose the required next steps. It is expected that the suite of cases will evolve over time as suitable new experimental data become available or as new theoretical cases are developed.
The ultimate purpose of the proposed standards/benchmarks is to aid the fire safety approvals authority (e.g. fire brigade, local government authority, etc.) in assessing the appropriateness of using a particular model for a particular application. Currently there is no objective procedure that assists an approvals authority in making such a judgement; the authority must simply rely on the reputation of the organisation seeking approval and the reputation of the software being used. In discussing this issue it must be clear that while these efforts are aimed at assisting the approvals authorities, there are in fact three groups involved: the approvals authority, the general user population and the model developers. Ideally, the proposed standards/benchmarks should be of benefit to all three groups. In proposing the standards/benchmarks, it is not intended that meeting these requirements should be considered a SUFFICIENT condition in the acceptance process, but rather a NECESSARY condition. Finally, the benchmarks are aimed at questions associated with the software, not the user of the software.
Fire field models are made up of essentially two components: the CFD code and the fire model. The CFD code is the core of the fire field model and provides the fire model with the basic transport mechanisms for energy, momentum and mass (including convection, radiation and conduction). It is therefore at the heart of the fire field model. The fire model contains the detailed specification of the fire description; this involves the boundary conditions and the representation of combustion. In order to obtain good predictions from a fire field model it is essential to have a good CFD code, a good fire model and, ultimately, a user skilled in the art of fire modelling.
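For context, the transport processes provided by the CFD engine are conventionally expressed as a single generic conservation equation; the standard form below is given purely for illustration and is not part of the project specification:

$$\frac{\partial(\rho\phi)}{\partial t} + \nabla \cdot (\rho \mathbf{u} \phi) = \nabla \cdot (\Gamma_{\phi} \nabla \phi) + S_{\phi}$$

where $\phi$ is the transported variable (e.g. enthalpy, a velocity component or a species mass fraction), $\rho$ is the density, $\mathbf{u}$ is the velocity field, $\Gamma_{\phi}$ is the effective exchange coefficient and $S_{\phi}$ is the source term. It is through source terms such as $S_{\phi}$ that the fire model feeds quantities such as the heat release of the fire into the CFD engine.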
A number of fire field models are currently available. They fall into one of two camps: specific fire field modelling software that is intended only for modelling fire, and general purpose CFD codes that can be used for fire modelling applications. The general purpose CFD codes PHOENICS, CFX and Fluent are examples of the latter. The fire field models JASMINE (developed by FRS, which makes use of an early version of the PHOENICS code as its CFD engine), KAMELEON (developed by SINTEF/NTH), SMARTFIRE (developed by UoG) and SOFIE (developed by Cranfield/FRS) are examples of the former.
It is essential to set standards/benchmarks to assess both the CFD engine and the fire model component for each type of code. However, within the fire modelling community, testing of fire field models has usually completely ignored the underlying CFD engine and focussed on the fire model.
Thus, when numerical fire predictions fail to provide good agreement with the benchmark standard, it is not certain whether this is due to some underlying weakness in the basic CFD engine, the fire model or the manner in which the problem was set up (i.e. questions of user expertise). Furthermore, the case that is being used as the benchmark/standard is usually overly complex or cannot be specified to the precise requirements of the modellers. All of this is often to the benefit of the code developer/user as it allows a multitude of reasons (some may say excuses) to explain questionable agreement.
Furthermore, such fire model testing as is undertaken is usually carried out in a non-systematic manner, performed by a single individual or group and generally based around a single model. Thus it is not generally possible for other interested parties to reproduce the presented results exactly (i.e. verify the results) or to apply the same protocol to other models. This makes verification of the results very difficult, if not impossible, and the comparison of one model with another virtually impossible.
When discussing standards/benchmarks, there are essentially three groups of interested parties: the approvals authority, the user groups and the software developers. While maintaining the highest level of safety standards is of general interest to all parties, each interest group has a specific reason for requiring a standard/benchmark. In order to maintain safety standards, the approvals authority must be satisfied that appropriate tools have been employed, the user wants to be assured of investing in technology that is suited to the intended task, while the developer would like to have a definable minimum target to achieve.
To satisfy the differing requirements of the approvals authority, user and software developer populations, any suite of benchmarks/standards must be both diagnostic and discriminating. Hence, the proposed suite of benchmarks/standards would ideally exercise each of the components of the fire field model, i.e. the CFD engine and the fire model. This means that standards based simply around instrumented room fire tests are insufficient; the suite would, for example, also require benchmarks/standards for simple recirculating flows, buoyant flows, turbulent flows, radiative flows, etc. Furthermore, in addition to the quality of the numerical results, details of the computer and compiler used to perform the simulations and the associated CPU time expended in performing the calculations should be provided. While not of particular interest to the approvals authority, this will be of interest to the user community.
Ideally, the proposed benchmarks/standards will evolve into a measure of quality, indicating that the fire model has reached a minimum standard of performance. This does not necessarily mean that the software may be used for any fire application; however, it would eliminate from consideration those software products that have not demonstrated that they can attain the standard.
Several developers of well known fire field models currently used in the UK were approached to participate in this project, namely the developers of JASMINE, SOFIE, CFX, PHOENICS and SMARTFIRE. Three code developers agreed to participate in this first phase: the general purpose CFD codes CFX and PHOENICS, and the specific fire field model SMARTFIRE.
Representatives from the organisations responsible for the identified software products (SPs) constitute the Benchmark Task Group (BTG). In addition, the BTG includes one independent user of fire field models drawn from the user community (Arup Fire) and a representative from the FRDG. The role of the BTG is to review the proposed benchmarks and specified solution procedures and to review the final results. The BTG is chaired by Prof Ed Galea of FSEG.
The benchmarks are divided into two types: basic CFD and fire. Two phases of simulation are to be performed by each SP subjected to the benchmarks; these are known as phase 1 and phase 2 simulations.
The nature of the phase 1 simulations has been rigidly defined by FSEG under review by the BTG; this includes the mesh specification, the physics to be activated, the algorithms to be employed and the results to be generated. Where possible, the phase 1 simulations have been specified such that all of the SPs participating in the trial are able to achieve the specification. It is acknowledged that this process will not necessarily produce optimal results for all of the SPs.
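To make this concrete, the sketch below illustrates, in Python, the kind of information a rigidly defined phase 1 case fixes in advance. The field names and values are hypothetical illustrations only; they are not taken from the actual FSEG specification or pro-forma.

    # Illustrative sketch only: the fields and values below are hypothetical
    # and do not reproduce the actual FSEG phase 1 specification.
    phase1_case = {
        "case_id": "2000/1/1",
        "title": "Two-dimensional turbulent flow over a backward facing step",
        "mesh": {"cells_x": 60, "cells_y": 20},      # mesh specification, fixed for all SPs
        "physics": ["k-epsilon turbulence model"],   # physics to be activated
        "algorithms": {
            "pressure_velocity_coupling": "SIMPLE",  # algorithms to be employed
            "convection_scheme": "upwind",
        },
        "results": [                                 # results to be generated
            "velocity profiles at specified stations",
            "predicted reattachment length",
        ],
    }

    # Each participant would run the case exactly as specified and report
    # the requested results on the pro-forma.
    print(phase1_case["title"])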
The phase 1 simulations will be completed before proceeding to the phase 2 simulations. The phase 2 simulations will be free format in nature, allowing the participants to repeat the simulations using whatever specification they desire; this will allow the participants to demonstrate the full capabilities of their SP. However, phase 2 simulations may only utilise features that are available within the software product itself, i.e. additional code or external routines are not permitted.
Each phase 1 simulation will be performed at least once: FSEG will run each phase 1 simulation with each SP, and the participants are requested to run at least two of the 10 phase 1 simulations using their SP. Participants were free to choose which two simulations to run; however, these had to include at least one from the CFD category and one from the fire category. Participants were of course free (and indeed encouraged) to run all 10 of the phase 1 simulations. It was however imperative that the participants did not inform FSEG which of the phase 1 simulations they intended to run. It should be remembered that the purpose of repeating the simulations is to ensure that FSEG have not fabricated results.
On completing the phase 1 simulations, participants will be invited to undertake their phase 2 simulations. All participants must complete a pro-forma similar to that supplied for the phase 1 simulations. This is necessary as FSEG will repeat the phase 2 simulations in order to ensure that the results have not been fabricated. A blank pro-forma in MS Word 97 format is available here.
As a first attempt at defining the benchmarks, 10 cases were considered: five CFD cases and five fire cases. All of the phase 1 simulations were defined with relatively coarse meshes in order to keep computation times to reasonable levels. Participants are of course free to refine the meshes when undertaking the phase 2 simulations.
The cases are defined as follows:
CFD Cases:
2000/1/1 Two-dimensional turbulent flow over a backward facing step.
2000/1/2 Turbulent flow along a long duct.
2000/1/3 Symmetry boundary condition.
2000/1/4 Turbulent buoyancy flow in a cavity.
2000/1/5 Radiation in a three-dimensional cavity.
Fire Cases:
2000/2/1 Steckler Room (heat source).
2000/2/2 Steckler Room (combustion model).
2000/2/3 Fire in a completely open compartment with lid (heat source).
2000/2/4 CIB W14 fire (combustion model).
2000/2/5 Large fire (combustion model).
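The labels in parentheses indicate how the fire is represented in each case. As a general illustration of common practice (not part of the case specifications): in the heat source cases the fire is prescribed as a fixed volumetric heat release, contributing a source term of the form $S_h = \dot{Q} / V_{fire}$ to the energy equation, where $\dot{Q}$ is the heat release rate and $V_{fire}$ the volume over which it is distributed; in the combustion model cases the local heat release is instead computed from the modelled reaction of fuel and oxidant, for example via an eddy break-up type model.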
These test cases have now been examined using the three SPs, completing phase 1 of the project. The results from the study are available in the online report and a summary of the project findings can be found here. The results from the phase 2 study are available here.
If you have any enquiries concerning this project, please contact the FSEG team members:
For operational issues:
Mr Angus Grandison, Email: a.j.grandison@gre.ac.uk
For all other issues:
Prof Ed Galea, Email: e.r.galea@gre.ac.uk
Dr Mayur Patel, Email: m.k.patel@gre.ac.uk