When you have built an initial STEM model and produced some business results, it is critical to explore how reliable and sensitive certain input assumptions are, and to measure their relative impact on the results. A variation in one input may have little effect on the results, whereas a small change to another can have a significant impact. Understanding these relative sensitivities helps you to focus effort on making sure that the crucial inputs are well researched; such inputs may also be ideal candidates for further scenario analysis.
As described in 9.3 Working with scenarios, STEM provides a sophisticated mechanism for defining scenarios as sets of parameters which can be run alongside, and compared with, other scenarios of the same model. Scenario management allows you to create a number of different, potentially complex variants in which several parameters take specific values and are all changed together.
In contrast, sensitivity analysis requires many parameters to be varied, or ‘perturbed’, independently. Suppose, for example, that you want to investigate five parameters and see how the results change if each of them is increased or decreased by 1%. To achieve this with scenarios, you would have to create a dimension for each parameter, with variants for the base case, +1% and –1%, and then select the appropriate sets of results from the many combinations which those dimensions define.
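A quick calculation shows the scale of the overhead: five dimensions of three variants each define 3^5 = 243 possible scenario combinations, of which only the eleven one-at-a-time cases (the base case plus two perturbations per parameter) are actually of interest. The Python sketch below is purely illustrative; none of the names in it come from STEM:

```python
# Illustrative arithmetic only; these names are not STEM terms.
n_params = 5
variants = 3                                   # base case, +1% and -1% per dimension

scenario_combinations = variants ** n_params   # every combination: 3**5 = 243
sensitivity_runs = 1 + 2 * n_params            # base case + (up, down) per parameter = 11

print(scenario_combinations, sensitivity_runs)  # prints: 243 11
```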
From version 7.1 onwards, STEM has featured a sensitivity element that is specifically designed to carry out independent sensitivity analyses of various parameters with minimal modelling overhead. You simply identify a number of parameters, which STEM then varies up and down by a specified number of steps, independently of each other and in turn.
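Conceptually, this is the familiar one-at-a-time perturbation pattern. The sketch below illustrates that pattern in Python, assuming a hypothetical run_model() function as a stand-in for a STEM model; it is not STEM’s interface, and it shows a single ±1% step where the sensitivity element allows a configurable number of steps:

```python
# A minimal, conceptual sketch of one-at-a-time perturbation. run_model()
# and its inputs are hypothetical stand-ins for a STEM model; the
# sensitivity element performs the equivalent work for you.

def run_model(params):
    # Made-up profit formula, for illustration only
    return (params["tariff"] - params["unit_cost"]) * params["subscribers"]

base = {"tariff": 20.0, "unit_cost": 12.0, "subscribers": 1000.0}
step = 0.01                      # vary each parameter by +/-1%
baseline = run_model(base)

# Perturb each parameter up and down in turn, holding all others at base values
for name in base:
    for direction in (+1, -1):
        perturbed = dict(base)
        perturbed[name] *= 1 + direction * step
        delta = run_model(perturbed) - baseline
        print(f"{name} {direction * step:+.0%}: result changes by {delta:+.2f}")
```

Because each run differs from the base case in exactly one input, any change in the result can be attributed unambiguously to that input, which is what makes a comparison of relative sensitivities meaningful.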
The following sections explain how to add sensitivities to a model and how to examine their impact using the Results program. For the sake of clarity, an extremely simple model comprising just one service and one resource is used to illustrate these principles.