Abstract
Human reliability analysis (HRA) is a risk assessment tool that is used in nuclear power generation and other process control industries to predict the potential impact of human actions on sociotechnical
system operations. Specifically, understanding the potential for human error is critical for determining
the factors that, in the operation of these complex systems, affect the most variable element of
all: the human. HRA methods seek to calculate the probability of human errors occurring during
specific tasks. That probability then becomes an input to the overall risk analysis of a system or
scenario. HRA methods follow many different avenues to achieve their purposes, but all incorporate
the notion of performance shaping factors (PSFs) to some degree. A PSF is a circumstance that
modifies the probability of human error, making an error more or less likely. One challenging aspect of
HRA is the sheer diversity of perspectives and approaches to determining these probabilities. Among
the variations seen in HRA methods are different levels of action abstraction, conflicting PSF
definitions and values, and significant disagreement in the applications of HRA. The present research
aims to establish some first principles for HRA as a whole—specifically in terms of action abstraction
and PSF definitional validity—and to demonstrate a novel application of HRA in cognitive modeling.
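As a minimal sketch of how PSFs can enter the quantification, the following example scales a nominal human error probability by PSF multipliers. The multiplicative treatment and the example values are illustrative assumptions in the style of multiplier-based methods (e.g., SPAR-H), not a method or data from this paper.

```python
# Minimal sketch: adjust a nominal human error probability (HEP) by
# PSF multipliers. The multiplicative treatment and the example
# multiplier values below are illustrative assumptions, not values
# taken from this paper.

def adjusted_hep(nominal_hep: float, psf_multipliers: list[float]) -> float:
    """Scale a nominal HEP by each PSF multiplier, capping the result at 1.0."""
    hep = nominal_hep
    for m in psf_multipliers:
        hep *= m
    return min(hep, 1.0)

# Hypothetical example: a nominal HEP of 1e-3 under a degrading stress
# multiplier (x2) and a degrading interface multiplier (x10) becomes 2e-2.
print(adjusted_hep(1e-3, [2.0, 10.0]))  # 0.02
```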