Human Error Modelling (HUM) Project

HUM: Foundation publications

  1. A. Blandford, R. Butterworth and P. Curzon (2004) "Models of interactive systems: a case study on programmable user modelling". International Journal of Human-Computer Studies, 60(2), pp. 149-200. Available from ScienceDirect.
    • Lisp simulations of the models described are available.
    • Abstract: Models of interactive systems can be used to answer focused questions about those systems. Making the appropriate choice of modelling technique depends on what questions are being asked. We present two styles of interactive system model and associated verification methods. We show how they contrast in terms of tractability, inspectability of assumptions, level of abstraction and reusability of model fragments, and discuss these trade-offs. We discuss how they can be used as part of an integrated formal approach to the analysis of interactive systems, where the different formal techniques focus on specific problems raised by empirical investigations. Explanations resulting from the formal analyses can be validated with respect to the empirical data.
      The first modelling style, which we term ‘operational’, is derived directly from principles of rationality that constrain which user behaviours are modelled. Modelling involves laying out user knowledge of the system and task, and their goals, then applying the principles to reason about the space of rational behaviours. This style supports reasoning about user knowledge and the consequences of particular knowledge in terms of likely behaviours. It is well suited to reasoning about interactions where user knowledge is a key to successful interaction. Such models can readily be implemented as computer programs; one such implementation is presented here.
      Models of the second style, ‘abstract’, are derived from the operational models and thus retain important aspects of rationality. As a result of the simplification, mathematical proof about selected properties of the interactive system, such as safety properties, can be tractably applied to these models. This style is well suited to cases where the user adopts particular strategies that can be represented succinctly within the model.
      We demonstrate the application of the two styles for understanding a reported phenomenon, using a case study on electronic diaries.
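    • The 'operational' style can be made concrete with a minimal sketch. The Haskell fragment below is an illustration of the general idea only (not the authors' Lisp implementation; the diary task and all identifiers are hypothetical): user knowledge, plus a rationality principle, generates the space of behaviours.

        -- Hypothetical task: save an entry in an electronic diary.
        data Action = TypeEntry | PressSave | Quit deriving (Show, Eq)

        -- User knowledge: which actions the user believes are useful,
        -- given what has been done so far.
        believesUseful :: [Action] -> Action -> Bool
        believesUseful done a = case a of
          TypeEntry -> TypeEntry `notElem` done   -- type the entry once
          PressSave -> TypeEntry `elem` done      -- save only after typing
          Quit      -> PressSave `elem` done      -- quit only once saved

        -- Rationality principle: take only actions believed useful;
        -- the space of behaviours is everything the principle permits.
        behaviours :: [Action] -> [[Action]]
        behaviours done
          | Quit `elem` done = [done]
          | otherwise = concat
              [ behaviours (done ++ [a])
              | a <- [TypeEntry, PressSave, Quit]
              , believesUseful done a ]

        main :: IO ()
        main = mapM_ print (behaviours [])   -- enumerate the rational behaviours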

  2. P. Curzon and A. Blandford (2004) "Formally Justifying User-centred Design Rules: a Case Study on Post-completion Errors", to appear in Proceedings of Integrated Formal Methods, Lecture Notes in Computer Science, Springer.
    • Abstract: We show how a formal cognitive architecture can be used to verify the correctness of design rules intended to prevent user error. We consider the example of a design rule to prevent post-completion errors. Both a machine-centric and a user-centric version of the design rule are formalised, and it is proved using the HOL proof system that the design rule prevents the cognitive architecture from making post-completion errors.
    • This paper is based on the report below but contains more of the technical detail of the formalism.
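    • The rule and the error it targets can also be illustrated executably. The Haskell sketch below is a simplification, not the paper's HOL formalisation (the cash-machine steps are hypothetical): a device that sequences the completion step before the goal-satisfying step leaves a goal-driven user no opportunity to make the error.

        -- Hypothetical cash-machine steps; TakeCash satisfies the main goal,
        -- TakeCard is the completion step a user may omit.
        data Step = InsertCard | EnterPin | TakeCash | TakeCard deriving (Show, Eq)

        -- A goal-driven user terminates as soon as the main goal is satisfied.
        userTrace :: [Step] -> [Step]
        userTrace steps = go steps []
          where
            go [] done = done
            go (s:rest) done
              | TakeCash `elem` done = done              -- goal met: user walks away
              | otherwise            = go rest (done ++ [s])

        -- The post-completion error is absent iff the completion step still happens.
        noPCE :: [Step] -> Bool
        noPCE order = TakeCard `elem` userTrace order

        main :: IO ()
        main = do
          print (noPCE [InsertCard, EnterPin, TakeCash, TakeCard])  -- False: error occurs
          print (noPCE [InsertCard, EnterPin, TakeCard, TakeCash])  -- True: rule-compliant design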


  3. P. Curzon and A. Blandford (2003) "A formal justification of a design rule for avoiding post-completion errors". Middlesex University Interaction Design Centre Technical Report IDC-TR-2003-005.
    • Abstract: We show how a formal cognitive architecture can be used to verify the correctness of design rules intended to prevent user error. We consider the example of a design rule to prevent post-completion errors. Both a machine-centric and a user-centric version of the design rule are formalised, and it is proved using the HOL proof system that the design rule prevents the cognitive architecture from making post-completion errors.

  4. P. Curzon and A. Blandford (2002) "From a Formal User Model to Design Rules", Interactive Systems: Design, Specification and Verification, 9th Int. Workshop, P. Forbrig, B. Urban, J. Vanderdonckt and Q. Limbourg (eds), Lecture Notes in Computer Science 2545, pp. 1-15, June. pdf
    • Abstract: Design rules sometimes appear to contradict one another. We examine how a formal description of user behaviour can help explain the contexts in which such rules are, or are not, applicable. We describe how they can be justified from a formally specified generic user model. This model was developed by formalising cognitively plausible behaviour, based on results from cognitive psychology. We examine how various classes of erroneous actions emerge from the underlying model. Our lightweight, semi-formal reasoning from the user model makes predictions that could be used as the basis for further usability studies. Although the user model is very simple, a range of error patterns and design principles emerge.

  5. A. Blandford and G. Rugg (2002) "A case study on integrating contextual information with usability evaluation", International Journal of Human-Computer Studies, 57(1), pp. 75-99. Available from ScienceDirect.
    • Abstract: The work reported here integrates an analytical evaluation technique, Programmable User Modelling, with established knowledge elicitation techniques; the choice of techniques is guided by a selection framework, ACRE. The study was conducted in conjunction with an ongoing industrial design project. Techniques were selected to obtain domain knowledge in a systematic way; the rationale behind each choice is discussed. The use of ‘negative scenarios’ as a means of assessing the severity of usability findings is introduced.
  6. P. Curzon and A. Blandford (2001) "Detecting Multiple Classes of User Errors", in Engineering for Human-Computer Interaction, 8th IFIP International Conference, EHCI 2001, Toronto, Canada, Murray Reed Little and Laurence Nigay (eds), pp. 57-71, LNCS 2254, Springer. pdf
    • Abstract: Systematic user errors commonly occur in the use of interactive systems. We describe a formal reusable user model implemented in higher-order logic that can be used for machine-assisted reasoning about systematic user errors. We consider how this approach allows errors of a range of specific kinds to be detected, and so avoided, by proving a single theorem about an interactive system. We illustrate the approach using a simple case study.
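    • The idea that a single theorem can catch several error classes can be shown in miniature. The Haskell sketch below is an illustrative stand-in for the paper's higher-order-logic development (the vending-machine actions and traces are hypothetical): one quantified property over all traces of the user model flags any erroneous trace, whatever class it falls into.

        -- Hypothetical vending-machine actions; TakeChange is a subsidiary
        -- obligation the user may forget.
        data Act = InsertCoin | MakeChoice | TakeSnack | TakeChange deriving (Show, Eq)

        -- Stand-in for the user model: behaviours it can generate against a device.
        userTraces :: [[Act]]
        userTraces =
          [ [InsertCoin, MakeChoice, TakeSnack, TakeChange]
          , [InsertCoin, MakeChoice, TakeSnack]             -- omits the completion step
          ]

        -- The single correctness property: the goal is achieved AND no
        -- obligation is left undischarged; one check subsumes several
        -- error classes, and any failing trace is a counterexample.
        correct :: [Act] -> Bool
        correct t = TakeSnack `elem` t && TakeChange `elem` t

        main :: IO ()
        main = print (filter (not . correct) userTraces)   -- the erroneous traces
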
  7. A. Blandford, R. Butterworth and P. Curzon (2001) "PUMA Footprints: Linking Theory and Craft Skill in Usability Evaluation", Proceedings of Interact 2001, pp. 577-584, IOS Press, July. pdf
    • Abstract: Footprints are marks or features of a design that alert the analyst to the possible existence of usability difficulties caused by violations of design principles. PUMA Footprints make an explicit link between the theory underlying a Programmable User Model and the design principles that can be derived from that theory. While principles are widely presented as being intuitively obvious, it is desirable that they should have a theoretical basis. However, working directly with theory tends to be time-consuming, and demands a high level of skill. PUMA footprints offer a theory-based justification for various usability principles, with guidelines on detecting violations of those principles.
  8. P. Curzon and A. Blandford (2001b) "A User Model for Avoiding Design Induced Errors in Soft-Key Interactive Systems", TPHOLs 2001: Supplementary Proceedings, edited by R. J. Boulton and P. B. Jackson, University of Edinburgh Informatics Research Report ED-INF-RR-0046, pp. 33-48.
    • Abstract: Hard-key user interfaces are ones where the interface does not change during an interaction. Soft-key interfaces, on the other hand, change the meaning of inputs such as buttons as the interaction progresses. We demonstrate how interactive systems with either kind of interface can be verified in HOL by combining a generic user model and a device specification incorporating a specification of the interface. In particular, we show how design problems which result in users systematically making errors can be detected. We have extended the user model from previous versions to detect new errors related to soft-key interfaces.
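    • A minimal sketch of the hard-key/soft-key distinction (an illustration in Haskell only, not the paper's HOL model; buttons, modes and meanings are hypothetical): because a soft-key's meaning is a function of the current mode, a user who carries a learned meaning across modes makes a systematic, design-induced error.

        data Button  = B1 | B2 deriving (Show, Eq)
        data Mode    = MainMenu | ConfirmScreen deriving (Show, Eq)
        data Meaning = Select | Back | Ok | Cancel deriving (Show, Eq)

        -- Soft-key interface: what a button does depends on the mode, so the
        -- device specification must map (mode, button) pairs to actions.
        meaning :: Mode -> Button -> Meaning
        meaning MainMenu      B1 = Select
        meaning MainMenu      B2 = Back
        meaning ConfirmScreen B1 = Ok
        meaning ConfirmScreen B2 = Cancel

        -- A user who has learned 'B2 means Back' in the main menu and presses
        -- it on the confirmation screen actually issues Cancel: a systematic,
        -- design-induced error of the kind the verification targets.
        main :: IO ()
        main = print (meaning ConfirmScreen B2)   -- Cancel, not the expected Back
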
  9. P. Curzon and A. Blandford (2000a) "Using a Verification System to Reason About Post-Completion Errors", Participants' Proceedings of DSV-IS 2000: 7th Int. Workshop on Design, Specification and Verification of Interactive Systems, at the 22nd Int. Conf. on Software Engineering, edited by P. Palanque and F. Paternò, pp. 292-308.
    • Abstract: Formal verification research has concentrated on ensuring that implementations meet specifications, or that safety or liveness properties hold of a specification. However, systems verified in this way are still prone to systematic user errors. A common class of systematic user error is the post-completion error, where the user omits some necessary termination action. Work from the fields of cognitive psychology and human-computer interaction suggests that such user errors are avoidable if systems are designed appropriately. We demonstrate that, by adopting a user-centric approach to system verification, formal proof methods and tools can both detect and prove the absence of post-completion errors. Furthermore, we show how this approach can be integrated with traditional system verification methods.
  10. P. Curzon and A. Blandford (2000b) "Reasoning about Order Errors in Interaction", in The Supplementary Proceedings of the 13th International Conference on Theorem Proving in Higher Order Logics, edited by M. Aagaard, J. Harrison and T. Schubert, Oregon Graduate Institute Technical Report CSE 00-009, pp. 33-48, August, Portland, USA. pdf
    • Abstract: We demonstrate how the HOL theorem prover can be used to detect, and prove the absence of, the family of errors known as order errors. We provide an explicit generic user model which embodies theory from the cognitive sciences about the way people are known to act. The user model describes action based on user communication goals: goals that a user adopts based on their knowledge of the task they must perform. We use a simple example of a vending machine to demonstrate the approach. We prove that a user does achieve their goal for a particular design of machine, and in doing so demonstrate that communication-goal-based errors cannot occur.
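    • The communication-goals idea lends itself to a small sketch. The Haskell fragment below is an illustration under simplified assumptions, not the paper's HOL development (the device predicates are hypothetical): the user knows what must be conveyed but not necessarily in what order, so verification must consider every ordering.

        import Data.List (permutations)

        -- Communication goals: the user knows WHAT must be conveyed
        -- (a coin, a selection), but not necessarily in what ORDER.
        data Msg = Coin | Selection deriving (Show, Eq)

        -- Hypothetical strict device: only accepts coin-then-selection.
        strictDevice :: [Msg] -> Bool
        strictDevice ms = ms == [Coin, Selection]

        -- A robust device accepts the communication goals in any order.
        robustDevice :: [Msg] -> Bool
        robustDevice ms = Coin `elem` ms && Selection `elem` ms

        -- Verification question: does every ordering of the user's
        -- communication goals succeed? If not, order errors are possible.
        orderErrorFree :: ([Msg] -> Bool) -> Bool
        orderErrorFree device = all device (permutations [Coin, Selection])

        main :: IO ()
        main = do
          print (orderErrorFree strictDevice)   -- False: an order error can occur
          print (orderErrorFree robustDevice)   -- True: goal reached in any order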

This page last modified 11 September, 2007 by George Papatzanis

 
