Distinguishing Between Use Error and Design Deviation: The Role of Human Factors Engineering

Published on January 6, 2026.

Clinical and Regulatory Context

In medical device adverse event reporting, a frequent point of contention is how the failure is categorized. Was the incident caused by a “user error” (e.g., a surgeon misreading a display or a nurse improperly loading tubing), or was it a downstream effect of the device’s user interface design?


Historically, user error was often treated as an external variable outside the manufacturer’s control. However, modern regulatory frameworks view the user interface (UI) as an integral part of the device’s safety profile. For legal and insurance professionals, analyzing the Design History File (DHF) requires distinguishing between “abnormal use” and “foreseeable misuse,” a nuance often contested in technical reviews.


Technical Overview: Human Factors Engineering (HFE)

Human Factors Engineering (HFE), often synonymous with Usability Engineering, applies knowledge of human behavior, abilities, and limitations to device design. The governing standards are ANSI/AAMI/IEC 62366-1:2015 (Application of usability engineering to medical devices) and current FDA guidance.


Under 21 CFR 820.30(g), and aligned with ISO 13485 Clause 7.3.6 under the 2026 QMSR transition, manufacturers must validate that the device design meets user needs. This ensures the device is safe and effective for the intended population in the intended environment. The engineering process typically involves two distinct phases:

  • Formative Evaluation: Iterative testing conducted during design to identify interface strengths and weaknesses. This data drives design changes to mitigate observed use errors.
  • HF Validation (Summative) Testing: The final test on the production-equivalent device. It demonstrates that the interface is safe and risk controls are effective. Note: Regulators scrutinize this phase heavily; “training away” a design issue during validation is a frequent observation in regulatory warning letters.


The “Abnormal Use” Defense vs. Regulatory Reality

A core concept in HFE is the distinction between Correct Use, Use Error, and Abnormal Use.

  • Use Error: An action or lack of action by the user that leads to a different result than intended by the manufacturer or expected by the user.
  • Abnormal Use: An act or omission by the user that is essentially reckless or intentionally destructive (e.g., using a surgical drill as a hammer).

The Litigation Nuance: While IEC 62366-1 excludes “Abnormal Use” from the usability engineering process, ISO 14971 (Risk Management) requires manufacturers to identify “reasonably foreseeable misuse.” If a “reckless” act was foreseeable (e.g., off-label use that is common practice), the manufacturer may still be obligated to evaluate risk controls.


Risk Management Framework

The FDA and ISO standards mandate a risk-based approach. If a use error is foreseeable (e.g., confusing two similar buttons on a dialysis machine), the manufacturer is expected to implement risk controls following a strict hierarchy of efficacy:

  1. Inherent Safety by Design: (Most Effective) e.g., connectors that cannot physically fit into the wrong port.
  2. Protective Measures: e.g., guards, shields, or safety interlocks.
  3. Information for Safety: (Least Effective) e.g., warnings in the Instructions for Use (IFU) or training.

Reliance solely on “Information for Safety” is often a point of contention. If a design modification was feasible but the manufacturer opted for a warning label to save cost, this is often viewed as a failure to reduce risk “as far as possible” (AFAP) or “as low as reasonably practicable” (ALARP), depending on the jurisdiction.
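The hierarchy and the “labeling only” concern above can be sketched as a simple review check. This is an illustrative helper, not part of any standard or regulatory tool; the enum values and function names are assumptions made for the example.

```python
from enum import IntEnum


class RiskControl(IntEnum):
    """Risk controls ordered by efficacy (higher value = more effective),
    mirroring the priority order in ISO 14971 / IEC 62366-1."""
    INFORMATION_FOR_SAFETY = 1     # warnings, IFU text, training
    PROTECTIVE_MEASURE = 2         # guards, shields, interlocks
    INHERENT_SAFETY_BY_DESIGN = 3  # e.g., keyed connectors that cannot misconnect


def flag_labeling_only(controls: list[RiskControl]) -> bool:
    """Return True when every control applied to a hazard is
    'Information for Safety' -- the pattern most often contested
    under AFAP/ALARP when a design fix was feasible."""
    return bool(controls) and all(
        c == RiskControl.INFORMATION_FOR_SAFETY for c in controls
    )


# A hazard mitigated only by an IFU warning is flagged for scrutiny;
# one backed by an inherent design control is not.
print(flag_labeling_only([RiskControl.INFORMATION_FOR_SAFETY]))  # True
print(flag_labeling_only([
    RiskControl.INHERENT_SAFETY_BY_DESIGN,
    RiskControl.INFORMATION_FOR_SAFETY,
]))  # False
```

In a real risk file, of course, acceptability depends on the residual risk evaluation, not merely on which control category was used; the sketch only encodes the screening question a reviewer would ask first.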


Litigation and Claims Context

When analyzing an incident involving potential use error, an independent engineering review centers on the Usability Engineering File and the Risk Management File. A forensic analysis typically focuses on these key areas:

  • Risk Assessment Methodology (RPN vs. ISO 14971): Did the manufacturer use a legacy Risk Priority Number (Severity × Occurrence × Detection) to justify the risk? Modern standards discourage using “Detection” as a factor to lower risk scores for safety-related hazards. The focus must be on Severity and Probability of Harm.
  • uFMEA (Use Failure Mode and Effects Analysis): Did the manufacturer identify the specific use error that occurred? If identified, was the risk accurately categorized, or was it minimized to avoid design iteration?
  • Validation Study Participants: Did the test participants represent the actual user population? (e.g., testing a home-use autoinjector on nurses rather than arthritic patients is a critical validation deviation).
  • “Close Calls” and “Recoverable Errors”: During validation, did the manufacturer record user difficulties (close calls), or were these excluded from the final report? A “close call” in a controlled test is often considered a predictor of failure in the real-world environment.
  • Post-Market Surveillance (CAPA): Has the manufacturer received complaints regarding similar use errors? Under 21 CFR 820.100, failure to feed these inputs back into the risk management process may indicate a quality system non-conformance.
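The RPN critique in the first bullet can be made concrete with a numerical sketch. All scores, thresholds, and decision rules below are hypothetical and illustrative only; they are not drawn from any real risk file or from the standards themselves.

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Legacy Risk Priority Number on 1-10 scales (in the common uFMEA
    convention, a low detection score means the error is easily detected)."""
    return severity * occurrence * detection


def acceptable_by_rpn(severity: int, occurrence: int, detection: int,
                      threshold: int = 100) -> bool:
    # Criticized legacy practice: accept any risk whose RPN falls below a
    # fixed threshold -- a strong detection score can carry the result.
    return rpn(severity, occurrence, detection) < threshold


def acceptable_by_iso14971(severity: int, probability_of_harm: int) -> bool:
    # Simplified severity x probability-of-harm rule: high-severity harms
    # are unacceptable unless probability is very low, regardless of
    # detectability. (Illustrative cutoffs, not from the standard.)
    if severity >= 8:
        return probability_of_harm <= 1
    return severity * probability_of_harm < 15


# Hypothetical high-severity use error: severe harm (9), occasional
# occurrence (4), but an alarm makes it "easily detected" (2).
print(acceptable_by_rpn(9, 4, 2))    # True  -- RPN of 72 masks the risk
print(acceptable_by_iso14971(9, 4))  # False -- severity governs the outcome
```

The divergence between the two results is precisely the point contested in technical reviews: the detection factor lets a severe, reasonably probable harm score as “acceptable” under the legacy method, while a severity-and-probability evaluation flags it.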


Conclusion

The distinction between user culpability and design deviation is defined by the thoroughness of the Human Factors Engineering process. In the regulatory landscape, a device is part of a system involving the user, the environment, and the interface. A comprehensive HFE program demonstrates that a manufacturer has systematically analyzed the UI to reduce risk. Conversely, the absence of such analysis in the Design History File may suggest a deviation from standard industry practices regarding foreseeable misuse.