Validation of GxP Systems

Summary: Navigating an alphabet soup of acronyms to create a cohesive and pragmatic approach to validating your systems can be daunting. This first in a series of articles provides an overview of how to approach GxP validation.

Overview

We use technology to augment every aspect of our lives. Patient care and clinical R&D are no different. Technology is critical to managing risk, ensuring patient safety, verifying data quality, and helping trials complete as efficiently as possible. Although we expect the technology we use in other parts of our lives to be high quality, in the clinical arena regulatory authorities have mandated that the quality of software systems must be verified. This is especially important for systems that collect, control, and report on patient data.

A set of FDA regulations and guidances laid the groundwork for what the agency expects an organization to do in order to prove that the system it intends to use to collect data is secure, reliable, and suited to its intended use. The primary intention of these rules is to ensure the validity of every piece of data in the system, so that at any given time you can recreate the value of a data point as it was first collected (or first generated, in the case of data calculated from other data) and every change to that value that has occurred since its initial entry.

Essentially, computer system validation is meant to prove that the system collects data accurately and that it captures all changes to each data point thereafter. This would be an enormous undertaking if it were not limited in scope – testing every conceivable use case of the system and every potential method of manipulating data after it is collected. Therefore, the regulations stipulate that the organization (the system's users) must validate that the system functions correctly for its intended business use.
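
To make this concrete, here is a minimal sketch in Python of a data point that keeps its original value alongside an audit trail of every subsequent change. The field names are illustrative assumptions, not a prescribed schema from any regulation or product:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List, Optional

    @dataclass
    class AuditEntry:
        """One change to a data point: what changed, who changed it, when, and why."""
        old_value: Optional[str]
        new_value: str
        changed_by: str
        changed_at: datetime
        reason: str

    @dataclass
    class DataPoint:
        """A collected value plus the full history of every change made to it."""
        name: str
        original_value: str
        collected_by: str
        collected_at: datetime
        history: List[AuditEntry] = field(default_factory=list)

        def update(self, new_value: str, user: str, reason: str) -> None:
            # Record the change instead of overwriting the prior value, so the
            # value at any point in time can be reconstructed from the history.
            current = self.history[-1].new_value if self.history else self.original_value
            self.history.append(AuditEntry(
                old_value=current,
                new_value=new_value,
                changed_by=user,
                changed_at=datetime.now(timezone.utc),
                reason=reason,
            ))

Validation, then, is about demonstrating that the system actually behaves this way – that nothing is lost or silently altered between collection and reporting.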

Note: The term “validation” gets thrown around in a lot of different situations. What we mean by validation here is a formal process for proving a system does what it was built to do and that it functions as the user organization needs it to. 

Categories of Validation Testing

There are three types of validation testing:

  1. Installation Qualification (IQ)
  2. Operational Qualification (OQ)
  3. Performance Qualification (PQ) or User Acceptance Testing (UAT)

These categories do not cover testing by the software manufacturer, which is performed as part of normal software development. Validation testing is performed against released software, that is, software that has been made generally available for sale by the manufacturer. 

Installation Qualification (IQ) – The system manufacturer is expected to provide the software in a form that allows it to be installed on a host computer system. At a basic level, IQ is intended to prove that the system installs correctly when the procedure provided by the vendor is followed. It typically also includes steps to configure the system for its intended use.

The goal of IQ is to prove the reliability and repeatability of the installation process. Beyond the initial installation performed during implementation, subsequent installations of software updates (often termed “upgrades” or “patches”) must also be tested to ensure the update does not impact collected data. Further, the manufacturer must document which sections or modules of the system a given patch will modify.
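
As a rough illustration, part of an IQ is often automated as a post-installation check. The sketch below assumes a hypothetical vendor-supplied manifest file (install_manifest.json) listing the expected files and their SHA-256 checksums, and verifies that the installed artifacts match it:

    import hashlib
    import json
    from pathlib import Path

    def verify_install(manifest_path: str, install_root: str) -> list[str]:
        """Compare installed files against the vendor's manifest of expected
        SHA-256 checksums; return a list of discrepancies for the IQ report."""
        manifest = json.loads(Path(manifest_path).read_text())
        failures = []
        for relative_path, expected_sha256 in manifest["files"].items():
            target = Path(install_root) / relative_path
            if not target.exists():
                failures.append(f"MISSING: {relative_path}")
                continue
            actual = hashlib.sha256(target.read_bytes()).hexdigest()
            if actual != expected_sha256:
                failures.append(f"CHECKSUM MISMATCH: {relative_path}")
        return failures

    if __name__ == "__main__":
        issues = verify_install("install_manifest.json", "/opt/clinical_system")
        print("IQ check passed" if not issues else "\n".join(issues))

A real IQ protocol would also record environment details (operating system, database version, and so on) and capture the results as signed evidence.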

For a more extensive explanation of IQ, see this article. 

Operational Qualification (OQ) – Software is designed and developed to meet certain functional requirements – the way the system is generally supposed to work. The OQ tests are meant to ensure that the system functions per those requirements.
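
As a simple illustration, OQ scripts are typically traced back to specific functional requirements. The pytest-style sketch below uses a hypothetical requirement ID and a hypothetical calculate_bmi function; the point is the one-to-one mapping between a stated requirement and the test that proves it:

    import pytest

    def calculate_bmi(weight_kg: float, height_m: float) -> float:
        """Hypothetical system function under test: BMI derived from collected data."""
        return round(weight_kg / (height_m ** 2), 1)

    # REQ-042 (hypothetical): the system shall calculate BMI from weight and
    # height and round the result to one decimal place.
    def test_req_042_bmi_calculation():
        assert calculate_bmi(70.0, 1.75) == 22.9

    # REQ-043 (hypothetical): the system shall reject a height of zero rather
    # than storing an invalid derived value.
    def test_req_043_rejects_zero_height():
        with pytest.raises(ZeroDivisionError):
            calculate_bmi(70.0, 0.0)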

For a more extensive explanation of OQ, see this article. 

Performance Qualification or User Acceptance Testing (PQ or UAT) – This level of testing focuses on recreating the primary use cases that the users of the system will perform once the system is put into production. The testing should be based on the organization’s user and business requirements, as well as its policies and procedures around security and access. Typically, at least some of these tests are performed by the actual end users of the system, to ensure the tests accurately reflect how those users interact with it.
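
One way to picture a PQ/UAT script is as a sequence of steps, each traced to a business requirement or SOP and executed and signed off by an actual end user. The sketch below is purely illustrative – the requirement IDs, role names, and workflow are assumptions, not a prescribed format:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UatStep:
        """One step of a PQ/UAT script, traced to a business requirement or SOP."""
        requirement: str          # business requirement or SOP reference
        action: str               # what the end user does in the system
        expected_result: str      # what should happen, per the business process
        actual_result: str = ""
        passed: Optional[bool] = None
        tester: str = ""

    enrollment_script = [
        UatStep(
            requirement="BR-010 / SOP-CL-007",
            action="Data coordinator logs in with a read/write role and opens the enrollment form",
            expected_result="Form opens with fields restricted per the coordinator's access rights",
        ),
        UatStep(
            requirement="BR-011",
            action="Coordinator enters a duplicate subject ID",
            expected_result="System blocks the entry and displays a duplicate-ID warning",
        ),
    ]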

For a more extensive explanation of PQ/UAT, see this article. 

SaaS vs. On-premise Software

The system may be installed on a server in the user organization’s server room (“on-premise”), at a remote facility owned and managed by a hosting company (often referred to as a co-location), or, increasingly these days, with a cloud service provider such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud, or a similar vendor. If the manufacturer offers the system as hosted software-as-a-service (SaaS), it typically uses one of the major cloud service providers or perhaps a smaller vendor with a focus on the life sciences. In either case, the system is typically provided to clients as a multi-tenant offering, meaning a single system is served to multiple clients/users, with security amongst the tenants handled logically.

From a validation perspective, there are a number of benefits to utilizing a SaaS system.

  • the technology vendor is responsible for installing and maintaining the software
  • the vendor is responsible for the majority of validation testing, specifically, the IQ and OQ, allowing the user to focus on PQ/UAT
  • often, the scripts the vendor creates for OQ can be leveraged and updated to serve as the basis for PQ scripts
  • given multi-tenancy, it is likely the vendor will commit to a reliable upgrade/maintenance schedule, providing the user with plenty of lead time to prepare for patches and upgrades

On the other hand, multi-tenancy can present challenges for the user organization in maintaining the system in a validated state.

  • As one user (or tenant) amongst many, you have limited control over which patches the vendor decides to apply to the system. If a large client’s operations are impacted by a bug that is specific to that client’s use of the system, the vendor may decide to apply an i-patch (a software update that is very specific and limited in what issue it addresses) even though most of its clients are not affected, and it may provide little lead time to prepare.
  • Similarly, if your business use of the system becomes severely hampered by a bug, you will probably have to wait for a regularly scheduled patch/upgrade before the issue is addressed. This means that you may have to manage the issue by utilizing a workaround, which may substantially impact your workflow efficiency and/or data quality. 
  • As one tenant amongst many, you will not have the capability to customize the system for your specific business needs. This is generally not a good idea anyway – it is preferable to work within the configuration options – but it is worth noting.

See also: Validating SaaS systems; Your Technology Vendor is your Validation Partner; Creating your Validation Story; Common Validation Terms
