We rely on technology in most areas of our lives. The tools we use to get through our day are now built, operated, and managed via technology. The understanding we have reached with technology is that it will do what it's supposed to do, with minimal interference or oversight from us, and that it will do so consistently and reliably every time. We turn a knob and the stove heats up to precisely the temperature we designate. Push a button and the car starts up – in some cases while we are still in our house. All of this happens in a repeatable and reliable fashion. And if it didn't, the repercussions would be manageable – read the manual, google it, call the help desk, etc. In the meantime, we would work around the problem and manage without the technology for a short period of time. 

Technology is also used to provide much more important functions than our stove or car. Where the stakes are higher, the benefits of technology become even more important. Keeping our electrical grid up and running, ensuring our drinking water and wastewater systems are handled and supported correctly, regulating our traffic lights to ensure the flow of cars without the downside of accidents – all of these beneficial functions make various aspects of life safer while allowing us to focus on other parts of life. 

An area where the benefits of technology are essential, and where reliability and consistency are critical, is the systems that support our health and well-being. Safeguarding our health – and, more importantly, that of our loved ones – is perhaps the most important benefit of technology. If we and our loved ones are not healthy, all of the other benefits that technology provides matter far less. 

So, the appropriate management of healthcare technology is in everyone's best interest. The many types of technology that help us confirm we are well, diagnose problems when we are not, and help our doctors cure us of diseases must work correctly, be corrected when there is a problem, and be improved over time; ensuring that they do is a critical part of technology management. A key part of managing these systems is risk assessment and management. 

When we are talking about a clinical technology system, such as one used to run a clinical trial or manage medical inquiries, the regulations are even more specific and detailed than for more general healthcare systems. The regulations require that the system manufacturer follow an established process to delineate requirements, design the system, write the system code, and conduct unit and system testing. Further, the user of the system must test the system as it is installed and configured for their specific intended use – that is, how the user organization will use the system in its business practice. The testing at both levels must be performed initially and then again as the software is updated, whether to address issues (bugs) or to add new functionality. 

The testing, and the documentation of that testing, represent a significant amount of work. Testing that offers little or no benefit – meaning it does not exercise important parts of the system – can be minimized. Testing of those parts of the system that provide significant benefit, or pose high risk to the data collected, should be maximized to ensure that all realistic scenarios are covered. 

The most effective approach to ensuring a technology system is working correctly – whether it is already available for general use, a brand new system that is still being tested, or an upgraded and improved version of an existing system – is to use a risk-based approach to evaluating, testing, and approving the system. We use this process because, given finite time and resources, it allows us to apply pragmatism and critical thinking to prioritize our quality control efforts. Identifying the risks that a system malfunction may pose to a patient or to patient data allows us to objectively analyze a number of key criteria and to selectively focus testing on certain parts of the system. 

What is a risk?

Risk has been defined in many different ways; often, the definition depends on the area of interest. The definition that we employ here is simply "the potential for an event to occur". There are two implications of this definition: 1) a risk can be positive, negative, or neutral, and 2) the occurrence is generally considered to be outside the normal set of occurrences – that is, it does not happen often. Although these conditions hold in general, we typically focus only on risks of negative occurrences; still, it is useful to remember that some infrequent and perhaps unexpected occurrences may be beneficial. 

When we focus on negative risk, the motivation is to collect all possible risks in a given area of interest so that we can evaluate them in relation to one another and, for those considered important, determine ways to avoid or mitigate them. Typically, an initial 'risk assessment' is conducted, along with a risk mitigation plan. These coordinated reports set the baseline for evaluation of the system, the possible risks that have been identified, and potential methods to avoid or mitigate those risks. It is good practice to periodically revisit the risks you have identified, because risks change as the condition of the system, the ways it is used, and the people and other systems that interact with it change. 

Components of risk

Although there are many types of risks we can identify and many subjective ways to compare them, it is generally preferable to make decisions regarding the assessment and management of risks using a consistent set of criteria applied across the system, so that the full set of risks can be compared objectively.

  1. Impact – An assessment of how severe the consequences would be should the risk occur. Clearly, more impactful risks warrant more consideration and planning than less impactful ones. 
  2. Probability – The likelihood of the risk occurring. The higher the probability, the more closely the risk should be assessed and tracked. 
  3. Detectability – The ability of a user or a system to observe that the risk has occurred. A risk that is difficult to detect will generally justify more evaluation and assessment than one that is easily noticed should it occur. 

Taking these factors together and assigning relative values in each category for every risk, we can rank and prioritize the set of risks we have assembled. A risk that is highly impactful, very likely to occur, and not easily detectable sits at one end of the spectrum, while a risk that is not impactful, not likely to occur, and easily detectable sits at the other. You evaluate each risk accordingly to determine which should be prioritized. For that prioritized set, your focus should be on determining whether you can remove or avoid the risk or, if that is not possible, mitigate it. 
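
To make the ranking concrete, here is a minimal sketch in Python of one way the three factors might be combined into a single priority score. The 1-5 scales, the example risks, and the multiplicative score (similar in spirit to a risk priority number) are illustrative assumptions only, not a prescribed or regulated method.

    # A minimal sketch of ranking risks by impact, probability, and detectability.
    # The 1-5 scales, the example risks, and the multiplicative score are
    # illustrative assumptions only, not a prescribed or regulated method.
    from dataclasses import dataclass

    @dataclass
    class Risk:
        name: str
        impact: int         # 1 = negligible consequences, 5 = severe
        probability: int    # 1 = rare, 5 = almost certain
        detectability: int  # 1 = easily detected, 5 = very hard to detect

        def score(self) -> int:
            # Higher score = higher priority: impactful, likely, and hard to detect.
            return self.impact * self.probability * self.detectability

    risks = [
        Risk("Lab value saved to the wrong patient record", 5, 2, 4),
        Risk("Report footer shows the wrong page count", 1, 3, 1),
        Risk("Audit trail silently stops recording edits", 4, 1, 5),
    ]

    # Rank the assembled set of risks from highest to lowest priority.
    for risk in sorted(risks, key=lambda r: r.score(), reverse=True):
        print(f"{risk.score():>3}  {risk.name}")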

Mitigation strategies include methods to reduce the impact a risk would have should it occur, reduce the probability that the risk will occur, and/or take actions to make the risk more detectable should it occur. Often the same mitigation action can positively affect several similar risks, which is ideal. 
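
Reusing the hypothetical Risk class and risks list from the sketch above, a mitigation – say, an automatic alert when the audit trail stops recording – would lower that risk's detectability value and, with it, its priority score:

    # Continuing the sketch above: an automatic alert makes the audit-trail
    # failure easier to detect, so its detectability value (and its score) drops.
    audit_risk = risks[2]
    print("Before mitigation:", audit_risk.score())  # 4 * 1 * 5 = 20
    audit_risk.detectability = 2                     # alert flags the failure quickly
    print("After mitigation: ", audit_risk.score())  # 4 * 1 * 2 = 8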
