
Monday, November 25, 2019

Supplement to Part IV Submissions to FDA: Risk Management and Use Errors

Additional Thoughts:

As part of your risk analysis and use error identification, you should identify the circumstances or origin of the reported use error. 


  • Origins of reports of use errors, in order of likely relevance:
    1. Field-reported use errors: These require special attention because they were discovered by actual users under actual conditions of use, which is why field-reported use errors are the most important. They deserve particular scrutiny if the use error was responsible for any harm or if it occurs significantly more often than originally predicted. Consider performing a root cause analysis to determine why the use error occurred, factoring in the use and environmental conditions and the characteristics of the person who made the error. Be sure to determine which assumptions regarding use, conditions of use, and predicted user characteristics were violated.
    2. Errors reported from empirical studies: These are use errors observed under laboratory or other testing conditions defined by researchers, committed by members of the expected user population(s) who have the expected level of education and training. Because the conditions of use, including the environment, are structured and manipulated by the researchers, the use errors detected may be valid but narrow in scope with respect to the situations of use, the use environment, and actual user characteristics such as education and training.
    3. Analysis based:
      • Scenarios: A scenario is a set of connected events with a beginning, a series of possible steps or actions, and an end point. Scenarios are generally derived from real-world knowledge of the environment, the people involved -- their characteristics such as education, training, responsibilities, and experience -- and the kinds of actions they would take with the systems and devices in development in order to accomplish a particular task. Scenarios also consider possible paths and actions that would lead to a use error. Scenarios, particularly worst-case scenarios, can be an effective means of detecting possible use errors. With newly designed products and systems, this may be one of the first means of identifying possible use errors and determining their potential harm. Nevertheless, scenario-based use errors come from thought experiments and lack empirical validation.
      • Brainstorming: Brainstorming is an unstructured, free-form process of analysis for capturing possible use errors. Brainstorming sessions can be a particularly useful means of capturing conditions and use errors that may not have been seen or considered using other methods. However, use errors derived from brainstorming are not based on empirical evidence, even though those who uncover them often have high levels of expertise and experience in the technical area under consideration.
Each process has its own value, and every method for detecting or originating possible use errors should be considered. When reporting use errors in the submission, I suggest including where each reported use error originated in the final submitted report, and including a summary of where use errors originated in the narrative (a minimal record structure is sketched below).
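
As an illustration only, here is a minimal Python sketch of how a use-error record might carry its origin alongside potential harm and severity. The field names, severity scale, and example entry are my own assumptions, not anything prescribed by the FDA or IEC 62366-1.

from dataclasses import dataclass
from enum import Enum

# Assumed origin categories, mirroring the list above.
class Origin(Enum):
    FIELD = "field report"
    EMPIRICAL = "empirical study"
    SCENARIO = "scenario analysis"
    BRAINSTORM = "brainstorming"

# Assumed severity scale; substitute whatever scale your risk process defines.
class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class UseErrorRecord:
    identifier: str                 # e.g. "UE-042" (hypothetical numbering)
    description: str                # what the user did or failed to do
    origin: Origin                  # where the report came from
    potential_harm: str             # clinical consequence, in plain language
    severity: Severity              # categorized level of harm
    root_cause: str = ""            # filled in after root cause analysis
    violated_assumption: str = ""   # which use/user/environment assumption failed

# Hypothetical example of a field-reported use error.
example = UseErrorRecord(
    identifier="UE-042",
    description="User selected the mg/kg dosing screen instead of mg",
    origin=Origin.FIELD,
    potential_harm="Medication overdose",
    severity=Severity.CRITICAL,
    violated_assumption="Users always notice the active dosing unit",
)

A table of such records makes it straightforward to state in the narrative how many use errors came from each origin.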





Wednesday, November 20, 2019

Submission of the Human Engineering File to the FDA and Other Regulatory Bodies: Part IV

Section 5: Analysis of hazards and risks associated with use of the device


This section is one of the most important sections of your submission. Your narrative should summarize your findings, highlight any notable findings from your risk analysis, and include any important steps taken to mitigate risks.
5. Analysis of hazards and risks associated with use of the device:
  • Potential use errors
  • Potential harm and severity of harm that could result from each use error
  • Risk management measures implemented to eliminate or reduce the risk
  • Evidence of effectiveness of each risk management measure
Risk assessment and management for human engineering for medical devices focuses on the identification and management of use errors, as defined in IEC 62366 (Part 1). It is valuable for representatives from human engineering to be part of the risk management team because of the additional insights that experienced human engineering professionals can bring to the process of identifying, managing, and mitigating risks. However, the specific focus of the human engineering file with respect to risk management is on use errors.

The narrative should be both a summary and a means of highlighting events and areas of specific importance. This section can be relatively brief, but it should include the following information.


Potential Use Errors, Identifying Harm and Severity of Harm


Identify the number of use errors discovered, categorized according to their level of harm/severity. Any use errors categorized as high or critical should be highlighted. Be certain to include the method or methods used to identify and categorize potential use errors: this could include methods such as general system analysis, scenarios, field data, etc.
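
To make that count concrete, here is a small, purely illustrative Python sketch that tallies use errors by severity and pulls out the high and critical ones for highlighting. The identifiers and severity labels are hypothetical, not a required scheme.

from collections import Counter

# Hypothetical (identifier, severity) pairs drawn from a risk analysis.
use_errors = [
    ("UE-001", "low"),
    ("UE-002", "medium"),
    ("UE-003", "high"),
    ("UE-004", "critical"),
    ("UE-005", "medium"),
]

counts = Counter(severity for _, severity in use_errors)
highlight = [ue for ue, severity in use_errors if severity in ("high", "critical")]

print("Use errors by severity:", dict(counts))
print("Use errors to highlight in the narrative:", highlight)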


Risk Management Measures Implemented to Eliminate or Reduce the Risk


Once you've identified the potential use errors, discuss what was done to mitigate or eliminate them. (Of note: root cause analysis is a particularly effective method for understanding use errors and correcting them.) In the narrative you can make general statements regarding what was done to manage use errors of medium risk and lower. For high- and severe-risk use errors, the narrative should include the specifics of how those risks were managed. Risk management can include anything from changes in design, system error trapping, and error prevention measures to updates to the instructions.
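
One way to keep that distinction explicit in your records is sketched below in Python; it is illustrative only, and the identifiers, risk levels, and mitigation descriptions are my assumptions. Each use error is linked to the mitigation applied, so high- and severe-risk items can be reported individually while lower-risk items are summarized.

# Hypothetical mapping of use errors to mitigation measures and risk levels.
mitigations = {
    "UE-003": {"risk": "high", "measure": "redesigned dosing screen", "type": "design change"},
    "UE-004": {"risk": "severe", "measure": "confirmation dialog before delivery", "type": "error trapping"},
    "UE-007": {"risk": "medium", "measure": "clarified instructions for use", "type": "labeling"},
}

# High- and severe-risk items get individual narrative detail; the rest are summarized as a group.
detailed = {ue: m for ue, m in mitigations.items() if m["risk"] in ("high", "severe")}
summarized_count = len(mitigations) - len(detailed)

print("Report individually:", list(detailed))
print("Summarize as a group:", summarized_count, "lower-risk use errors")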


Evidence of Effectiveness of Each Risk Management Measure


Finally, you will need empirical data to demonstrate that the use errors identified have been properly addressed. That will require testing with members of the targeted user population. Your narrative should include a summary (abstract) of the study or studies you performed, and be sure to provide clear references to the documentation included in your submission.
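As a purely illustrative sketch (the participant counts, task outcomes, and acceptance rule are my assumptions, not FDA requirements), the kind of tabulation behind that summary might look like this: for each previously identified use error, record how many validation participants committed it and whether the mitigation held up.

# Hypothetical validation results: use error id -> (participants tested, occurrences observed)
validation_results = {
    "UE-003": (15, 0),   # high-severity error; mitigation was a redesigned dosing screen
    "UE-004": (15, 1),   # critical error; mitigation was a confirmation dialog
}

# Assumed acceptance rule: a mitigated high/critical use error should not recur in validation.
for ue_id, (n_participants, occurrences) in validation_results.items():
    status = "mitigation effective" if occurrences == 0 else "residual risk - investigate"
    print(f"{ue_id}: {occurrences}/{n_participants} participants committed the error -> {status}")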

Note: if your targeted user population is particularly narrow and tightly specified, consider including members from a wider group. For example, if the targeted group consists of ICU nurses, consider including general hospital RNs in your user testing group. All too often, medical devices intended for one highly trained group end up being used by others who may be technically astute but lack that group's specific training.

A Word to the Wise ...

You should show that your human engineering program assessed risks, identified use errors, and mitigated them long before reaching verification and validation (V&V) testing. This should have been addressed in your formative research and by periodic testing before the V&V stage. V&V should not be the place to uncover use errors; it should be the place that validates all of the work you've done up to that point. It's likely that V&V will uncover some areas of concern, but these should be relatively minor and relatively easily addressed. If you have discovered numerous problems at the point of V&V, then you have a problem with your human engineering process, and revamping that process should become a major focus of your organization.






Sunday, November 25, 2018

International Medical Device Database

For anyone interested in medical device safety, you should bookmark this website: https://medicaldevices.icij.org

It has been created by the International Consortium of Investigative Journalists to:

"Explore more than 70,000 Recalls, Safety Alerts and Field Safety Notices of medical devices and their connections with their manufacturers." 

Wednesday, March 25, 2015

New York Times Opinion: Why Health Care Tech Is Still So Bad

This was an opinion piece published 21 March 2015 in the New York Times, written by Robert M. Wachter, Professor of Medicine at the University of California, San Francisco, and author of "The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine's Computer Age."

Here's the link to the article: http://www.nytimes.com/2015/03/22/opinion/sunday/why-health-care-tech-is-still-so-bad.html?smid=nytcore-ipad-share&smprod=nytcore-ipad

I have commented on several quotes from the article.

1. "Even in preventing medical mistakes — a central rationale for computerization — technology has let us down. (My emphasis.) A recent study of more than one million medication errors reported to a national database between 2003 and 2010 found that 6 percent were related to the computerized prescribing system.

At my own hospital, in 2013 we gave a teenager a 39-fold overdose of a common antibiotic. The initial glitch was innocent enough: A doctor failed to recognize that a screen was set on “milligrams per kilogram” rather than just “milligrams.” But the jaw-dropping part of the error involved alerts that were ignored by both physician and pharmacist. The error caused a grand mal seizure that sent the boy to the I.C.U. and nearly killed him.

How could they do such a thing? It’s because providers receive tens of thousands of such alerts each month, a vast majority of them false alarms. (My emphasis.) In one month, the electronic monitors in our five intensive care units, which track things like heart rate and oxygen level, produced more than 2.5 million alerts. It’s little wonder that health care providers have grown numb to them."

Comments: Before I read the third paragraph, I was thinking: How can you blame the computer when it provided you with an alert regarding the prescribing error that you made?

It is well known that when a system produces a high percentage of false alarms, those alarms will over time be ignored or discounted. I consider this a devastating indictment. We must do better.

I have been a human factors engineer and researcher for decades. One of the mantras of human factors is preventing errors; that's central to what we're about. But if the systems we help engineer generate false alarms at a rate that has our users ignoring the correct ones, then we have failed, and failed miserably.

I think the problem of false alarms requires further research and commentary.
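
To see why numbness sets in, consider a back-of-the-envelope calculation; the sensitivity, specificity, and event prevalence below are assumed numbers chosen only for illustration. Even a monitor with good nominal accuracy produces mostly false alarms when true events are rare.

# Illustrative only: assumed sensitivity, specificity, and prevalence of true events.
sensitivity = 0.99      # probability the monitor alarms given a true event
specificity = 0.95      # probability the monitor stays quiet given no event
prevalence = 0.001      # assumed fraction of monitored intervals containing a true event

true_alarm_rate = sensitivity * prevalence
false_alarm_rate = (1 - specificity) * (1 - prevalence)

# Positive predictive value: the fraction of alarms that reflect a real event.
ppv = true_alarm_rate / (true_alarm_rate + false_alarm_rate)
print(f"Only {ppv:.1%} of alarms signal a real event")   # roughly 2% under these assumptions

Under these assumed numbers, roughly 98 percent of alarms are false, which is precisely the condition under which clinicians learn to tune them out.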


2. "... despite the problems, the evidence shows that care is better and safer with computers than without them."

Commentary: This is nice to read, but we as medical technologists need to do better. We really need to follow up on the repercussions of the technology we create when it is deployed and used in the field.


3. "Moreover, the digitization of health care promises, eventually, to be transformative. Patients who today sit in hospital beds will one day receive telemedicine-enabled care in their homes and workplaces."

Commentary: I agree. Of course that's a central theme of this blog.


4. "Big-data techniques will guide the treatment of individual patients, as well as the best ways to organize our systems of care. ... Some improvements will come with refinement of the software. Today’s health care technology has that Version 1.0 feel, and it is sure to get better.

... training students and physicians to focus on the patient despite the demands of the computers.

We also need far better collaboration between academic researchers and software developers to weed out bugs and reimagine how our work can be accomplished in a digital environment."

Commentary: Agreed again. But I believe that technologists can't just dump these systems into healthcare environments without significant follow-up research to ensure that they provide or suggest the correct treatment programs and effectively monitor patients. Investment in systems like these will be cost effective and improve lives, but only if the necessary level of care and follow-up is performed.


5. "... Boeing’s top cockpit designers, who wouldn’t dream of green-lighting a new plane until they had spent thousands of hours watching pilots in simulators and on test flights. This principle of user-centered design is part of aviation’s DNA, yet has been woefully lacking in health care software design."

Commentary: All of this is true. As noted above, it would be a good idea to do more extensive research on medical systems before we deploy them to the field as well. That this is not done may be a regulatory issue: the FDA has not required the kind of rigorous research performed in aircraft cockpit design. Right now, all that appears to be required is a single verification and a single validation test before allowing commercialization. I think it would be valuable for regulators to require more research in real or simulated settings before allowing companies to commercialize their products.

Or regulators could require more extensive follow-up research. Grant companies the right to sell their medical products on a probationary basis for, say, one year after receiving initial commercialization certification. During that year, the company must perform follow-up research on how its medical product performs in real environments. If there are no significant problems, such as an overabundance of false alarms, then the product would no longer be on probation and would be considered fully certified for commercialization.
However, if significant problems emerge, the FDA could:

a) continue to keep the product in probationary status pending correction of those problems and another year of follow-up research, or

b) require the withdrawal of the product from sale. A product that had been withdrawn would have to go through the entire commercialization certification process, just as if it were a new product, before commercialization and sale would again be allowed.


A final thought ... I think there's a reality in commercial aviation that is not true in medicine. If commercial aircraft killed and injured as many people as are killed and injured by medical practitioners, commercial aviation would come to a halt. People would refuse to fly because they would perceive it to be too dangerous. But if you're sick, you have little choice but the clinic, ER, or hospital.







Friday, March 20, 2015

Biotronik Eluna Pacemaker System Given FDA Approval for MRI Full Body Scans

A brief note ... the FDA has given approval for patients implanted with the Biotronik Eluna pacemaker to safely undergo full-body MRI scans. Note, this approval does NOT say that this pacemaker is MRI-safe. Someone implanted with this pacemaker cannot be placed in an MRI without making the necessary changes.

The pacemaker is MRI-conditional, meaning that it is safe for a patient with a Biotronik Eluna pacemaker with ProMRI technology to undergo a full-body MRI scan under the condition that the pacemaker's settings are properly set to their MRI-conditional values.

... Sometimes I worry that announcements like these can be misinterpreted and could lead to something bad happening. I've seen an article regarding a Biotronik pacemaker that stated the pacemaker was "MRI-safe," which wasn't true. As of this date, and as far as I know, there are no MRI-safe pacemakers. The ones for which the FDA has approved MRI compatibility are all MRI-conditional.

Here are links to two articles announcing FDA's approval:

Friday, July 1, 2011

Some Articles of Interest Before the 4th

I came across two long investigative articles that I thought could be of interest to those in the medical products field. One article is from the National Journal and the other from Pro Publica. Here are the links to the articles with short clips.


Medical journals have long had to wrestle with the possibility that financial bias influences the work they publish, but if the growing controversy over Medtronic's Infuse spinal product is any indication, they may not be doing enough.

Comment: This is an area that should concern everyone in the field of medical devices and device research. I am very aware that companies fund a lot of empirical and academic research, much of which is published in peer-reviewed and respected medical journals. On the face of it, there is nothing wrong with that. When I was a graduate student, some of my research was funded by the research and development division of a well-known (non-medical) company. The funding had absolutely no bearing on the design of the research program, the data collected, or the interpretation of the data. The concern expressed in this article is whether data may be suppressed or not reported in an unbiased fashion, particularly when it comes to reporting data related to risks. You be the judge.


Critics of last year’s health care law pounced on what seemed like a damning new survey, but the details were a lot murkier than the headlines.

Comment: This is an interesting article well worth your time to read.


Finally, here's a short article that I just came across indicating how rural health may well be the driver behind telemedicine. Here's the link to the article:


Rural Healthcare to Drive the Global Telemedicine Industry

...[C]ountries face various problems in the provision of medical services and health care, including funds, expertise, and resources. To meet this challenge, the governments and private health care providers are making use of existing resources and the benefits of modern technology. Besides, with limited medical expertise and resources, telecommunication services have the potential to provide a solution to some of these problems. As telemedicine has the potential to improve both the quality and the access to health care regardless of the geography; the rural market is driving the incessant growth of the telemedicine market.

Tuesday, June 28, 2011

Hacking Grandpa's ICD: Why do it?

Background

I am part of another professional discussion group with an interest in medical data, system, and device security.  One of the topics was whether medical devices are a likely target for cyber-attacks.  I contributed to the discussion and stated that, although unlikely, I believed medical devices would eventually be targets of cyber-attacks.  But putting data security measures into medical devices is at odds with the direction the medical device industry wants to take its product lines.  The trends are toward smaller and less power-hungry devices.  Adding data security measures could increase power demands, increase battery sizes, and thus increase device size.  Nevertheless, I believe that starting the process of putting data security measures into medical devices has merit.

I received a well-reasoned response arguing that hacking medical devices was highly unlikely and that research funding on security measures for medical devices would be money best spent elsewhere.  That response started a thought process to develop a threat scenario addressing his points.

I reviewed my earlier article on "hacking medical devices," http://medicalremoteprogramming.blogspot.com/2010/04/how-to-hack-grandpas-icd-reprise.html.  I revisited the paragraph in that article regarding the motivation for hacking a medical device: an extortion scheme.

When I wrote that article, I did not have any particular scheme in mind.  It was speculation based more on current trends.  Furthermore, I did not see other motivations as particularly viable: data theft (there is not much money or value in stealing someone's implant data) or killing a specific person (there are easier ways to do this, although it might make a good murder mystery).

I did come up with a scenario, and when I did, it was chilling.





The Threat Scenario

First, as I had previously suggested, the motivation for hacking medical devices would be extortion.  The target of the extortion would be the medical device companies.  Before getting into the specifics of the extortion scenario, you need to understand some of the technologies and devices involved.

The wireless communication of interest occurs between a "base station" and a wirelessly enabled implanted device, as shown in the figure below.

The base station need not be at a permanent location, but could be a mobile device (such as with the Biotronik Home Monitoring system).  The base station in turn communicates with a large enterprise server system operated by the medical device company.


The two systems communicate using wireless or radio communication.  For example, St. Jude Medical uses the MICS band, a band designated by the FCC for medical devices in the range of 400 MHz.  To ensure that battery usage for communications is minimal, the maximum effective range is stated as 3 meters.  (However, I have seen a clear connection established at greater than 3 meters.)


In general, the implant sends the telemetry data it has collected to the base station, and the base station sends operating parameters to the implant.  Changing the operating parameters of the medical device is known as reprogramming the device; those parameters define how the implant operates and the way it exerts control over the organ to which it is connected.


Device Dialogue of Interest to Hackers

As you probably have guessed, the dialogue of interest to those with criminal intent is the one between the base station and the device.  The "trick" is to build a device that looks like a legitimate base station to the medical device.  This means that the bogus device will have to authenticate itself with the medical device and transmit and receive signals that the device can interpret.  In an earlier article (http://medicalremoteprogramming.blogspot.com/2010/03/how-to-hack-grandpas-icd.html), I discussed an IEEE paper (http://uwnews.org/relatedcontent/2008/March/rc_parentID40358_thisID40398.pdf**) in which the authors constructed a device that performed a successful spoofing attack on a wireless Medtronic ICD.  So, based on the paper, we know it can be done.  However, we also know that it was done at a distance of 5 cm, a point aptly made in a comment on my "How to Hack Grandpa's ICD" article.


Could a Spoofing/Reprogramming Attack be Successful from Greater than 5 cm or Greater than 3 meters?


I believe the answer to the question posed above is "yes."  Consider the following lines of reasoning ...
  1. As I mentioned earlier, base stations and medical devices communicate at distances of 3 meters and can communicate at greater distances.  The limitation is power.  Another limitation is the quality of the antenna in the base station.  The communication distance could be increased with improvements to the antenna and with amplification of the received signal.
  2. The spoofing/reprogramming attack device could be constructed to transmit at significantly greater power levels than a current base station.  (Remember, this is something built by a criminal enterprise; it need not abide by rules set by the FCC.)  Furthermore, only a limited number of these systems, maybe as few as one or two, would need to be constructed.  I shall explain why later.
  3. A base station can be reverse-engineered.  Base stations can be obtained by a variety of means, medical devices can be stolen from hospitals, and documentation about the communication between the medical device and the base station can be obtained.
Thus, I believe that a device emulating a base station could successfully perform a spoofing/reprogramming attack from a significant distance from the target.  The question is: what is to be gained from such an attack?


Attack Motivations


Extortion: Earlier, in another article, I suggested that the motivation would be extortion: money, and lots of it.  I think the demands would likely be in the millions of US dollars.

In this scenario, the criminal organization would contact the medical device companies and threaten to attack their device patients.  The criminal organization might send device designs to substantiate its claims of being able to injure or kill device patients, and/or send the targeted company news reports of sudden, unexplained changes in medical devices that have caused injuries or deaths in device patients.


Market Manipulation: Another strategy would be to manipulate the stock prices of medical device companies through short-selling the stock.  In this scenario, the criminal organization would create a few base station spoofing/reprogramming systems.  Market manipulation, such as placing the value of the stock at risk, could also be part of the extortion scheme.




Book of Interest: Hacking Wall Street: Attacks And Countermeasures (Volume 2)


In another article I'll discuss how someone might undertake an attack.




** Halperin, D., Heydt-Benjamin, T., Ransford, B., Clark, S., Defend, B., Morgan, W., Fu, K., Kohno, T., and Maisel, W. "Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses," IEEE Symposium on Security and Privacy, 2008, pp. 1-14.

How to Hack Grandpa's ICD: New Developments

A little over a year ago I published a couple of articles in this blog regarding "Hacking Grandpa's ICD." Here are the links: 



I received a bit of flak from some people regarding the unlikelihood of such a thing occurring. I even wrote another article that I never published because I had convinced myself that the ICD hacking scenario would be so unlikely.

Well, suffice it to say that I have changed my mind. It seems that McAfee has taken this seriously. Here are two articles for your consideration.

After this, I'm publishing the article that I had originally decided not to publish.

Sunday, July 18, 2010

HE-75, Usability and When to Prototype and Usability Test: Take 1

Prototyping and Testing will be a topical area where I shall have much to contribute.  Expect numerous articles to appear on this topic.

I had a discussion a few days ago with one of my colleagues who has worked as a user interface designer but has little knowledge of human factors.  He was completely unaware of the concepts of "top-down" and "bottom-up" approaches to user interface design.  I provide for you the essence of that discussion.

Top-Down Approach

The top-down approach begins with a design.  Most often the initial design is a best or educated guess based on some set of principles: aesthetics, "accepted" standards of good design, or something else.  The design is usability and/or acceptance tested in some manner (anywhere from laboratory testing to field-collected data).  In response to the data, the design is reworked, and the process continues.  Recent experience suggests that the top-down approach has become the predominant design methodology, particularly for the development of websites.

Top-down is a valid process, particularly for the deployment of new or unique products where a failed design does not lead to serious consequences.  It can get a design into users' hands more quickly.  The problem with a top-down approach (even when practiced correctly) is that it relies on successive approximations to an ill-defined or unknown target.  To some degree it is similar to throwing darts blindfolded with some minimal correction information provided after each throw.  The thrower will eventually hit the bull's-eye, but it may take lots and lots of throws.

The top-down approach may have a side benefit in that it can lead to novel and innovative designs.  It can also have the opposite effect, when designs are nothing more than "knock-offs" of others' designs.  I have seen both come out of the top-down approach.

Bottom-Up Approach

HE-75 teaches the use of a bottom-up approach, in which one first defines and researches the targeted user population.  Contextual Inquiry is also a bottom-up approach.  Since I have already discussed researching the targeted user population in depth, I'll not cover it here.

With the bottom-up approach, the target is clear and understood, and tailoring a design to the user population(s) should be a relatively straightforward process.  Furthermore, the bottom-up approach directly addresses the usefulness issue with hard data and, as such, is more likely to lead to the development of a system that is not only usable but useful.

Useful vs. Usable

I'll address this topic more deeply in another article.  Suffice it to say that usability and usefulness are distinctly different system qualities.  A system may be usable (that is, the user interface may require little training and be easy to use) while the system or its capabilities are not useful.  Or, and this is what often happens, particularly with top-down approaches, much of what the system provides is not useful or is extraneous.

Personal Preference

I am a believer in the bottom-up approach.  It leads to the development of systems that are both usable and useful sooner than the top-down approach does.  It is the only approach I would trust when designing systems where user error is of particular concern.  The top-down approach has its place; I have used it myself and will continue to use it.  But in the end, I believe the bottom-up approach is superior, particularly in the medical field.

Monday, May 3, 2010

HE-75 Topic: Risk Management

One more HE-75 topic before proceeding into design and design-related activities: risk management.

Reading HE-75, you will note that the document continually discusses risk management and reducing risk.  In fact, the entire document is fundamentally about reducing risk: the risks associated with a poor or inappropriate design.

If you drive a car, especially if you have been driving for more than a decade or two, you will note that driving a car with well-designed controls and well-laid-out displays seems inherently easier than driving one that is poorly designed.  Furthermore, it has been demonstrated time and again that driving safety increases when a driver has been provided with well-designed controls and displays; driving becomes less risky for everyone concerned.

Car makers now see safety as a selling point.  (Look at a car built in the 40s, 50s, or 60s and you'll note how few safety features it included.)  Manufacturers are beginning to include driver-error detection systems in their luxury models.  For example, one manufacturer has a system that signals the driver when another vehicle occupies the space the driver wants to move into.  One of the qualities of a well-designed user interface is the ability to anticipate the user, identify and trap errors or potential user errors, and provide a means or path for preventing or correcting the error without serious consequences.  Car manufacturers have been moving in this direction.  I suggest that the adoption of HE-75 will be the FDA's way of pushing medical manufacturers in the same direction.


Risk Management: Creating a Good Design and Verifying It


My many blog postings on HE-75 will address the specifics of how to create a good design and verify it, and the process of incorporating these design and verification processes into a company's risk management processes.  In this posting I want to address two issues at a high level.


First, I want to address what a good design is and how to create it.  Creating a good design requires a process such as the one outlined by HE-75.  I am often amused at hiring managers and HR people who want to see a designer's portfolio while having no conception of how the designs were created.  A good design for a user interface is not artistry; it is the result of an effective process.  It should not only look good, but it should enable users to perform their tasks effectively and with a minimum of errors.  Furthermore, it should anticipate users, trap errors, and prevent serious errors from occurring.  And finally, it should provide users with paths or instructions for correcting errors.  This is what HE-75 teaches and what it instructs researchers and designers to do.  And to that end, the design process should reduce risk.  Think this is not possible?  Then I suggest you spend some time in the cockpit of a commercial airliner.  It is possible.


Second, HE-75 teaches that design verification should be empirical and practiced often throughout the design process.  This is an adjunct to classic risk management, which tends to be speculative or theoretical in that it relies on brainstorming and rational analysis.  HE-75 teaches that medical device and system manufacturers should not rely on opinions alone, although opinions provided by subject-matter experts can provide valuable guidance.  HE-75 instructs that subjects drawn from the targeted population(s) should be used to guide and test the design at each stage of the process.  This becomes the essence of risk management and risk reduction in the design of user interfaces.



Additional Resources

I have this book in my library.  It provides some good information, but it's not comprehensive.  Unfortunately, it's the only book I know of in this field.  
















These books I do not own, but I provide the links for information purposes.  I am surprised at how few books there are in the field of medical risk management.  It may go a long way toward explaining the large number of medical errors, especially the ones that injure or kill patients.

Risk Management Handbook for Health Care Organizations, Student Edition (J-B Public Health/Health Services Text) 

Medical Malpractice Risk Management