Showing posts with label Medical Errors. Show all posts

Saturday, April 4, 2015

UK Perspective Regarding FDA Regulatory Requirements

A LinkedIn colleague posted a link to this article. I read it and found it interesting enough to post the link and comment on it. It's from a UK publication and discusses the FDA regulatory process as it relates to human engineering requirements for device approval for commercialization.

Here's the link:
http://www.emdt.co.uk/daily-buzz/what-are-fda-usability-testing-requirements-device-approval?cid=nl.qmed01.20150325

In addition, I provide my own perspective on the article in the "Commentary" section below. I do not critique the article. I only attempt to expand on a few points from it.

But first, a brief summary of the article.

Article Summary


Medical errors have become an increasing concern of the FDA. I became interested in medical errors when I was a consultant at St. Jude Medical Cardiac Rhythm Division in Sylmar, CA. During my time at St. Jude (2009-2010), deaths by medical error were being reported at 100,000 to 120,000 per year. Last year, I posted links to two articles that stated that deaths by medical errors could be closer to 400,000 per year. (http://medicalremoteprogramming.blogspot.com/2014/07/rip-death-by-medical-error-400000-year.html)

The FDA has noted that a large proportion of medical errors can be attributed to poorly designed medical device user interfaces. Since a fundamental mission of the FDA is increasing patient safety and reducing injuries and fatalities in the practice of medicine, the FDA has begun placing greater emphasis on improving the usability of medical device user interfaces.

This article documents the FDA's increasing emphasis on usability and human factors issues by showing the increasing frequency with which companies seeking medical device clearance for the US market mention the terms "usability" and "human factors." Figure 1 from the article clearly shows the increasing usage of these terms in company filings.



The focus should be on the trends, not the absolute numbers, because not all filing documents have been included in the count. But the trend clearly shows that companies are increasingly using the terms "usability" and "human factors" in their filings with the FDA. The two figures that follow suggest the degree to which companies have incorporated the FDA-prescribed human factors engineering process and design guidance documentation.

The documents listed below are specifically targeted to defining and supporting the human factors engineering process and the development of the Human Engineering File that's included as part of a company's filing to the FDA.


  • ISO 62366, Medical Devices - Application of Usability Engineering to Medical Devices
  • AAMI / ANSI HE75:2009, Human Factors Engineering - Design of Medical Devices (General)

I'll discuss the documents above in greater detail and describe how they're intended to fit within the human factors engineering process when developing medical devices.


  • IEC 60601-1-6 Medical electrical equipment - Part 1-6 General requirements for Safety - Collateral standard: Usability
  • IEC 60601-1-8 Ed. 1, Medical Electrical Equipment - Part 1-8: General Requirements for Safety - Collateral Standard: Alarm Systems - Requirements, Tests and Guidance - General Requirements and Guidelines for Alarm Systems in Medical Equipment (General)

The two documents above are engineering standards. They're engineering specifications that medical devices must meet. They are technical and specific.

I show Figure 3 from the article before showing Figure 2. 



The increasing referencing of 60601-1-8 is not surprising given the increased emphasis on safety. My real interest is in the significant increase in references to ISO 62366. As mentioned, this is a process standard that lays out how human factors engineering should be engaged to reduce "use errors." The emphasis in this standard is on the reduction of risk. Risk management is extremely well embedded in the medical device design and engineering process. It would seem that from a cultural perspective, ISO 62366 fits with the medical device engineering process. 

I want to contrast the dramatic, increasing references to ISO 62366 with the references to AAMI/ANSI HE75 shown in Figure 2 below.



References to AAMI/ANSI HE75 rise and fall from 2010 to 2013 instead of showing the steady upward trend that you see with ISO 62366 in Figure 3. I would like to emphasize that ISO 62366 and AAMI/ANSI HE75 should be considered companion documents. (I'll expand on this in the Commentary section below.)

Commentary


The article does support the contention that the FDA and the companies it regulates are paying increasing attention to usability and human factors. Whether they're paying enough attention is another matter entirely. As new medical devices are introduced we should see two things. First, the use error rate for the newly introduced medical devices (once users have adapted to them) should decline in relation to other similar devices currently in use. Second, we should see the number of deaths and injuries per year from medical errors begin to decline. This will take time to detect.

Without a doubt, the push by the FDA to define a human engineering process in the design and testing of medical devices, and to press for testing under actual or simulated conditions, is needed. In many ways the FDA is mirroring processes that have already been adopted by the US Department of Defense (DoD) in the area of human engineering. Admittedly, the DoD doesn't always get it right, but there is an understanding within the DoD that it is important ... life-saving, battle-winning important ... to ensure that those at the controls can do their jobs quickly, effectively and with as few errors as possible. So from that standpoint, the FDA has adopted processes from programs that have proven effective. But the FDA has just passed the starting line. And much more will be required going forward.

ISO 62366 vs AAMI/ANSI HE75

As I mentioned earlier, ISO 62366 and AAMI/ANSI HE75 should be considered complementary or companion documents. HE75 is a much larger document than 62366 and includes a significant amount of device design guidance and guidelines. 62366 is almost entirely a process document devoted to directing how to go about managing the research and development process of a medical device. In addition, the focus of 62366 is managing risk, specifically the risk of use errors.

I found it interesting that references to HE75 were not increasing at the same rate as references to 62366. I would have expected Figures 2 and 3 to have a similar appearance with respect to 62366 and HE75, in large part because the documents significantly overlap. In fact, I might reasonably have expected references to HE75 to outpace 62366 because HE75 also includes design-specific guidelines.

One possible reason that references to HE75 have not accelerated in the same way as references to 62366 may be that the European Union has not adopted HE75, so it's not required for medical devices that will be marketed in the EU (CE). (I am currently unaware of the regulatory requirements of other countries on this matter.) Medical device companies are international companies, and the documents that they file in one country are generally the same in each country. Thus, since the EU hasn't adopted HE75, references to HE75, and HE75's use as a foundational process and design document, may be lower.

DESIGN RATIONALE

I'm not sure that this is true yet, but I am certain it will be true at some point in the future. I believe that the FDA will hold companies to account for their user interface designs. I believe that the FDA will demand that companies clearly define how they arrived at their user interface designs and show that those designs are well-grounded in empirical evidence.

This is what I mean ... the FDA will demand that the design choices ... these include: controls, placement of controls, number of controls, actions performed by controls, the way a control responds, methods for interacting with the device (e.g., touch screen, buttons, mouse), size of the display, etc. ... for medical device user interfaces be grounded in empirical data.

Commercial websites are often designed by graphic artists. Often the design of webpages reflects the artist's aesthetic sensibilities. Layouts appear the way they do because they look good.

I believe that the FDA will require that user interface designs for medical devices have an empirically grounded design rationale. Companies will be required to point to specific research findings to justify the design and the design choices that they made. Furthermore, as the design of the user interface evolves with each iteration of testing, the FDA will require that changes to the design be based on research findings.

Finally, I believe that soon if it is not occurring already, that the FDA will require:

  1. That companies submit documentation showing in detail the full evolutionary design process, beginning from product inception, including ...
  2. Detailed pre-design research ... population(s), method(s), research questions and rationale, etc. ... as well as the findings and what they suggest for the design of the user interface
  3. A design that includes a full discussion of the design rationale ... why it was designed the way it was ... 
  4. A detailed description of the evolution of the design that includes full and clear justification(s) for each change in the design ... with changes grounded in empirical data 
  5. A full description of the pre-commercialization testing process and methods ... with a clear justification for why this testing meets FDA testing requirements
  6. And a complete and clear analysis of the testing data.
What I'm suggesting above is that the process of designing and testing a medical device user interface should be more than going through the prescribed steps, collecting the data, doing the tests, etc. There should be a clear thread that ties all the steps together. At any step, one should be able to point back to the previous steps for the rationale that explains why the user interface was designed to appear and operate the way it does ... to this point.
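One way to picture the "clear thread" described above is as a set of traceability records, where every design decision cites the empirical findings that justify it. A minimal sketch in Python follows; the field names and example content are my own illustrative inventions, not an FDA-prescribed format.

```python
# Sketch of a design-rationale traceability record: each user interface
# decision carries the empirical findings that justify it, so a reviewer
# can trace any feature back to its evidence. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Finding:
    study_id: str     # identifier of the pre-design or iteration study
    summary: str      # what the study found

@dataclass
class DesignDecision:
    feature: str                  # e.g., "explicit mg vs mg/kg unit label"
    rationale: str                # why it was designed this way
    evidence: list = field(default_factory=list)  # supporting Findings

    def is_grounded(self) -> bool:
        """A decision counts as empirically grounded only if it cites evidence."""
        return len(self.evidence) > 0

# Hypothetical example: a labeling decision backed by a usability finding.
f = Finding("U-07", "Pilot study: users missed unlabeled unit toggles")
d = DesignDecision("explicit mg vs mg/kg unit label",
                   "prevent unit-confusion dosing errors", [f])
print(d.is_grounded())  # True
```

A record like this makes the audit question mechanical: any decision whose evidence list is empty is a gap in the thread.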

As near as I can tell, what I described above is more rigorous than is currently required by the FDA. However, I believe that it would be in any company's best interest to follow what I've suggested because there may come a time when the FDA's enforcement becomes more rigorous. 

Another reason may be lawsuits. If a company can show that it went beyond the FDA's regulatory requirements at the time, those suing would likely have less of a chance of collecting damages. And if damages were awarded, they may likely be lower. Also, if the company went beyond the FDA requirements, it would be likely that there would be fewer people injured and that should lower damages.

FINALLY

This article has been a springboard for me to discuss a number of topics related to human engineering for medical device user interfaces. This topic will remain a central part of this blog. I'll return to this within a week or two and discuss in depth other topics related to the human engineering process for medical device user interfaces. 




Friday, March 27, 2015

Welch Allyn Published Patent Application: Continuous Patient Monitoring

I decided to review this patent application in light of the New York Times opinion piece I commented on. Here's the link to my commentary: http://medicalremoteprogramming.blogspot.com/2015/03/new-york-times-opinion-why-health-care.html

Also, I've gone back to the origins of this blog ... reviewing patents. The first patent I reviewed was one from Medtronic. Here's the link: http://medicalremoteprogramming.blogspot.com/2009/09/medtronics-remote-programming-patent.html

The issue of particular interest was the high "false alarm" rate reported by the author, which would lead medical professionals to disregard warnings generated by their computer systems. I wrote that I wanted to follow up on the issue of false alarms.

The patent application (the application has been published, but a patent has not yet been granted) describes an invention intended to 1) perform continuous automated monitoring and 2) lower the rate of false alarms.

Here are the details of the patent application so that you can find it yourself if you wish:



The continuous monitoring process from a technical standpoint is not all that interesting or new. What is interesting is the process they propose to lower the false alarm rate, and the question of whether that process will in turn raise the false negative rate.

Proposed Process of Lowering False Alarms

As mentioned in my earlier article, it appears that false alarms have been a significant issue for medical devices and technology. Systems that issue too many false alarms generate warnings that are often dismissed or ignored, or waste the time and attention of caregivers who spend time and energy responding to false alarms. This patent application is intended to reduce the number of false alarms. However, as I mentioned earlier, can it do that without increasing the number of false negatives, that is, failures to detect a real event where an alarm should be going off?

Having worked through all the details of the patent application and tried to make sense of what they're conveying, I believe the following is the essence of the invention:


  • A measurement from a sensor indicates an adverse patient condition and an alarm should be initiated.
  • Before the alarm is initiated, the system cross-checks against other measurements that are: 
              1) from another sensor measuring essentially the same physiological condition as the
                  sensor that detected the adverse condition; the measurement from the second sensor
                  would either confirm the alarm condition or indicate that an alarm condition should
                  not exist; or
              2) from another sensor or sensors taking physiological measurements that would confirm
                  the alarm condition from the first sensor or indicate that an alarm condition should
                  not exist.

In this model at least two sensors must provide measurements that point to an alarm state.
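The two-sensor confirmation model above can be sketched in a few lines of Python. This is only an illustration of the logic as I read it from the application; the normalized "alarm scores" and thresholds are hypothetical, not values from the patent.

```python
# Sketch of the cross-check rule: raise an alarm only if the primary
# sensor exceeds its alarm threshold AND at least one independent
# measurement also points to an alarm state. Thresholds are hypothetical.

def confirmed_alarm(primary_reading, confirming_readings, threshold):
    """Return True only when the primary sensor and at least one
    confirming sensor both indicate an alarm condition."""
    if primary_reading <= threshold:
        return False  # primary sensor sees no adverse condition
    # Cross-check: any confirming measurement over its own limit
    # validates the alarm; otherwise suppress it as a likely false alarm.
    return any(reading > limit for reading, limit in confirming_readings)

# Primary sensor fires (1.0 > 0.5) but the confirming sensor does not
# (0.2 <= 0.5), so the alarm is suppressed as a suspected false alarm.
print(confirmed_alarm(1.0, [(0.2, 0.5)], threshold=0.5))  # False
# Both sensors fire, so the alarm is confirmed.
print(confirmed_alarm(1.0, [(0.8, 0.5)], threshold=0.5))  # True
```

Note that this rule is strictly more conservative about alarming than a single sensor: it can only suppress alarms, never add them, which is exactly why the false negative question below matters.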

Acceptable Model for Suppressing False Alarms and Not Increasing False Negatives?

Whatever you do in this domain of detecting adverse patient conditions, you don't want to lower your accuracy in detecting the adverse condition, that is, increase your false negative rate.

So is this one way of at least maintaining your current level of detecting adverse events while lowering your false alarm rate? On the face of it, I don't know. But it does appear that it might be possible.

One of the conditions the inventors suggest initiates false alarms is those times when patients move or turn over in their beds. This could disconnect a sensor or cause it to malfunction. A second sensor taking the same measurement may be functioning normally and produce a measurement from the patient indicating that nothing is wrong. The alarm would be suppressed ... although, if a sensor were disconnected, one would expect a disconnected-sensor indicator to be turned on.

Under the conditions the inventors suggest, it would appear that cross-checking measurements might reduce false positives without increasing false negatives. I would suggest that care be taken to ensure that false negative rates do not rise. With the array of new sensors and sensor technology becoming available, we're going to need to do a lot of research. Much of it could be computer simulations to identify those conditions where an adverse patient condition goes undetected or is suppressed by cross-checking measurements.
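The kind of computer simulation suggested above can be roughed out quickly. The sketch below is a toy Monte Carlo comparison of a one-sensor alarm rule against a two-sensor cross-check; the signal levels, noise, and threshold are invented for illustration and say nothing about any real device.

```python
# Toy Monte Carlo sketch: estimate how requiring a second confirming
# sensor changes false alarm and false negative counts. All numbers
# (signal levels, noise, threshold) are invented for illustration.
import random

random.seed(1)
THRESHOLD = 1.0   # alarm threshold on the measured value
NOISE = 0.6       # sensor noise (standard deviation)
TRIALS = 100_000

def reading(true_value):
    """One noisy sensor measurement of the true physiological state."""
    return true_value + random.gauss(0, NOISE)

single_fp = cross_fp = single_fn = cross_fn = 0
for _ in range(TRIALS):
    # Half the trials simulate a healthy patient (true signal 0.0),
    # half an adverse condition (true signal 2.0).
    adverse = random.random() < 0.5
    true_value = 2.0 if adverse else 0.0
    s1, s2 = reading(true_value), reading(true_value)
    single = s1 > THRESHOLD                          # one-sensor rule
    cross = (s1 > THRESHOLD) and (s2 > THRESHOLD)    # cross-check rule
    if adverse:
        single_fn += not single
        cross_fn += not cross
    else:
        single_fp += single
        cross_fp += cross

print("false alarms:  single =", single_fp, " cross-checked =", cross_fp)
print("missed events: single =", single_fn, " cross-checked =", cross_fn)
```

Under these made-up parameters, cross-checking cuts false alarms sharply but misses more real events, which is exactly the trade-off that must be measured rather than assumed before a scheme like this is deployed.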

Post Script

For those who do not know, I am named on numerous patents and patent applications (pending patents). Not only that, I have written the description section of a few patent applications. So I have a reasonable sense of what is and what is not patentable ... this in spite of the fact that I'm an experimental, cognitive psychologist, and we're not generally known for our patents.

So, what is my take on the likelihood that this application will be issued a patent? My sense is not likely. As far as I can tell there's nothing really new described in this application. The core of the invention, the method for reducing false alarms, is not new. Cross-checking, cross-verifying measurements to determine if the system should be in an alarm state is not new. As someone who has analyzed datasets for decades, one of the first things one does with a new dataset is check for outliers and anomalies - these are similar to alarm conditions. One of the ways to determine whether an outlier is real is to cross-check it against other measures to determine whether they're consistent with and predictive of the outlier. I do not see anything here that is particularly new or that passes what is known in the patent review process as the "obviousness test." For me, cross-checking measures does not reach the grade of patentability.
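The dataset cross-checking I describe above is routine enough to show in a few lines. In this toy example, a candidate outlier in one measure is trusted only if a correlated measure moves with it; all data values are fabricated for illustration.

```python
# Toy illustration of cross-checking an outlier: a suspect heart-rate
# reading is checked against a correlated measure (respiration rate).
# The data are fabricated for illustration only.

def z_score(value, values):
    """Standard score of `value` relative to the list `values`."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return (value - mean) / sd if sd else 0.0

heart_rate  = [62, 65, 64, 63, 61, 66, 64, 180]   # last point: suspect
respiration = [14, 15, 14, 16, 15, 14, 15, 15]    # stays normal

suspect = len(heart_rate) - 1
hr_extreme = abs(z_score(heart_rate[suspect], heart_rate)) > 2
rr_extreme = abs(z_score(respiration[suspect], respiration)) > 2

# An extreme heart rate alongside a normal respiration rate suggests a
# sensor artifact rather than a real event - the same logic the patent
# application uses to suppress an alarm.
print(hr_extreme and not rr_extreme)  # True
```

This is precisely why the technique feels obvious to anyone who has cleaned data: the cross-check is the first thing an analyst reaches for, not an invention.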







Wednesday, March 25, 2015

New York Times Opinion: Why Health Care Tech Is Still So Bad

This was an opinion piece published 21 March 2015 in the New York Times, written by Robert M. Wachter, Professor of Medicine, University of California, San Francisco, and author of "The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age."

Here's the link to the article: http://www.nytimes.com/2015/03/22/opinion/sunday/why-health-care-tech-is-still-so-bad.html?smid=nytcore-ipad-share&smprod=nytcore-ipad

I have commented on several quotes from the article.

1. "Even in preventing medical mistakes — a central rationale for computerization — technology has let us down. (My emphasis.) A recent study of more than one million medication errors reported to a national database between 2003 and 2010 found that 6 percent were related to the computerized prescribing system.

At my own hospital, in 2013 we gave a teenager a 39-fold overdose of a common antibiotic. The initial glitch was innocent enough: A doctor failed to recognize that a screen was set on “milligrams per kilogram” rather than just “milligrams.” But the jaw-dropping part of the error involved alerts that were ignored by both physician and pharmacist. The error caused a grand mal seizure that sent the boy to the I.C.U. and nearly killed him.

How could they do such a thing? It’s because providers receive tens of thousands of such alerts each month, a vast majority of them false alarms. (My emphasis.) In one month, the electronic monitors in our five intensive care units, which track things like heart rate and oxygen level, produced more than 2.5 million alerts. It’s little wonder that health care providers have grown numb to them."

Comments: Before I read the third paragraph, I was thinking, "How can you blame the computer when it provided you with an alert regarding the prescribing error that you made?" 

It is well known that when systems produce a high percentage of false alarms, those alarms over time will be ignored or discounted. I consider this a devastating indictment. We must do better.

I have been a human factors engineer and researcher for decades. One of the mantras of human factors is preventing errors. That's central to what we're about. But if the systems we help engineer generate false alarms at a rate that has our users ignoring the correct ones, then we have failed and failed miserably.

I think the problem of false alarms requires further research and commentary.


2. "... despite the problems, the evidence shows that care is better and safer with computers than without them."

Commentary: This is nice to read, but we as medical technologists need to do better. We really need to follow up on the repercussions of the technology we create when it's deployed and used in the field.


3. "Moreover, the digitization of health care promises, eventually, to be transformative. Patients who today sit in hospital beds will one day receive telemedicine-enabled care in their homes and workplaces."

Commentary: I agree. Of course that's a central theme of this blog.


4. "Big-data techniques will guide the treatment of individual patients, as well as the best ways to organize our systems of care. ... Some improvements will come with refinement of the software. Today’s health care technology has that Version 1.0 feel, and it is sure to get better.

... training students and physicians to focus on the patient despite the demands of the computers.

We also need far better collaboration between academic researchers and software developers to weed out bugs and reimagine how our work can be accomplished in a digital environment."

Commentary: Agreed again. But I believe that technologists can't just dump these systems into healthcare environments without significant follow-up research to ensure that these systems provide or suggest the correct treatment programs and effectively monitor patients. Investment in systems like these will be cost effective and improve lives, but only if the necessary level of care and follow-up is performed.


5. "... Boeing’s top cockpit designers, who wouldn’t dream of green-lighting a new plane until they had spent thousands of hours watching pilots in simulators and on test flights. This principle of user-centered design is part of aviation’s DNA, yet has been woefully lacking in health care software design."

Commentary: All this is true. And as noted above, it would be a good idea to do more extensive research on medical systems before we deploy them to the field as well. That this is not done may be a regulatory issue: the FDA has not required the kind of rigorous research performed in aircraft cockpit design. Right now, all that appears to be required is a single verification and a single validation test before allowing commercialization. I think it would be valuable for regulators to require more research in real or simulated settings before allowing companies to commercialize their products.

Or, require more extensive follow-up research. Grant companies the right to sell their medical products on a probationary basis for (say) one year after receiving initial commercialization certification. During that year, the company must perform follow-up research on how their medical product performs in real environments. If there are no significant problems ... such as an overly abundant number of false alarms ... then the product would no longer be on probation and would be considered fully certified for commercialization.
However, if significant problems emerge, the FDA could:

a) continue to keep the product in a probationary status pending correction of those problems and another year of follow-up research or

b) it could require the withdrawal of the product from sale. A product that had been withdrawn would have to go through the entire commercialization certification process just as if it were a new product before commercialization and sale would be allowed.


A final thought ... I think there's a reality in commercial aviation that is not true in medicine. If commercial aircraft killed and injured as many people as are killed and injured by medical practitioners, commercial aviation would come to a halt. People would refuse to fly because they would perceive it to be too dangerous. But if you're sick, you have little choice but the clinic, ER or hospital.







Sunday, July 27, 2014

RIP: Death by Medical Error, 400,000 a Year in the US

In 2013 there were over 35,000 traffic deaths in the US. That's over 10 fatalities per 100,000. (Scotland appears the safest at just over 3 per 100,000; Germany, by contrast, has a rate of less than 5 per 100,000; Argentina has over 12 per 100,000; and South Africa is the "winner" with over 27 per 100,000.)

Contrast that with an estimated 400,000 deaths by medical errors ... that's around 130 deaths per 100,000. I don't know about you, but for me that raises real concerns. When I got into the field of human engineering for medical devices in 2009, I saw reports of around 100,000 per year in the US. I found that shocking. Now it's being reported that medical errors are killing four times more people than we originally believed? It takes your breath away.
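The per-100,000 figures above are easy to verify. The quick check below assumes a 2013 US population of roughly 316 million (my approximation, not a figure from the article).

```python
# Quick check of the rates quoted above. The 2013 US population of
# ~316 million is my own approximation for illustration.
US_POPULATION = 316_000_000

def rate_per_100k(deaths):
    """Deaths per 100,000 population."""
    return deaths / US_POPULATION * 100_000

print(round(rate_per_100k(35_000), 1))   # traffic deaths: ~11 per 100,000
print(round(rate_per_100k(400_000), 1))  # medical errors: ~127 per 100,000
```

The arithmetic bears out the comparison in the text: the estimated medical error death rate is on the order of ten times the traffic fatality rate.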

The article that reports this finding is:

James, John T. (2013) A new evidence-based estimate of patient harms associated with hospital care. Journal of Patient Safety, Lippincott Williams & Wilkins.

Here's a link to the article that reports this, with a portion of the abstract. The article is free and worth reading.

Link: http://journals.lww.com/journalpatientsafety/Fulltext/2013/09000/A_New,_Evidence_based_Estimate_of_Patient_Harms.2.aspx

Abstract (Redacted)

Based on 1984 data developed from reviews of medical records of patients treated in New York hospitals, the Institute of Medicine estimated that up to 98,000 Americans die each year from medical errors. The basis of this estimate is nearly 3 decades old; herein, an updated estimate is developed from modern studies published from 2008 to 2011.
[T]he true number of premature deaths associated with preventable harm to patients was estimated at more than 400,000 per year. Serious harm seems to be 10- to 20-fold more common than lethal harm.

Another article that suggests that death by medical error may still be underreported

Here's a recent article in Baltimore's THE SUN that describes how Maryland hospitals are underreporting their medical errors. This is likely just the tip of the iceberg nationally on this story. 


This article cites the James article above.

Thursday, June 30, 2011

Are Electronic Prescription Systems Failing to Trap Errors?

A Brief Introduction

Before I jump into the topic of electronic prescription systems, I want to make known how I came across the article I am about to post. I am creating a website that includes a substantial portion of the human factors related work I have produced over the years. That website's home page also includes posted articles related specifically to human factors - including articles related to medical errors, a topic of interest to me.

The new human factors website is not yet ready for viewing. I have just created a usable home page. The bulk of the work is to come. I'll post the address when it's reached a usable state.


What's Going on with Electronic Prescription Systems?


Bloomberg News recently reported the results of a study indicating that prescription errors are as frequent when prescriptions are entered through an electronic prescription system as when they are handwritten. Here is the address of the Bloomberg article:
http://www.bloomberg.com/news/2011-06-29/errors-occur-in-12-of-electronic-drug-prescriptions-matching-handwritten.html

I have not yet had the opportunity to read the study. However, I shall and I'll continue to update this blog on this topic based on what I find. 


With respect to the Bloomberg article, this quote caught my eye:


"The most common error was the omission of key information, such as the dose of medicine and how long or how many times a day it should be taken, the researchers said. Other issues included improper abbreviations, conflicting information about how or when to take the drug and clinical errors in the choice or use of the treatment, the researchers said."


I have been a human factors professional for a long time, and as I read the quote above my jaw dropped. The errors described in the quote are some of the most fundamental, easily trappable, and correctable errors. It seems beyond belief that an electronic prescription system would allow a user to make such errors. In the environments where I have worked, I have designed and installed subsystems to ensure that users do not make the kinds of errors described in the Bloomberg article. When I have a chance to read the report, I'll cover specific errors, their detection and correction, and means to ensure that patients are not harmed.
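To show how trappable these errors are, here is a minimal sketch of the kind of completeness check an electronic prescription system could run before accepting a prescription. The field names and example prescription are illustrative inventions of mine, not taken from the study.

```python
# Minimal sketch of trapping the most common error the study reports:
# omission of key information (dose, frequency, duration). A real system
# would also validate ranges, units, and drug interactions; field names
# here are illustrative only.

REQUIRED_FIELDS = ("drug", "dose", "dose_unit", "frequency", "duration")

def validate_prescription(rx):
    """Return the list of required fields that are missing or empty.
    An empty list means the prescription may be submitted."""
    return [f for f in REQUIRED_FIELDS if not rx.get(f)]

# Hypothetical prescription with the duration omitted.
rx = {"drug": "amoxicillin", "dose": 500, "dose_unit": "mg",
      "frequency": "3x daily"}
errors = validate_prescription(rx)
print(errors)  # ['duration'] - the system should block submission
```

A check this simple would have trapped the omission errors the researchers describe; that such errors reach pharmacists at all is what I find so hard to believe.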


Here's a link to another publication that reported on the same study:


http://www.eurekalert.org/pub_releases/2011-06/bmj-oep062811.php