
Monday, March 30, 2020

30 March 2020 Projected Number of US Deaths from Now into April 2020

I have two models that I'm currently using to project the number of COVID-19 deaths for the first half of April. Here are the numbers: 


Date         Aggressive    Index    Conservative
3/30/2020         3,179       31           2,849
3/31/2020         4,017       32           3,600
4/1/2020          5,074       33           4,548
4/2/2020          6,410       34           5,746
4/3/2020          8,097       35           7,259
4/4/2020         10,229       36           9,171
4/5/2020         12,922       37          11,587
4/6/2020         16,324       38          14,639
4/7/2020         20,621       39          18,495
4/8/2020         26,050       40          23,366
4/9/2020         32,908       41          29,521
4/10/2020        41,572       42          37,296
4/11/2020        52,516       43          47,120
4/12/2020        66,342       44          59,530
4/13/2020        83,807       45          75,210
4/14/2020       105,871       46          95,020
4/15/2020       133,743       47         120,047


The first column is of course the date. The "Aggressive" column gives the projected deaths from what I call the aggressive model, which over the last week has been more accurate than the conservative model. The conservative model's projected deaths are in the "Conservative" column. (The numbers in the "Index" column are index values used in the computations.) 

Both models predict that around 4/4 or 4/5 the number of US deaths will be near 10K, and that around 4/14 or 4/15 it will be around 100K. I'm hoping that things will crest soon and that we will never see these numbers. But for now I don't see the curves beginning to asymptote. If they do not begin to asymptote, we could see a million deaths in the US from COVID-19 by the end of April or the first part of May.
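For the curious, here is roughly how such projections can be computed. This is a minimal sketch in Python, not the models themselves: it assumes simple exponential growth with a constant daily growth factor of about 1.263, back-calculated from the table itself (e.g., 4,017 / 3,179 ≈ 1.264).

def project(start_deaths: float, daily_growth: float, days: int) -> list[float]:
    """Project cumulative deaths for each of the next `days` days."""
    return [start_deaths * daily_growth ** t for t in range(1, days + 1)]

# Starting from the 3/30 values in the table above:
aggressive = project(3_179, 1.2636, 16)     # 3/31 through 4/15
conservative = project(2_849, 1.2636, 16)
print(f"4/15 aggressive:   {aggressive[-1]:,.0f}")    # roughly 134,000
print(f"4/15 conservative: {conservative[-1]:,.0f}")  # roughly 120,000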

UPDATE: 

I stopped showing my projections above at 4/15 because the numbers were frightening enough. However, the Federal Government has announced that even if everything goes perfectly from now on, we should expect up to 200,000 deaths. I believe they made that statement because they believe the predicted number of deaths from COVID-19 is pretty much "baked in." Given that, I decided to show more of what my models predict, this time through 4/28. I begin with today's actual number of deaths (the "Actual" column) and go from there. To review, the aggressive model is the "Aggressive" column and the conservative model is the "Conservative" column. Both models predict over 2 million deaths by the end of April. I believe the limiting factor on this many deaths in this short a time is the number of people an infected person can infect. At some point the pool of people who can still be infected becomes increasingly limited because so many are already infected. Where that comes into play is something I haven't had a chance to work through, but I suspect it is somewhere on the order of 30 to 50 percent of the population becoming infected. (A simple sketch of this saturation effect follows the table.) 


Date          Actual    Aggressive    Index    Conservative
3/30/2020      3,148         3,179       31           2,849
3/31/2020                    4,017       32           3,600
4/1/2020                     5,074       33           4,548
4/2/2020                     6,410       34           5,746
4/3/2020                     8,097       35           7,259
4/4/2020                    10,229       36           9,171
4/5/2020                    12,922       37          11,587
4/6/2020                    16,324       38          14,639
4/7/2020                    20,621       39          18,495
4/8/2020                    26,050       40          23,366
4/9/2020                    32,908       41          29,521
4/10/2020                   41,572       42          37,296
4/11/2020                   52,516       43          47,120
4/12/2020                   66,342       44          59,530
4/13/2020                   83,807       45          75,210
4/14/2020                  105,871       46          95,020
4/15/2020                  133,743       47         120,047
4/16/2020                  168,952       48         151,667
4/17/2020                  213,432       49         191,615
4/18/2020                  269,621       50         242,085
4/19/2020                  340,603       51         305,848
4/20/2020                  430,271       52         386,405
4/21/2020                  543,547       53         488,181
4/22/2020                  686,644       54         616,764
4/23/2020                  867,414       55         779,214
4/24/2020                1,095,773       56         984,453
4/25/2020                1,384,253       57       1,243,749
4/26/2020                1,748,678       58       1,571,343
4/27/2020                2,209,044       59       1,985,221
4/28/2020                2,790,609       60       2,508,112
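To make the saturation point concrete, here is an illustrative sketch (not my models) of logistic growth: the same daily growth rate as above, but damped by the shrinking pool of people left to infect. The population figure and parameters are placeholders for illustration.

US_POPULATION = 330_000_000  # illustrative figure

def logistic_next(infected: float, daily_rate: float, capacity: float) -> float:
    """One day of logistic growth; the (1 - infected/capacity) factor
    models the shrinking pool of people left to infect."""
    return infected + daily_rate * infected * (1.0 - infected / capacity)

infected = 1_000_000
for day in range(60):
    infected = logistic_next(infected, 0.2636, US_POPULATION)
print(f"Infected after 60 days: {infected:,.0f}")

Early on this tracks pure exponential growth almost exactly; as the infected fraction approaches the 30 to 50 percent range, the daily increases visibly flatten.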

Thursday, February 13, 2020

Article: Establishing Trust in Wearable Medical Devices

Many of the topics covered in this article I have covered in this blog, most recently my discussion of signal detection and the Apple Watch (https://medicalremoteprogramming.blogspot.com/2019/12/signal-detection-and-apple-watch.html), which I suggest you read after reading this article from Machine Design.


Here are a few quotes from the article.

To say that we can get personal health insight from continuous monitoring presumes a “chain of trust.” In other words: 
  • The interpretation of any data must not only be accurate but reliable. The challenge lies in handling “borderline” data. Any interpreting strategy or algorithm faces data sets that it finds ambiguous. For an algorithm to be reliable, users must be able to quantitatively understand its detection limits and error characteristics.
  • The data and/or its interpretation must reliably reach the decision-maker for it to become actionable.
  • The data must be correctly associated with historical records of the patient for it to have context.
  • The data must be proven to be authentic to trigger any meaningful action.

However, using clinical equipment to capture vital signs that are representative of the wearable use cases is often difficult and sometimes inaccurate. To avoid a rash of false positives or false negatives, one must carefully select the population of test subjects and carefully develop the representative use cases. It’s also important to compare data from the patient’s own history or baseline, keeping in mind that this baseline isn’t static as the patient ages and undergoes other changes.
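The closing point of the quote -- that the patient's baseline isn't static -- is worth making concrete. Here is a toy sketch of my own (not from the article): an exponentially weighted moving average lets the baseline drift slowly with the patient, while sharp deviations from it are flagged. The smoothing factor and threshold are arbitrary illustrations.

def update_baseline(baseline: float, reading: float, alpha: float = 0.02) -> float:
    """Nudge the baseline toward each new reading; small alpha = slow drift."""
    return (1 - alpha) * baseline + alpha * reading

def is_anomalous(baseline: float, reading: float, threshold: float = 15.0) -> bool:
    """Flag readings that deviate sharply from the patient's own baseline."""
    return abs(reading - baseline) > threshold

baseline = 72.0  # e.g., a resting heart rate in bpm
for reading in [71, 74, 73, 75, 98, 72]:
    if is_anomalous(baseline, reading):
        print(f"flag: {reading} bpm vs. baseline {baseline:.1f}")
    baseline = update_baseline(baseline, reading)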


Monday, November 25, 2019

Supplement to Part IV Submissions to FDA: Risk Management and Use Errors

Additional Thoughts:

As part of your risk analysis and use error identification, you should identify the circumstances or origin of the reported use error. 


  • Origins of reports of use errors, in order of likely relevance:
    1. Field-reported use errors: These require special attention because they were discovered in use by real users under actual conditions, which is why field-reported use errors are the most important. They deserve particular consideration if the use error was responsible for any harm or occurs significantly more often than originally predicted. Consider performing a root cause analysis to determine why the use error occurred, factoring in the use and environmental conditions and who made the error. Be sure to determine which assumptions regarding use, conditions of use and predicted user characteristics were violated. 
    2. Errors reported from empirical studies: These are use errors observed under laboratory or other testing conditions defined by researchers, committed by members of the expected user population(s) who have the expected requisite level of education and training. In testing sessions the conditions of use, including the environment, have been structured and manipulated by the researchers. The use errors detected may therefore be valid but narrow in scope with respect to the situations of use, the use environment, actual user characteristics including education and training, and so on. 
    3. Analysis based:
      • Scenarios: A scenario is a set of connected events with a beginning, a series of possible steps or actions, and an end point. Scenarios are generally derived from real-world knowledge of the environment, the people involved -- their characteristics such as education, training, responsibilities, experience, etc. -- and the kinds of actions they would take with the systems and devices in development in order to accomplish a particular task. Scenarios also consider possible paths and actions that would lead to making a use error. Scenarios, particularly worst-case scenarios, can be an effective means of detecting possible use errors. With newly designed products and systems, this may be one of the first means of identifying possible use errors and determining their possible harm. Nevertheless, scenario-based use errors come from thought experiments and lack empirical validation.
      • Brainstorming: Brainstorming is an unstructured, free-form process of analysis for capturing possible use errors. Brainstorming sessions can be a particularly useful means of capturing conditions and use errors that may not have been seen or considered using other methods. However, use errors derived from brainstorming are not based on empirical evidence. Nevertheless, those who uncover use errors by brainstorming often have high levels of expertise and experience in the technical area under consideration.
Each process has its own value, and every method of detecting or originating possible use errors should be considered. When reporting use errors in the submission, I suggest that where each reported use error originated be included in the final submitted report, and that a summary of where use errors originated be included in the narrative.
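Here is a minimal sketch of that bookkeeping, with hypothetical record contents; the origin categories mirror the list above.

from collections import Counter
from enum import Enum

class Origin(Enum):
    FIELD = "field report"
    EMPIRICAL = "empirical study"
    SCENARIO = "scenario analysis"
    BRAINSTORM = "brainstorming"

# Hypothetical use errors, each tagged with where it was identified:
use_errors = [
    ("battery inserted backwards", Origin.FIELD),
    ("low-reservoir alert missed", Origin.EMPIRICAL),
    ("wrong dose unit selected", Origin.SCENARIO),
]

# Summary of origins for the submission narrative:
for origin, count in Counter(origin for _, origin in use_errors).items():
    print(f"{origin.value}: {count}")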





Monday, November 11, 2019

Submission of the Human Engineering File to the FDA and Other Regulatory Bodies

Background

About a year ago I was asked what I thought was the most difficult phase of the medical device human engineering process. Frankly, I'd never before considered such a question. I could not identify any phase of the process that I considered more difficult than any other. I considered whether early-phase formative research and testing, or risk identification, use errors and risk management, would be the most difficult. No; for me these phases have always proven to be the most interesting in the process. Challenging, yes; difficult, no.

The questioner had an answer in mind: she believed that validation testing was the most difficult. I disagreed. I believe that validation testing is most often the least difficult of all the phases of the process. Why? Because validation comes at the end of the entire process and is based on all of the work done previously to reach the point of validation testing. Thus the procedure for a validation test should flow naturally and easily from the earlier work.

In the end, we agreed to disagree. However, the question never left me. 

I have finally come up with the answer. The most difficult phase of the process is the creation of the narrative for the regulatory reviewers. What I refer to is more than putting together the folder of documents from all of the human engineering related activities. It is the construction of a cohesive and understandable narrative that gives a reviewer an overall view of the reasoning and logic behind the steps taken and the procedures performed, demonstrating that the human engineering process was sound and that the system resulting from it will be safe to use. This is the most difficult phase of the human engineering process, especially if writing the narrative comes at the end.

Human Engineering Pre-Market Submission

Here is the outline of what the FDA expects in a Human Engineering Pre-Market Submission as provided by the FDA on their website:

Section 1: Conclusion
The device has been found to be safe and effective for the intended users, uses and use environments.
  • Brief summary of HFE/UE processes and results that support this conclusion
  • Discussion of residual use-related risk

Section 2: Descriptions of intended device users, uses, use environments, and training
  • Intended user population(s) and meaningful differences in capabilities between multiple user populations that could affect user interactions with the device
  • Intended use and operational contexts of use
  • Use environments and conditions that could affect user interactions with the device
  • Training intended for users

Section 3: Description of device user interface
  • Graphical representation of device and its user interface
  • Description of device user interface
  • Device labeling
  • Overview of operational sequence of device and expected user interactions with user interface

Section 4: Summary of known use problems
  • Known use problems with previous models of the subject device
  • Known use problems with similar devices, predicate devices or devices with similar user interface elements
  • Design modifications implemented in response to post-market use error problems

Section 5: Analysis of hazards and risks associated with use of the device
  • Potential use errors
  • Potential harm and severity of harm that could result from each use error
  • Risk management measures implemented to eliminate or reduce the risk
  • Evidence of effectiveness of each risk management measure

Section 6: Summary of preliminary analyses and evaluations
  • Evaluation methods used
  • Key results and design modifications implemented in response
  • Key findings that informed the human factors validation test protocol

Section 7: Description and categorization of critical tasks
  • Process used to identify critical tasks
  • List and descriptions of critical tasks
  • Categorization of critical tasks by severity of potential harm
  • Descriptions of use scenarios that include critical tasks

Section 8: Details of human factors validation testing
  • Rationale for test type selected (i.e., simulated use, actual use or clinical study)
  • Test environment and conditions of use
  • Number and type of test participants
  • Training provided to test participants and how it corresponded to real-world training levels
  • Critical tasks and use scenarios included in testing
  • Definition of successful performance of each test task
  • Description of data to be collected and methods for documenting observations and interview responses
  • Test results: Observations of task performance and occurrences of use errors, close calls, and use problems
  • Test results: Feedback from interviews with test participants regarding device use, critical tasks, use errors, and problems (as applicable)
  • Description and analysis of all use errors and difficulties that could cause harm, root causes of the problems, and implications for additional risk elimination or reduction

Each of the items listed will include one or more documents that address, in some manner, each of the sub-points under that item. You could place these documents into a folder, identify where they belong and submit this to the FDA or another regulatory body. However, I advocate something more: write an overarching narrative that takes the reader through not only what was performed but why. The narrative can be, and probably should be, based on the outline above. (I'm sure there are exceptions, but the outline above should be your starting point.)

The narrative should provide a comprehensive flow that interconnects the phases and explains the reasoning for what was done, including the rationale behind the design and operation of the system and why it is appropriate for use by the identified user population as well as other likely populations who might encounter it.

Why Write a Narrative?

Is a narrative required as part of a submission? From all that I can tell: no, it's not a requirement as part of a submission to regulatory bodies. 

However, consider the fact that if you can't explain the logic of what you did to yourselves, how hard will it be for a reviewer to comprehend? And if the reviewer can't comprehend what you did -- including the reasoning and logic behind it -- could your submission be at risk for rejection? The answer is "yes," you may be putting your submission at risk for rejection.

The narrative is analogous to a completed jigsaw puzzle. A human engineering file without a narrative is analogous to just the jigsaw puzzle pieces. Yes, everything is there, but what is it supposed to be? 

Submitting a human engineering file that includes a comprehensive narrative can ensure that your submission is understandable: to you as well as your reviewers. It can ensure that no gaps remain and that nothing that should have been included in your submission is left out. Again, going back to the jigsaw puzzle analogy, you don't know that you're missing a piece or pieces until you've assembled the puzzle.

The narrative provides reviewers with a framework for understanding what you've done. Interestingly enough, this will likely minimize the questions reviewers might have about your submission, and will likely minimize the likelihood that you'll get questions you cannot answer.

One more thing to note: if your narrative is clear and comprehensive, it's likely that the reviewer or reviewers will read no further or will simply scan the foundational documents to ensure that they do in fact support what is stated in the narrative. This could speed regulatory review and acceptance. 

More Articles on This Topic

I'll be writing a series of articles on the topic of human engineering file submission narratives over the next week or two. I'll focus on specific areas of the narrative and discuss some of what I have done with regards to putting together narratives for submission to regulators. 



Friday, September 14, 2018

Apple Watch 4 -- FDA Announcement: Statement from FDA Commissioner Scott Gottlieb, M.D., and Center for Devices and Radiological Health Director Jeff Shuren, M.D., J.D., on agency efforts to work with tech industry to spur innovation in digital health

The FDA just provided what amounts to a "shout-out" to companies that design and manufacture intelligent, wearable devices that include medically-related monitoring, and specifically to the Apple Watch 4.

Here's the link to the FDA statement: https://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm620246.htm

And here's an interesting quote from the announcement:

... [There have come] a new swath of companies that are investing in these new opportunities [e.g., wearable, intelligent monitoring devices measuring medically-related, physiological characteristics with analysis capabilities.] These firms may be new to health care products and may not be accustomed to navigating the regulatory landscape that has traditionally surrounded these areas. A great example is the announcement of two mobile medical apps designed by Apple to work on the Apple Watch. One app creates an electrocardiogram, similar to traditional electrocardiograms, to detect the presence of atrial fibrillation and regular heart rhythm, while the other app analyzes pulse rate data to identify irregular heart rhythms suggestive of atrial fibrillation and notify the user. The FDA worked closely with the company as they developed and tested these software products, which may help millions of users identify health concerns more quickly. Health care products on ubiquitous devices, like smart watches, may help users seek treatment earlier and will truly empower them with more information about their health.

---------------
I find it interesting that Dr. Gottlieb states that the Apple Watch analyzes pulse rate data, not the ECG, to detect "rhythms suggestive of atrial fibrillation." Yeah, that's a way to do it, but analysis of the ECG is a much better way. When I do a deep dive on the Apple Watch 4, I'll look into this and questions like it.
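For illustration, here is a toy example of how pulse data alone could suggest atrial fibrillation. This is my speculation, not Apple's algorithm: AF classically produces an "irregularly irregular" pulse, so high variability of the intervals between beats is a crude proxy. The threshold is an arbitrary illustration.

from statistics import mean, stdev

def irregularity(intervals_s: list[float]) -> float:
    """Coefficient of variation of inter-beat intervals (higher = more irregular)."""
    return stdev(intervals_s) / mean(intervals_s)

steady = [0.80, 0.82, 0.79, 0.81, 0.80, 0.82]    # steady sinus-like rhythm
erratic = [0.55, 1.10, 0.70, 0.95, 0.50, 1.20]   # AF-like variability

for label, beats in [("steady", steady), ("erratic", erratic)]:
    cv = irregularity(beats)
    print(f"{label}: CV = {cv:.2f}, suggestive of AF: {cv > 0.15}")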


Friday, March 27, 2015

Welch Allyn Published Patent Application: Continuous Patient Monitoring

I decided to review this patent application in light of the New York Times Opinion piece I commented on. Here's the link to my commentary: http://medicalremoteprogramming.blogspot.com/2015/03/new-york-times-opinion-why-health-care.html

Also, I've gone back to the origins of this blog ... reviewing patents. The first patent I reviewed was one from Medtronic. Here's the link: http://medicalremoteprogramming.blogspot.com/2009/09/medtronics-remote-programming-patent.html

The issue of particular interest was the high "false alarm" rate reported by the author, which can lead medical professionals to disregard warnings generated by their computer systems. I wrote that I wanted to follow up on the issue of false alarms.

The patent application (the application has been published, but a patent has not yet been granted) describes an invention intended to 1) perform continuous automated monitoring and 2) lower the rate of false alarms.

Here are the details of the patent application so that you can find it yourself if you wish:



The continuous monitoring process, from a technical standpoint, is not all that interesting or new. What is interesting is the process they propose for lowering the false alarm rate, and the question of whether that process will in turn raise the false negative rate.

Proposed Process of Lowering False Alarms

As mentioned in my earlier article, false alarms have been a significant issue for medical devices and technology. Systems that issue too many false alarms produce warnings that are often dismissed or ignored, or that waste the time and attention of the caregivers who respond to them. This patent application is intended to reduce the number of false alarms. However, as I mentioned earlier, can it do that without increasing the number of false negatives, that is, failures to detect a real event when an alarm should be going off?

Having worked through the details of the patent application and tried to make sense of what it conveys, the following is what I believe to be the essence of the invention:


  • A measurement from a sensor indicates an adverse patient condition and an alarm should be initiated.
  • Before the alarm is initiated, the system cross-checks against other measurements that are: 
    1. from another sensor measuring essentially the same physiological condition as the sensor that detected the adverse condition, where the second sensor's measurement would either confirm the alarm condition or indicate that an alarm condition should not exist; or
    2. from another sensor or sensors taking physiological measurements that would either confirm the alarm condition from the first sensor or indicate that an alarm condition should not exist.

In this model at least two sensors must provide measurements that point to an alarm state.
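As a minimal sketch of that logic (my reading of the application, not its claimed implementation; the sensor names and values are hypothetical):

def confirmed_alarm(primary_abnormal: bool, corroborating_abnormal: list[bool]) -> bool:
    """Raise an alarm only if at least one other measurement also points
    to an alarm state; otherwise suppress it as a likely false alarm."""
    return primary_abnormal and any(corroborating_abnormal)

# Example: an SpO2 sensor reads dangerously low, but heart rate and
# respiration sensors look normal (say, the SpO2 sensor was dislodged):
print(confirmed_alarm(True, [False, False]))  # False -> alarm suppressed
print(confirmed_alarm(True, [True, False]))   # True  -> alarm raised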

Acceptable Model for Suppressing False Alarms and Not Increasing False Negatives?

Whatever you do in the domain of detecting adverse patient conditions, you don't want to lower your accuracy in detecting the adverse condition; that is, you don't want to increase your false negative rate.

So is this one way of at least maintaining your current level of detecting adverse events while lowering your false alarm rate? On the face of it, I don't know. But it does appear that it might be possible.

One of the conditions the inventors suggest can initiate false alarms is when patients move or turn over in their beds. This could disconnect a sensor or cause it to malfunction. A second sensor taking the identical measurement may still be functioning normally and return a measurement indicating that nothing is wrong. The alarm would be suppressed ... although, if a sensor were disconnected, one would expect a disconnected-sensor indicator to be turned on.

Under the conditions the inventors suggest, it would appear that cross-checking measurements might reduce false positives without increasing false negatives. I would suggest that care be taken to ensure that false negative rates do not rise. With the array of new sensors and sensor technology becoming available, we're going to need to do a lot of research. Much of it could be computer simulations to identify those conditions where an adverse patient condition goes undetected or is suppressed by cross-checking measurements.

Post Script

For those who do not know, I am named on numerous patents and patent applications (pending patents). Not only that, I have written the description section of a few patent applications. So I have a reasonable sense of what is and what is not patentable ... this in spite of the fact that I'm an experimental, cognitive psychologist, and we're not generally known for our patents.

So, what is my take on the likelihood that this application will be issued a patent? My sense is: not likely. As far as I can tell there's nothing really new described in this application. The core of the invention, the method for reducing false alarms, is not new. Cross-checking or cross-verifying measurements to determine whether the system should be in an alarm state is not new. As someone who has analyzed datasets for decades, one of the first things one does with a new dataset is to check for outliers and anomalies -- these are similar to alarm conditions. One of the ways to determine whether an outlier is real is to cross-check against other measures to determine whether they're consistent with and predictive of the outlier. I do not see anything here that is particularly new or that passes what is known in the patent review process as the "obviousness test." For me, cross-checking measures does not reach the grade of patentability.