Friday, November 20, 2009

Remote Monitoring Equals Healthier Patients

I know I promised an article discussing the Biotronik studies.  However, I just came across a brief piece that I wanted to share: a short summary of a study showing that the introduction of remote monitoring can substantially reduce hospital admissions.  Here's the link:  http://articles.icmcc.org/2009/11/20/remote-monitoring-yields-healthier-patients/.

This is the kind of article that provides additional supporting evidence of the benefits of remote monitoring, both to patients and to the bottom line of health care.  Furthermore, as I remarked in http://medicalremoteprogramming.blogspot.com/2009/11/virtual-doctor-visit-washington-post.html, the people I've known have wanted to stay out of hospitals.  So this should be considered a win all the way around.

Thursday, November 19, 2009

Body Area Networks

This is one of the best recent articles I have seen discussing emerging technologies and standards for Body Area Networks.  It's published by ZDNet.  Here's the link: 7 things you should know about Body Area Networks (BANs).

In my next article, I'll discuss the two TRUST articles on the Biotronik Home Monitoring system.

 

Tuesday, November 17, 2009

The Virtual Doctor Visit: Washington Post

I grew up around elderly people.  My parents were middle-aged when I was born, my grandparents were elderly, and many of my parents' friends were elderly.  I cannot think of one person who said that they liked being in a hospital.  A continual fear of my parents, grandparents, and my parents' elderly friends was wasting away in either a hospital or a nursing home.  Death was a better alternative.  Not that they wanted to die, but that they did not want to die in the confines of a hospital or nursing home.

This is an article published today (Tuesday, 17 November 2009) in the Washington Post that discusses remote monitoring as an alternative to a hospital admission.  There's a trial underway to determine if remote monitoring can provide the kind of information that physicians require to keep people from being admitted to the hospital.  It's care in the home.  Here's the link: The Virtual Doctor Visit.

Here's an update on the Digital Plaster trial: http://tech.kikil.com/2009/11/medical-debut-for-smart-band-aid/.

Monday, November 16, 2009

Maintaining Communication Security

Having a secure channel is particularly important for remote monitoring and remote programming.  A recently published article describes a company that has taken an interesting approach to the problem.  Here's the link: Boosting the security of implantable devices.

I am the inventor of a data communications security technology and a founder of a security company.  (I am currently a silent partner.)  So, I have an interest in security technology and systems.  In later articles, I'll cover some of the issues involved in maintaining communications security.
 

Friday, November 13, 2009

Biotronik Home Monitoring: Update

Biotronik Home Monitoring recently received the industry's first European CE Mark.  Here is the link to one of the publications that announced this: Biotronik Home Monitoring Receive Industry Approval.  The approval appears to be based on the studies conducted by Varma that are referenced in the article.  I hope to have more information on this subject in the near future.

Thursday, November 12, 2009

Near Future: Remote Monitoring and Programming

This article will focus on a system of remote medical monitoring and remote programming as shown in the figure below.





I've discussed elements of this design in earlier posts, so I won't go into detail about things I have already covered.  This is particularly true of the communications model involving a mobile server and a central server.  The model I show in the figure is more "ready" for commercial deployment in that there are multiple, redundant Central Servers in multiple locations.  This is in keeping with the telecommunications philosophy of achieving near-perfect connectivity through the backbone systems.

Another addition is WiMax (the 802.16 standard; for more information: WiMax Wikipedia), which is now being commercially deployed.  This adds another viable channel over which to send data.  As I mentioned before, the system that we developed was able to move traffic over one or all channels simultaneously, and traffic could be rerouted as channels were acquired or lost.
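
To make the idea concrete, here's a minimal sketch of how a sender might choose among several data channels and reroute traffic as channels appear or drop.  This is my own illustration, not the system we actually built; the channel names and the send interface are assumptions.

```python
# Hypothetical sketch: routing traffic over whichever channels are currently available.
# Channel names and the transport interface are illustrative, not a real product API.

class Channel:
    def __init__(self, name):
        self.name = name
        self.available = False

    def send(self, payload):
        # A real implementation would hand the payload to the radio/network stack.
        print(f"sending {len(payload)} bytes over {self.name}")

class MultiChannelTransport:
    def __init__(self, channels):
        self.channels = channels

    def set_availability(self, name, available):
        # Called when a channel is acquired or lost (e.g., WiMax signal appears or drops).
        for ch in self.channels:
            if ch.name == name:
                ch.available = available

    def send(self, payload):
        live = [ch for ch in self.channels if ch.available]
        if not live:
            raise RuntimeError("no channel available; queue for retry")
        # Simplest policy: send over every live channel for redundancy.
        for ch in live:
            ch.send(payload)

transport = MultiChannelTransport([Channel("cellular"), Channel("wifi"), Channel("wimax")])
transport.set_availability("cellular", True)
transport.set_availability("wimax", True)
transport.send(b"device telemetry frame")
```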

The important elements of this design for this discussion are at the ends.  Let's begin at the bottom of the diagram.  A patient could be implanted with multiple devices from multiple manufacturers.  In the diagram I show an insulin pump from Medtronic (http://www.medtronic.com/our-therapies/diabetes-management/index.htm), an ICD from St. Jude Medical (http://www.sjmprofessional.com/Products/US/ICD-Systems/Current-RF-ICD.aspx), and a pacemaker from Boston Scientific (http://www.bostonscientific.com/Device.bsci?page=HCP_Overview&navRelId=1000.1003&method=DevDetailHCP&id=10103841&pageDisclaimer=Disclaimer.ProductPage).  We could include devices from Biotronik (http://www.biotronik.com/portal/home) as well.  The mobile server in the diagram can communicate with all of these devices and address each one individually.  (We have already proven this technology.)  We assume that the data traffic from the devices is bidirectional and that delivery is guaranteed and secure across the connection, to and from the analysis and device servers.
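
As a rough illustration of what "address each device individually" might look like in software (the device IDs, fields, and message format below are hypothetical, invented for the sketch), the mobile server could keep a registry keyed by device identity:

```python
# Illustrative only: a registry that lets the mobile server address each implanted
# device individually. Device IDs and the message format are made up for the sketch.

from dataclasses import dataclass

@dataclass
class ImplantedDevice:
    device_id: str       # unique identity, e.g., manufacturer plus serial number
    manufacturer: str
    device_type: str     # "insulin_pump", "icd", "pacemaker", ...

class MobileServerRegistry:
    def __init__(self):
        self._devices = {}

    def register(self, device: ImplantedDevice):
        self._devices[device.device_id] = device

    def route(self, device_id: str, message: bytes):
        # Deliver a message to exactly one device; in a real system this would go
        # over the short-range link with acknowledgement and encryption.
        device = self._devices[device_id]
        print(f"-> {device.manufacturer} {device.device_type}: {message!r}")

registry = MobileServerRegistry()
registry.register(ImplantedDevice("MDT-001", "Medtronic", "insulin_pump"))
registry.register(ImplantedDevice("SJM-002", "St. Jude Medical", "icd"))
registry.register(ImplantedDevice("BSC-003", "Boston Scientific", "pacemaker"))
registry.route("SJM-002", b"request stored episode data")
```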

Without going into substantial detail, each device has a specific and separate device-managing process running on the mobile server.  Using a "plug-in" architecture, each process communicates with the multi-layered, distributed system that moves data across the network.  Each device has a continuous, virtual connection with its counterpart Analysis and Device Management server to support both remote monitoring and remote programming.
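
Here is a minimal sketch of the "plug-in" idea, assuming (hypothetically) that each device-managing process implements a small common interface that the transport layer discovers and calls.  The class and method names are my own illustration, not the real plug-in contract.

```python
# Sketch of a plug-in style device-manager layer. The interface and class names are
# assumptions for illustration; a real system's plug-in contract may differ.

from abc import ABC, abstractmethod

class DeviceManagerPlugin(ABC):
    """Contract each device-specific manager implements."""

    @abstractmethod
    def device_type(self) -> str: ...

    @abstractmethod
    def poll(self) -> dict:
        """Collect the latest monitoring data from the device."""

    @abstractmethod
    def apply_program(self, parameters: dict) -> None:
        """Apply remote-programming parameters received from the device server."""

class PacemakerManager(DeviceManagerPlugin):
    def device_type(self) -> str:
        return "pacemaker"

    def poll(self) -> dict:
        return {"battery_ok": True, "episodes": []}

    def apply_program(self, parameters: dict) -> None:
        print("pacemaker reprogrammed with", parameters)

def run_once(plugins):
    # The transport layer treats every plugin uniformly and forwards its data
    # to the matching Analysis and Device Management server.
    for plugin in plugins:
        data = plugin.poll()
        print(f"forwarding {plugin.device_type()} data upstream: {data}")

run_once([PacemakerManager()])
```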



The digital plaster (or plastic strips) would generate various types of monitoring data as shown in the diagram.  A single, multi-threaded process could manage any number of strips.  
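
Here is one way (purely illustrative, with simulated strip IDs and readings) that a single multi-threaded process might service many plaster strips, with one worker thread per strip feeding a shared queue:

```python
# Illustrative sketch: one process, one thread per digital plaster strip, with all
# readings funneled into a shared queue. Strip IDs and readings are simulated.

import threading, queue, random, time

readings = queue.Queue()

def read_strip(strip_id: str, samples: int):
    for _ in range(samples):
        # A real strip would be read over its radio link; here we simulate a value.
        readings.put((strip_id, {"temp_c": round(random.uniform(36.0, 37.5), 2)}))
        time.sleep(0.01)

threads = [threading.Thread(target=read_strip, args=(f"strip-{i}", 3)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

while not readings.empty():
    strip_id, value = readings.get()
    print(strip_id, value)
```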

It would be conceivable for the device-managing processes to subscribe to any of the digital plaster processes and send the collected patient data to any or all of the Analysis and Device Management Servers.  The digital plaster strips could collect a variety of types of data from any number of locations.  This would reduce the need to build monitoring capabilities into the devices themselves and could conceivably provide the kind of data the devices could never provide on their own.
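
The subscription idea could be sketched as a simple publish/subscribe hub.  This is an assumption about how it might be wired, not a description of an existing product; the topic names are invented.

```python
# Minimal publish/subscribe sketch: plaster processes publish readings, and any
# device-managing process (or upstream server proxy) can subscribe to them.

from collections import defaultdict

class Hub:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, reading):
        for callback in self._subscribers[topic]:
            callback(topic, reading)

hub = Hub()

# An ICD's managing process subscribes to heart-rate data from the plaster strips,
# then forwards it to its Analysis and Device Management server.
hub.subscribe("plaster/heart_rate", lambda t, r: print("ICD manager forwards", t, r))
hub.subscribe("plaster/heart_rate", lambda t, r: print("analysis server copy", t, r))

hub.publish("plaster/heart_rate", {"bpm": 72})
```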

This system is primarily software-defined, highly flexible, and extensible.  Furthermore, it can incorporate a wide variety of current and future monitoring systems.  I'll continue to update this model as I find more products and technologies to include.

Sunday, November 8, 2009

Remote Monitoring: Predictability

One of the most controversial subjects in measurement and analysis is the concept of predictability.  Prediction does not imply causality or a causal relationship.  It is about an earlier event or events indicating the likelihood of another event occurring.  For example, I've run simulation studies of rare events.  If any of my readers have done this, you'll have noticed that rare events tend to cluster around each other.  This means that if one rare event has occurred, it's likely that the same event will occur again in a relatively short time.
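
To illustrate the clustering effect, here is a minimal simulation of my own (not the studies mentioned above): even when a rare event is independent from trial to trial, short gaps between successive occurrences are a large share of all gaps, which is why the events appear to bunch together.

```python
# Minimal illustration: independent rare events (p = 0.001 per trial) still produce
# many short gaps between successive occurrences, so they look "clustered".

import random
from collections import Counter

random.seed(1)
p, trials = 0.001, 1_000_000
gaps, last = [], None
for t in range(trials):
    if random.random() < p:
        if last is not None:
            gaps.append(t - last)
        last = t

buckets = Counter("short (<500)" if g < 500 else "long (>=500)" for g in gaps)
print(buckets)  # short gaps are a large share even though the mean gap is about 1000
```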

Interestingly, the clustering does not seem to be an artifact of the simulation system.  There are some real-world examples.  Consider the paths of hurricanes.  At any one time, it is rare for a hurricane to make landfall at a particular location.  However, once a hurricane has hit a particular location, the likelihood of the next hurricane hitting in that same general area appears to go up.  I can think of a couple of examples in recent history.  In 1996, hurricanes made landfall twice around the area of Wilmington, NC, and a third hurricane passed nearby.  In 2005, New Orleans was hit solidly twice.  If you look at those two hurricane seasons - 1996 and 2005 - you'll note that they show quite different patterns.  The rare-event paradigm suggests that when the patterns for creating rare conditions are established, they will tend to linger.

In medicine, the objective is to find an event or set of conditions that precedes the event of concern before that event occurs.  For example, an event of concern would be a heart attack.  It is true that once one has had a heart attack, another one could soon follow; the conditions are right for a follow-on event.  However, the objective is to prevent a heart attack - not to wait for a heart attack to occur in order to deal with the next one that is likely to follow.  Physicians employ a variety of means to detect conditions that may indicate an increased likelihood of a heart attack.  For example, cholesterol levels that are out of balance might signal an increased likelihood of having a heart attack.


The problem is that most of the conditional indicators physicians currently employ are weak indicators of an impending heart attack.  The indicators are suggestive.  Let me illustrate with a slot machine.  Assume that hitting the jackpot is equivalent to a heart attack, and each pull of the lever represents another passing day.  On its own, with its initial settings, the slot machine has some probability of hitting a jackpot with each pull of the lever.  However, the settings on the slot machine can be biased to make a jackpot more likely.  This is what doctors search for ... the elevated conditions that make a heart attack more likely.  Making a jackpot more likely does not mean that you're ever going to hit one.  It just increases the likelihood that you will.
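
A back-of-the-envelope sketch of the slot-machine analogy (the numbers are invented purely for illustration): biasing the machine raises the chance of a jackpot over a year of daily pulls, but still does not guarantee one.

```python
# Slot-machine analogy with made-up numbers: each "pull" is a day, a "jackpot" is
# the event of concern. Biasing raises the probability but guarantees nothing.

baseline_daily_risk = 0.0001   # hypothetical unbiased machine
biased_daily_risk = 0.0005     # hypothetical machine with elevated risk factors
days = 365

def chance_of_at_least_one(p, n):
    return 1 - (1 - p) ** n

print(f"baseline: {chance_of_at_least_one(baseline_daily_risk, days):.1%} over a year")
print(f"biased:   {chance_of_at_least_one(biased_daily_risk, days):.1%} over a year")
```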


To compound the problem, the biasing conditions that appear to increase the likelihood of events such as heart attacks are often difficult to clearly assess.  One problem is that apparent biasing indicators or conditions generally don't have a clear causal relationship with the event.  They are indicators; they have a correlative relationship (and not always a strong one), not a causal relationship.  There are other problems as well.  For one, extending conclusions to an individual from data collected from a group is generally considered suspect.  Yet that is what's going on when assessments are performed on individuals: individuals are compared to norms based on data collected from large groups.  Over time and with enough data, norms may come to be considered predictors.  Search the literature and you'll note that many measurements that were once considered predictive no longer are.


The gold standard of prediction is the discovery of a predecessor event or events - something that precedes the watched-for event.  In Southern California everyone is waiting for the great earthquake, and scientists have been attempting to discover a predecessor event for it.  The same goes for detecting a heart attack or other important medical events that threaten one's health.  Two clear problems stand in the way of discovering a clear predecessor event.  The first is finding an event that seems to precede the event of interest.  This is not easy; a review of the literature will inform you of that.  The second is, once you've found what appears to be a predecessor event, what is its relationship to the target event, the event of interest?  Establishing that is often a very long process, and even with effectively predictive predecessor events, the relationship is not always one to one.  One predecessor event may not be the only one that precedes the event of interest; several predecessor events could precede it, or the predecessor event may not always appear before the event of interest.


This ends my discussion of predictability.  Next time ... I'm going to speculate on what may be possible in the near term and how the benefits of remote monitoring and remote programming can be made available relatively inexpensively to a large number of people.


Article update notice

I have updated my article on Digital Plaster.  I have found an image of digital plaster, which I have included, plus a link to one of the early news releases from Imperial College, London, UK.  I shall include Digital Plaster in my next article.

Remote Monitoring: Update to Sensitivity and Accuracy

Before I dive into the subject of predictability (following article), I have an update on one of my previous articles: Remote Monitoring: Sensitivity and Accuracy.  It comes from a discussion I had with a colleague regarding what appeared to be counter-intuitive results.  The issue was the data sampling rate over a fixed period of time.  As the sampling rate increased, accuracy decreased.  Thus, with seemingly more data, accuracy went down.

Going back to the Signal Detection paradigm: the paradigm suggests that, as a rule, increasing the number of data points will reduce false positives (alpha), and reducing false positives was a major objective of this research.  Frankly, for a time I was flummoxed.  Then I realized that I was looking at the problem incorrectly: the problem was with the resolution, or granularity, of the measurement.

The Signal Detection paradigm has as a fundamental assumption the concept of a defined event, or event window, and detecting whether or not a signal is present within that event window.  The increased sampling rate compounded error, particularly false positive errors.  In effect, the system would take two or more samples within the conditions that set off a false positive, thus producing more than one false positive within an event window where only one should have been recorded.

How does one overcome the problem of oversampling - of setting the wrong size of event window?  Here are some things that come to mind:
  • First, recognizing that there's an event-window problem may be the most difficult part.  This particular situation suggested an event-window problem because the results were counter to expectations.  Having primarily a theoretical perspective, I am not the best one to address this issue.
  • Finding event windows may involve a tuning or "dialing-in" process.  However it is done, it may take many samples at various sampling resolutions to determine the best or acceptable level of resolution.
  • Consider adding a waiting period once a signal has been detected.  The hope is that the waiting period will reduce the chances of making a false positive error.  (A minimal sketch of this idea follows this list.)
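
Below is a minimal sketch, under my own assumptions about the data, of collapsing raw detections with a refractory (waiting) period so that a single underlying event is not counted as several detections.  The window length and sample times are invented for the example.

```python
# Sketch: collapse raw per-sample detections into events using a refractory period,
# so oversampling does not turn one underlying event into several counted detections.
# The refractory length and sample times are illustrative assumptions.

def collapse_detections(detection_times, refractory):
    """Keep a detection only if it falls outside the refractory period of the last one."""
    events, last = [], None
    for t in sorted(detection_times):
        if last is None or t - last >= refractory:
            events.append(t)
            last = t
    return events

# Raw detections at a much higher sampling rate: the same episode fires several times.
raw = [12.0, 12.1, 12.3, 47.8, 48.0]
print(collapse_detections(raw, refractory=5.0))  # -> [12.0, 47.8]: two events, not five
```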
On a personal note: I find it amusing that before this I had never encountered a granularity-related issue.  In theory I have understood it, but I had never encountered it in my own research, in part because the research I have performed has always had clear event boundaries.  Nevertheless, within days of writing about Sensitivity and Accuracy and the granularity issue in this blog, I encountered a granularity problem.

Tuesday, November 3, 2009

Sensor Technology: Digital Plaster and Stethoscope

Digital Plaster


Toumaz Technology has announced clinical trials of what they are calling "digital plaster" that should enable caregivers to remotely monitor patients.  In the initial trial it will allow caregivers to remotely monitor patients while they are in the hospital.  However, conceivably patients could carry a mobile monitoring system like the one that I discussed in my article: Communication Model for Medical Devices.

Here is a link to the article on Digital Plaster: http://www.sciencecentric.com/news/article.php?q=09110342-digital-plaster-monitoring-vital-signs-undergoes-first-clinical-trials

Update:  Here's an image of digital plaster from a UK website, to give you an idea of the size of digital plaster and how it is applied.  It's a sensor placed into a standard plastic or cloth strip - simple to apply and disposable.



For more information, here's the link: Imperial College, London, UK.  This is a 2007 article and a good reference point for investigating the technology.

Digital Stethoscope


Another development was the announcement at TEDMED of the digital stethoscope.  Here's the link to the article: http://mobihealthnews.com/5142/tedmed-wireless-health-has-killed-the-stethoscope/.  This article discusses this and other new wireless medical devices that will enable patients to be remotely monitored from virtually anywhere, thus providing the capability to keep people out of hospitals, or in them for shorter periods of time.  Furthermore, these technologies have the capability of improving care while lowering costs.  Again, I think it would be instructive to read my articles on mobile, wireless data communications:  1) Communication Model for Medical Devices and 2) New Communications Model for Medical Devices.

Sunday, November 1, 2009

Remote Monitoring: Sensitivity and Accuracy ... using wine tasting as a model

This article focuses on measurement accuracy, sensitivity, and informativeness.  Sometime later I shall follow with an article that focuses on predictability.

In this article I discuss measurement accuracy, sensitivity, and informativeness in the abstract, using wine tasting as an example.  In later articles, when I drill down into specific measurements provided by remote monitoring systems, I shall refer back to concept-foundation articles such as this one.



For remote monitoring to be a valuable tool, the measurements must be informative.  That is, they must provide something of value to the monitoring process - whether that monitoring process is an informed and well-trained person, such as a physician, or a software process.  However, there are conditions that must first be met before any measurement can be considered informative.

For any measurement to be informative, it must be accurate.  It must correctly measure whatever it was intended to measure.  For example, if the measurement system is designed to determine the existence of a particular event, then it should register that the event occurred and the number of times it occurred.  Furthermore, it should reject, or not respond, when conditions dictate that the event did not occur - that is, it should not report a false positive.  This is something I covered in detail in my article on Signal Detection.  Measurement extends beyond mere detection to measurements tied to a particular scale, e.g., the constituents in a milliliter of blood.
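
As a concrete reminder of the bookkeeping behind accurate detection, here is the standard signal-detection tally (hits, misses, false positives, correct rejections) written as a small function.  The example data are invented; this is not specific to any particular device.

```python
# Standard signal-detection bookkeeping over a series of event windows:
# hit, miss, false positive (false alarm), correct rejection.

def tally(signal_present, detector_said_yes):
    counts = {"hit": 0, "miss": 0, "false_positive": 0, "correct_rejection": 0}
    for present, said_yes in zip(signal_present, detector_said_yes):
        if present and said_yes:
            counts["hit"] += 1
        elif present and not said_yes:
            counts["miss"] += 1
        elif not present and said_yes:
            counts["false_positive"] += 1
        else:
            counts["correct_rejection"] += 1
    return counts

print(tally([True, True, False, False, False], [True, False, True, False, False]))
# -> {'hit': 1, 'miss': 1, 'false_positive': 1, 'correct_rejection': 2}
```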


A constituent of accuracy is granularity.  That is, how fine is the measurement, and is it fine enough to provide meaningful information?  Measurement granularity can often be a significant topic of discussion, particularly when defining similarities and differences.  For example, world-class times in swimming are reported to the hundredth of a second.  There have been instances when the touch-timing system sensed that two swimmers touched the end of the pool simultaneously and recorded identical times.  (I can think of a particular race in the last Olympics that involved Michael Phelps and the butterfly.)  At the resolution of the computer touch-timing system (and I believe it's down to a thousandth of a second), both swimmers touched simultaneously and had identical times.  However, is that really true?  If we take the resolution down to a nanosecond, one-billionth of a second, did they touch simultaneously?

However, at the other end, if measurements are too granular, do they lose their meaningfulness?  This is particularly true when defining what is similar.  It can be argued that with enough granularity, every measurement will differ from every other measurement on that dimension.  So how do we assess similarities?  Assessing similarities (and differences) is vital to diagnosis and treatment.
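
One simple way to make the granularity point concrete (with invented numbers, not a claim about real timing systems): whether two measurements are "the same" depends entirely on the resolution at which you compare them.

```python
# Two swim times that are "identical" at hundredths of a second differ at finer
# resolution. The numbers are invented to illustrate the granularity point.

time_a = 50.580_000_123   # seconds
time_b = 50.580_000_987

def same_at_resolution(a, b, decimals):
    return round(a, decimals) == round(b, decimals)

print(same_at_resolution(time_a, time_b, 2))   # True  -> a tie at hundredths
print(same_at_resolution(time_a, time_b, 9))   # False -> different at nanoseconds
```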


We often make compromises when it comes to issues of granularity and similarity by categorizing.  And oftentimes, categorization and assessments of similarity are context-specific.  This is something we do without thinking: we assess and reassess relative distances.  For example, Los Angeles and San Diego are 121 miles from each other.  (I used Google to find this distance.)  To people living in either city, 121 miles is a long distance.  However, to someone in London, England, these two cities would seem to be nearly in the same metropolitan area.  From a great distance, they appear to be within the same geographic area.



Sensitivity is a topic unto itself.  Since I discussed it at some length when I discussed Signal Detection, I shall keep this discussion relatively short.  In the previous discussion, I covered issues related to a single detector and its ability to sense and reject.  Here I want to add the dimension of multiple detectors and the capability to sense based on multiple inputs.  In this case I am not discussing multiple trials to test a single detector, but multiple measures on a single trial.  Multiple measurements on different dimensions, when combined, can provide greater sensitivity even if each individual measurement system is less accurate and sensitive than a single, better measurement system.  I'll discuss this in more depth in a later article.
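
Here is a small simulation of that point, with invented accuracy numbers of my own: three independent detectors that are each right only 70% of the time, combined by a simple majority vote, are right more often than any one of them alone.

```python
# Simulation with invented numbers: three independent detectors, each correct 70%
# of the time, combined by majority vote, beat any single detector.

import random
random.seed(0)

def detector(truth, accuracy=0.70):
    return truth if random.random() < accuracy else not truth

trials, single_correct, vote_correct = 100_000, 0, 0
for _ in range(trials):
    truth = random.random() < 0.5
    votes = [detector(truth) for _ in range(3)]
    single_correct += (votes[0] == truth)
    vote_correct += ((sum(votes) >= 2) == truth)

print(f"single detector: {single_correct / trials:.3f}")   # roughly 0.70
print(f"majority vote:   {vote_correct / trials:.3f}")     # roughly 0.78
```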


Informativeness ... this has to do with whether the output of the measurement process - its accuracy (granularity) and sensitivity - provides one with anything of value.  And determining the value depends on what you need that measurement to do for you.  I think my example provides a reasonable and accessible explanation.


Wine Tasting - Evaluating Wine


Over the years, people interested in wine have settled on a 1-100 scale - although I do not know of an instance where I have seen anything less than an 80 rating.  (I am not a wine expert by any stretch of the imagination; I know enough to discuss it, that's all.  If you're interested, here's an explanation - keep in mind that they will want to sell you bottles of wine, and some companies may block access: http://www.wine.com/v6/aboutwine/wineratings.aspx?ArticleTypeId=2.)   Independent or "other" wine raters use a similar rating system.  Wine stores all over the US often have their own wine rater who "uses" one of these scales.  In theory, you'll note that the scales are reasonably similar.  In practice, they can be quite different.  Two 90 ratings from different wine raters don't always mean the same thing.


So, what is a buyer to do?  Let's look at wine rating in a mechanistic way.  Each wine rater is a measuring machine who is sensitive to the various constituents of a wine and how those constituents combine to provide an experience.  Each rating machine provides us with a single number and often a brief description of the tasting experience.  But for most people buying wine, it's the number that's most important - and the number can often lead to the greatest disappointment.  When we're disappointed, the measurement has failed us.  It lacks informativeness.

How to remedy the disappointment of expectation and, oftentimes, overpayment?  I can think of four ways:
  1. Taste the wine yourself before you buy it.  The wine should satisfy you, and you can determine whether it's worth the price.  However, I've met many who are not satisfied with this option for a variety of reasons, ranging from not trusting their own taste or "wine knowledge" to knowing that they are not in a position to taste the wide variety of wines available to professional wine tasters, and thus being concerned about "missing out."  Remote monitoring presents a similar situation.  A patient being remotely monitored is not in the presence of the person doing the monitoring, so the experience of seeing the patient along with the measurement values is missing.  However, remote monitoring can provide a great deal of information about many patients without the need to see each individual.  The catch is that the person doing the monitoring needs to trust the measurements from remote monitoring.
  2. Find a wine rater who has tastes similar to yours.  This might take some time or you might get lucky and find someone who likes wine the way you like it.  Again, this all boils down to trust.
  3. Ask an expert at the wine store.  The hope is that the person at the store will provide you with more information and ask about your own tastes and what you're looking for.  Although this is not experiential information, you are provided with more information on more dimensions, with the ability to re-sample on the same or different dimensions (i.e., ask a question and receive an answer).  In this sense, you have an interactive measurement system.  (At this juncture, I have added, by implication, remote programming to the mix.  Remote programming involves adjusting, tuning, or testing additional remotely monitored dimensions.  In this sense, the process of remote monitoring can be dynamic and inquiry-driven.  This is a topic for later discussion.)
  4. Consolidate the ratings of multiple wine raters.  Often several wine raters have rated the same wine.  This can get fairly complicated: in most cases not all wine raters have rated the same wine, and you'll probably get a different mix of raters for each wine.  This too may involve some level of tuning based on the "hits" and "misses."  (A minimal sketch of one way to do this follows this list.)
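
Here is a minimal sketch of option 4, with made-up raters, wines, and scores: put each rater's scores on a common footing (here, by centering on that rater's own average) before averaging, since two 90s from different raters don't mean the same thing.

```python
# Illustrative consolidation of ratings from multiple wine raters. Raters, wines,
# and scores are invented. Each rater's score is centered on that rater's own mean
# so that a habitually generous rater does not dominate the consolidated score.

from statistics import mean

ratings = {  # rater -> {wine: score}
    "Rater A": {"wine 1": 92, "wine 2": 88, "wine 3": 90},
    "Rater B": {"wine 1": 95, "wine 3": 96},
    "Rater C": {"wine 2": 85, "wine 3": 89},
}

rater_means = {r: mean(scores.values()) for r, scores in ratings.items()}

def consolidated(wine):
    offsets = [scores[wine] - rater_means[r]
               for r, scores in ratings.items() if wine in scores]
    return mean(offsets)  # positive = better than that mix of raters' typical wine

for wine in ("wine 1", "wine 2", "wine 3"):
    print(wine, round(consolidated(wine), 2))
```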
This ends this discussion of measurement.  Measurement is the foundation of remote monitoring.  For remote monitoring, what it measures, the accuracy and sensitivity of those measurements, and whether those measurements are informative are key to its value.  We've also seen a place for remote monitoring as a means of getting at interesting measurements - changing measurement from a passive to an active, didactic process.


Next time I'll discuss a recent development with respect to physiological measuring systems.  Here's a link to an article that I believe many will find interesting: http://mobihealthnews.com/5142/tedmed-wireless-health-has-killed-the-stethoscope/