Sunday, August 1, 2010

HE-75 Topic: Cleaning Up the Mess

I received a reminder this week of what usability professionals are often called on to do: cleaning up the mess created by a failed process. Somehow, the people responsible for designing an awful, unusable and, in some cases, useless user interface expect the usability expert to come in, take one look, and create a beautiful user interface. This is absurd! It was the "nightmare" come true, something related to one of my other postings: HE-75 Topic: Design First and Ask Questions Later.

Writing from my own perspective, there is nothing a usability professional likes doing less than correcting a failed design that resulted from a failed design process. This week I was asked to save a group of programmers and user interface designers from the monstrosities they had created. What was particularly strange was that the leader of the project thought I could redesign the system simply by looking at what they had created. It was bizarre. Unfortunately, I had to deliver several harsh messages regarding the design process and the design, and they were not well received. (Nevertheless, that is my job.)

Here is the point I want to make to anyone who reads this: process and the resulting design should be considered two sides of the same coin. A good design process nearly always results in a good design. A nonexistent or poor design process leads to a poor design. HE-75's design process can serve as a foundation for designing user interfaces in nearly any industry, particularly those where the potential for harm is severe. Where I am currently working, I plan to use HE-75 as one of the foundation documents for setting user interface design standards. As I mentioned, I am not currently working in the medical or medical device industry. However, I have come to believe that in my current industry, too, the level of potential harm can be significant. Thus, I shall incorporate HE-75.
 
Next time, I'll review some of the literature that might be of use to the community.

Saturday, July 24, 2010

Advanced Technology

I mentioned in an earlier article that I have moved out of medical devices for the time being.  However, I have not moved away from remote monitoring or remote programming (it is called "remote configuration" where I am now).

We have been given the go-ahead to explore a variety of new and, some may consider, off-the-beaten-path technologies.  Although I shall not be able to discuss specific studies or approaches, I shall be able to discuss how some technologies not currently used by the medical and medical device communities might be useful to them.

I shall post updates on this topic from time to time.

Here are some platforms to consider for mobile technology.  (This is not part of the work that I am doing now.  It is more related to my earlier work.)

Useful Versus Usable

This is a discussion that you are not likely to read in usability texts: the topic of useful versus usable, and the value of each. Just recently I had a discussion with someone on just this topic. I have had numerous discussions with others as well, and each time it surprises me that people often do not know the difference between the two or the value of each.

Useful and Usable

If you go to the dictionary, you will discover that "useful" means "to be of serviceable value, beneficial" and, in addition, "of practical use." Pretty straightforward.

On the other hand, the definition of “usable” is “capable of being used” and “convenient and viable for use.” Also a straightforward definition.

However, if you probe more deeply into the definitions, you will note that "useful" is the first, necessary quality of a tool or system. It must be useful, or why use it? Usable is also a quality of a tool or system, but it is secondary to being useful. Necessary, yes; nevertheless, it is still secondary.

Usefulness as a quality of a tool or system is not addressed in HE-75, or in any other usability standard that I have encountered. (If anyone knows of a standard where usefulness is addressed, please add a comment to this discussion.) Usefulness is assumed.

However, I have learned that in the real world, the usefulness of any tool or system should not be assumed. It should be tested. Furthermore, with complex systems, the fundamental capabilities of a system or tool are often useful. However, not all of the capabilities of that system may be.

I have direct experience with a remote monitoring system where the primary or fundamental capabilities of the system have clear use. However, with each release of this system, as more capabilities are added, the useless capabilities may be on the verge of outnumbering the useful ones.

Bottom Line


  • Usefulness is always more important than usability. If it is not useful, it is fundamentally worthless or, at best, excess baggage and a drag on the actual and perceived quality of the tool or system.

  • Usefulness should never be assumed. It should be demonstrated. I know of too many projects where usefulness was not demonstrated. This led to the development of capabilities that wasted time and money, and that can damage reputations.

Sunday, July 18, 2010

Gadgets of the Future

Here is an interesting and, to some degree, light-hearted article about future systems that could monitor us.  It was published in the Chicago Tribune.


Here's the link: http://www.blogger.com/post-create.g?blogID=1944904461287889974

HE-75, Usability and When to Prototype and Usability Test: Take 1

Prototyping and Testing will be a topical area where I shall have much to contribute.  Expect numerous articles to appear on this topic.

I had a discussion a few days ago with one of my colleagues, who has worked as a user interface designer but has little knowledge of human factors.  He was completely unaware of the concepts of "top-down" and "bottom-up" approaches to user interface design.  I provide for you the essence of that discussion.

Top-Down Approach

The top-down approach begins with a design.  Most often the initial design is a best or educated guess based on some set of principles: aesthetics, "accepted" standards of good design, or something else.  The design is then usability and/or acceptance tested in some manner (anywhere from laboratory testing to field-collected data).  In response to the data, the design is reworked.  The process is continual.  Recent experience suggests that the top-down approach has become the predominant design methodology, particularly for the development of websites.

Top-down is a valid process, particularly for the deployment of new or unique products where a failed design does not lead to serious consequences.  It can get a design into users' hands more quickly.  The problem with the top-down approach (even when practiced correctly) is that it relies on successive approximations to an ill-defined or unknown target.  To some degree it is similar to throwing darts blindfolded, with some minimal correction information provided after each throw.  The thrower will eventually hit the bull's-eye, but it may take lots and lots of throws.
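The dart-throwing analogy can be made concrete with a toy simulation. This is purely an illustrative sketch of my own (the function name, step size, and tolerance are invented, not anything from HE-75): each test round reveals only the direction of the error, the way a coarse usability test might, so a design that starts far from the target needs many rework rounds, while one that starts near the target needs few.

```python
def rounds_to_target(design, target, step=0.5, tolerance=0.1, max_rounds=1000):
    """Count test-and-rework rounds needed to reach an unknown target
    when each round reveals only the direction of the error."""
    for round_number in range(1, max_rounds + 1):
        if abs(design - target) <= tolerance:
            return round_number  # the design finally hits the bull's-eye
        # Coarse, directional feedback only: nudge the design one step.
        design += step if design < target else -step
    return max_rounds
```

Starting at 0.0 with a target of 10.0 takes 21 rounds; starting at 9.95 takes one. That is the bottom-up argument in miniature: up-front user research moves the starting point close to the target before any testing begins.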

The top-down approach may have a side benefit in that it can lead to novel and innovative designs.  It can also have the opposite effect, when designs are nothing more than "knock-offs" of the designs of others.  I have seen both come out of the top-down approach.

Bottom-Up Approach

HE-75 teaches the use of a bottom-up approach where first one defines and researches the targeted user population.  Contextual Inquiry is also a bottom-up approach.  Since I have already discussed researching the targeted user population in depth, I'll not cover it here.  

With the bottom-up approach, the target is clear and understood, and tailoring a design to the user population(s) should be a relatively straightforward process.  Furthermore, the bottom-up approach directly addresses the usefulness issue with hard data and, as such, is more likely to lead to the development of a system that is not only usable but useful.

Useful vs. Usable

I'll address this topic more deeply in another article.  It suffices to say that usability and usefulness are distinctly different system qualities.  A system may be usable (that is, its user interface may require little training and be easy to use) while the system or its capabilities are not useful.  Or, and this is what often happens, particularly with top-down approaches, much of what the system provides is not useful or is extraneous.

Personal Preference

I am a believer in the bottom-up approach.  It leads to the development of systems that are both usable and useful sooner than the top-down approach does.  It is the only approach I would trust when designing systems where user error is of particular concern.  The top-down approach has its place; I have used it myself and will continue to use it.  But in the end, I believe the bottom-up approach is superior, particularly in the medical field.

Saturday, July 17, 2010

HE-75 Touch Screen Recommendations

I have found HE-75 to be one of the best human factors standards ever produced.  However, I have found its analysis and recommendations regarding touch screens lacking and out of date.  To put the HE-75 touch screen recommendations in perspective: in the late 1980s and early 1990s, I ran a user interface design and implementation project inside a larger project at Bell Laboratories.  To make a long story short, one of the user interfaces we needed to design and produce was a touch screen interface.  The touch screen used a CRT as a display device, and it was as flat as we could make it.  In addition, the distance between the touch screen surface and the display was about 35 mm.  When I read the issues and recommendations related to touch screens in HE-75, I experienced deja vu, as if I had been transported back to that time.

Some of the most significant advances in user interfaces have been in display technology and touch screens, in hardware and, in particular, in software.  Apple Computer has been a leader in combining advances in display technology, touch screen design, and touch screen interface software.  I would have expected the HE-75 committee to have incorporated these advances and innovations in touchscreen software into the standard.  However, what I found appears to me to be ossified thinking, or an ignoring of what has transpired.

People in the medical field are using smart phones with their advanced touch screen interfaces in their medical practice.  Smart phone touch screens, and now the Apple iPad, have become the de facto standard in touch screen technology.  My previous article related to consistency ... here's a consistency issue: is it wise to suggest that medical device touch screen interfaces look and operate differently from the accepted standard in the field?  I know this is not a simple question, but I think it is one that will need to be addressed in future editions of HE-75.

The Return: The Value of Consistency

I have been distracted for a couple of months ... working to find and land another consulting contract.  I have completed that task.  However, the contract is outside the medical device industry.  I am not completely happy with the situation; however, a position outside the medical device industry does afford some freedom when commenting on it.

Another reason for the significant gap between my last post and this one has been that I was working on a long and intricate post regarding hacking or hijacking medical device communications.  The post began to look more like a short story than a commentary.  The more I worked on it, the longer and more convoluted it became.  At some point, I may publish portions of it.

This experience with the article that would never end has led me to change the way I'll be posting articles.  In the future, my articles will be short (two to four paragraphs) and will address a single topic.  I think that some of my posts have been too long and, in some cases, overly intricate.  I still plan to cover difficult topics, but in a format that is more readable and succinct.

Consistency in User Interfaces

When it comes to making a user interface "usable," the two key qualities are 1. performance and 2. consistency.  Performance is obvious: if the interface is slow, unresponsive, or sluggish, people will not use it, and those who are stuck with using it will scream.  Consistency is somewhat less obvious and more difficult to describe.  However, when you encounter a user interface that has changed dramatically on an application you thought you knew, you understand the value of consistency.

Recently, I encountered a newer version of Microsoft Office.  Gone are the pull-down menus, and the organization of the operations and tools has changed dramatically.  Frankly, I hate the new version.  Had the newer version of Office been my first encounter with Office, I know my reaction would be different.  The new version is inconsistent with the older version.  My ability to transfer my knowledge to the newer version is hindered by the dramatic changes that have been made.

Consistency is about providing your users with the ability to reapply their knowledge of how things work to new and updated systems.  Operations work the same between applications and between older and newer versions.  In the case of the new version of Word, I am grateful that once I have selected a particular operation, such as formatting, it works essentially the same as in the older version.  However, I have tried to use the newer version of PowerPoint and its drawing capabilities.  I have not yet been successful, and I find myself going back to a drawing tool that I know how to use.

Consistency has a side benefit for the development process as well.  When operations, layouts, navigation, etc., become standardized, extending the design of a user interface becomes easier, less risky, and less likely to be rejected by users.  The effect of creating consistent user interfaces is similar to having a common language.  More on consistency and HE-75 in a later post.

Tuesday, May 4, 2010

HE-75 Topic: Design First and Ask Questions Later?

I was planning to publish Part 2 of my Medical Implant Issues series.  However, something came up that I could not avoid discussing, because it perfectly illustrates the issues regarding defining and understanding your user population.

A Story

I live in the South Loop of Chicago, within easy walking distance of the central city ("the Loop").  I do not drive or park a car on the streets of Chicago.  I walk or take public transportation.

One morning I had to run a couple of errands, and as I was walking up the street from my home, I saw a man who had parked his car and was staring at one of the new Chicago parking meter machines with dismay.  I'll tell you why a little later.

Depending on how closely you follow the news about Chicago, you may or may not know that Chicago recently sold its street-parking revenue rights to a private company.  The company (which, as you might imagine, has political connections) has recently started to replace the traditional parking meters (that is, one space, one meter) with new centralized meters.  Separately painted parking spaces and their meters have been removed.  People park their vehicles in any space on the street where their vehicle fits, go to a centralized meter on the block where they parked, and purchase a ticket (or receipt) that is placed on the dashboard of the vehicle.  Printed on the ticket is the end time until which the vehicle is legally parked.  After that time passes, the vehicle can receive a citation for parking illegally.  Many cities have moved to this system.  However, this system is missing something that I have seen on other systems.

Here's a photograph of the meter's interface ...

Chicago Street-Parking Meter

 
I have placed a black ellipse around the credit card reader and a black circle around the coin slot.  Do you see anything wrong in the photo?  ...

Getting back to the man who was staring at the parking meter ... he saw something that was very wrong: there was no place to insert paper money into the meter.


I was surprised. This was the first time I had ever taken the time to really look at one of these meters.

As street parking goes, this is expensive.  One hour will cost you $2.50.  The maximum time you can park is 3 hours; translated, that's 30 quarters, if you had the change.  You can use a credit card.  However, there are a lot of people in the City of Chicago who don't have credit cards.  This man was one of them, nor did he have 30 quarters.
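The coin arithmetic above is easy to verify.  A minimal check (the rates are the ones quoted in this post; the function name is my own):

```python
def quarters_for_max_stay(rate_per_hour=2.50, max_hours=3):
    """Quarter coins needed to buy the posted maximum stay,
    computed in cents to avoid floating-point drift."""
    total_cents = round(rate_per_hour * 100) * max_hours  # $7.50 -> 750 cents
    return total_cents // 25
```

At $2.50 per hour for 3 hours, that is indeed 30 quarters jingling in your pocket.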

I have seen similar machines used in other cities and towns, and they have a place for paper money.  Oak Park, the suburb immediately west of Chicago, has similar meters, and they provide a way to pay for parking with paper money.  What gives with this meter?

I largely take the City of Chicago off the hook for the design of this parking meter; I don't believe they had anything to do with it.  I have parked in city garages over the years (when I was living in the suburbs), and the city garages have some pretty effective means of paying for parking, using either cash (paper money) or a credit card.  But I think the city should have been more aware of what the parking meter company was deploying.  I think they failed the public in that regard.

I could take the cynical view and suggest that this is a tactic by the private company to extract more revenue for itself and the city through issuing parking citations.  However, I think it more likely that someone designed the system without any regard for the population expected to use it, and that the city fell down on its responsibility to oversee what the parking company was doing.

Failure to Include a Necessary Feature

For the purposes of examining the value of usability research (that is, the research to understand your users and their environment), what does this incident teach?  It teaches that failure to perform the research to understand your user population can result in the failure to include a necessary capability, such as a means to pay for your parking with paper money.

What I find interesting (and plausible) is that this parking meter design could have been usability tested and passed the test.  The subjects involved in the usability test could have been provided with quarters and credit cards, and under those conditions they would have performed admirably.  However, the parking meter fails the deployment test because the assumptions regarding the populace, conditions, and environment fail to align with the reality of the needs of the population it should have been designed to serve.

Another Failure: Including the Unnecessary or Unwanted Features   


As I was walking to my destination, I started composing this article.  While thinking about what to include, I remembered what a friend of mine said about a system whose development he was in charge of.  (I have to be careful about how I write this.  He is a friend for whom I have great respect, and defining the set of features included in this system is not his responsibility.)


He said that "... we build a system with capabilities that customers neither need nor want."  The process for selecting capabilities to include in a product release at this company is insular, more echo chamber than outreach to customers or users.  As a result, this company has failed to understand its customers, its users, their work environment, etc.

Some might suggest that the requirements-gathering process should reduce the likelihood of either failure occurring: the failure to include necessary features, or the inclusion of unnecessary or unwanted ones.  Again, I know that in the case of my friend's company, requirements gathering takes its direction largely from competitors instead of customers and/or users.  So what often results is the release of a system that fails to include capabilities that customers want and includes capabilities that customers do not want or need.
   
I don't know about you, but I see the process my friend's company engages in as a colossal waste of money and time.  Why would any company use or continue to use such a process?  


Ignorance, Stupidity or Arrogance - Or a combination?

I return to the title of this article, "Design First and Ask Questions Later?", and the question I pose above.  I have seen company after company treat design as an end in itself, failing to understand that creating a successful design requires an effective process that includes research and testing.  Failure to recognize this costs money and time, and possibly customers.  It is not always a good idea to be first in the market with a device or system that includes a trashy user interface.

So why do companies continue to hang on to failing processes?  Is it ignorance, stupidity or arrogance?  Is it a combination?  My personal experience suggests a combination of all three factors, with the addition of two others: delusion and denial.  These are two factors that we saw in operation leading to the financial crisis of 2008.  I think people will continue to believe that what they're doing is correct right up to the point when the whole thing comes crashing down.

The Chicago parking meter has a user interface with a poor and inconsiderate design ... inconsiderate of those who would use it.  (If I get comments from city officials, it will probably be for that last sentence.)  However, I don't believe that the parking meter company will face any major consequences, such as being forced to redesign and redeploy new meters.  They will have gotten away with creating a poor design.  And they're not alone.  There are lots of poorly designed systems, and some of the poor designs can be, and have been, life threatening.  Yet there are no major consequences.  For medical devices and systems, I believe this needs to change, and I hope the FDA exerts its oversight authority to ensure that it happens.

Medical Device Design: Reader Suggested Books


One of my readers provided me with the following list of books related to usable medical product design.  I pass this list of three books on to you.  I do not yet have them in my library, but they would be suitable additions.

Medical Design Article: FDA announces Medical Device Home use Initiative

As I was working on a human factors related article, this article from Medical Design appeared.  Here's the link to the article: http://medicaldesign.com/contract-manufacturing/fda-announces-medical-device-home-050310/

I think this article is interesting and telling with respect to how the FDA will assert its regulatory authority regarding usability issues.  Here are a few quotes from the article.

Recognizing that more patients of all ages are being discharged from hospitals to continue their medical treatment at home, the U.S. Food and Drug Administration announced an initiative to ensure that caregivers and patients safely use complex medical devices in the home. (My emphasis.) The initiative will develop guidance for manufacturers that intend to market such devices for home use, provide for post-market surveillance, and put in place other measures to encourage safe use of these products. The FDA is also developing education materials on home use of medical devices.
These home care patients often need medical devices and equipment such as hemodialysis equipment to treat kidney failure, wound therapy care, intravenous therapy devices, and ventilators. 

Monday, May 3, 2010

HE-75 Topic: Risk Management

One more HE-75 topic before proceeding to design and design-related activities.  The topic: risk management.

Reading HE-75, you will note that the document continually discusses risk management and reducing risk.  In fact, the entire document is fundamentally about reducing risk: the risks associated with a poor or inappropriate design.

If you drive a car, especially if you have been driving for more than a decade or two, you will note that driving a car with well-designed controls and well-laid-out displays seems inherently easier than driving one that is poorly designed.  Furthermore, it has been demonstrated time and again that safety increases when a driver has been provided with well-designed controls and displays; driving becomes less risky for everyone concerned.

Car makers now see safety as a selling point.  (Look at a car built in the '40s, '50s or '60s and you'll note how few safety features it included.)  Manufacturers are beginning to include driver-error detection systems in their luxury models.  For example, one manufacturer has a system that alerts the driver to the presence of another vehicle in the space the driver wants to move into.  One of the qualities of a well-designed user interface is the ability to anticipate the user, identify and trap errors or potential user errors, and provide a means or path for preventing or correcting the error without serious consequences.  Car manufacturers have been moving in this direction.  I suggest that the adoption of HE-75 will be the FDA's way of pushing medical manufacturers in the same direction.


Risk Management: Creating a Good Design and Verifying It


My many blog postings on HE-75 will address the specifics of how to create a good design and verify it, and the process of incorporating these design and verification activities into a company's risk management processes.  In this posting I want to address two issues at a high level.


First, I want to address what a good design is and how to create it.  Creating a good design requires a process such as the one outlined by HE-75.  I am often amused by hiring managers and HR people who want to see a designer's portfolio while having no conception of how the designs were created.  A good user interface design is not artistry; it is the result of an effective process.  It should not only look good, but should enable users to perform their tasks effectively and with a minimum of errors.  Furthermore, it should anticipate users, trap errors, and prevent serious errors from occurring.  And finally, it should provide users with paths or instructions for correcting errors.  This is what HE-75 teaches researchers and designers, and to that end, the design process should reduce risk.  Think this is not possible?  Then I suggest you spend some time in the cockpit of a commercial airliner.  It is possible.
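A tiny sketch of what "trap errors and provide a correction path" can mean in software.  Everything here is hypothetical and of my own invention (the field, the limits, and the messages are illustrative, not from HE-75 or any device): the entry is validated, malformed or out-of-range values are trapped rather than silently accepted, and the returned message tells the user how to correct the entry.

```python
def read_dose(entry, low=0.5, high=10.0):
    """Validate a dose entry; return (value, None) on success or
    (None, correction_message) so the interface can guide the user."""
    try:
        dose = float(entry)
    except ValueError:
        # Trap malformed input and explain how to fix it.
        return None, "Dose must be a number, e.g. 2.5"
    if not low <= dose <= high:
        # Trap out-of-range input before it can cause harm downstream.
        return None, f"Dose must be between {low} and {high} mg"
    return dose, None
```

The point is the shape of the interaction, not the specifics: the error is caught at the moment of entry, and the user is handed a path back to a correct state instead of a dead end.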


Second, HE-75 teaches that design verification should be empirical and practiced often throughout the design process.  This is an adjunct to classic risk management, which tends to be speculative or theoretical in that it relies on brainstorming and rational analysis.  HE-75 teaches that medical device and system manufacturers should not rely on opinions alone, although opinions provided by subject-matter experts can provide valuable guidance.  HE-75 instructs that subjects drawn from the targeted population(s) be used to guide and test the design at each stage of the process.  This is the essence of risk management and risk reduction in the design of user interfaces.

Additional Resources

I have this book in my library.  It provides some good information, but it's not comprehensive.  Unfortunately, it's the only book I know of in this field.  

These books I do not own, but I provide the links for information purposes.  I am surprised at how few books there are in the field of medical risk management.  That may go a long way toward explaining the large number of medical errors, especially the ones that injure or kill patients.

Risk Management Handbook for Health Care Organizations, Student Edition (J-B Public Health/Health Services Text) 

Medical Malpractice Risk Management