
Thursday, December 12, 2019

Submission of the Human Engineering File to the FDA and Other Regulatory Bodies, Sections 6 and 7: Part V

I cover Sections 6 and 7 in this article as shown below:

6. Summary of preliminary analyses and evaluations
  • Evaluation methods used
  • Key results and design modifications implemented in response
  • Key findings that informed the human factors validation test protocol
7. Description and categorization of critical tasks
  • Process used to identify critical tasks
  • List and descriptions of critical tasks
  • Categorization of critical tasks by severity of potential harm
  • Descriptions of use scenarios that include critical tasks
I consider Sections 6 and 7 together because the information for these two sections should have come from the formative stage of the research and design process. These two sections could be combined into a single section. However, it is apparent that the FDA (and probably other regulatory bodies as well) considers Section 7, Description and categorization of critical tasks, important enough to have its own separate section.

Importance of Getting It Right


The contents of these sections, the descriptions and explanations provided, can be the difference between: 

  • An easy, unquestioned acceptance of what you've done, or
  • A difficult, question-riddled review of the work that you performed, resulting in:
    • Approval delays,
    • A reworking of the submitted materials,
    • Requests for additional research to be performed, or
    • Rejection of the human engineering file

To fully address what should be included in Sections 6 and 7, you need to examine your entire HE process in the context of the research and development program of your medical device or system and determine whether your HE process can adequately address the reporting requirements of these two sections. These sections form the core of the report of your research and design process up to the point immediately before you begin your final phase of testing, namely verification and validation (summative) testing.


Section 6: Summary of Preliminary Analyses and Evaluations


What Should be in Section 6

I briefly cover the points of what should be included in Section 6. Assuming that you are a human engineering professional, you should already have a reasonable understanding of the meaning of each of the three requirements listed below.

1. Evaluation methods used

This comprises the entire body of research performed, including all of the data collected before implementing a foundational or initial design, and all of the testing performed on the design.

2. Key results and design modifications implemented in response

What findings from your research led you to create your initial design, and what factors led you to modify that design?

3. Key findings that informed the human factors validation test protocol

How did you arrive at your research protocol for summative/validation testing? How do you know that your validation protocol is appropriate and will verify that your system or device is safe for use?

That's the brief overview of what should be in Section 6. Beyond that, however, Section 6 should contain the logical threads of justification for doing what you did: for creating your research and development plan, for the initial/foundational design, and for how you went about modifying that design.
   
Don't be deceived by the seeming simplicity of Section 6. It is far more complicated and demands much more investigative and design process rigor than one might imagine. 


Human Engineering (HE): Research and Development


Section 6 is the section where you lay out all research and development performed in relationship to human engineering. Thus, Section 6 becomes the place where you make your case for the research that you performed and the design choices that you made. After reading Section 6, the reviewer should have a clear understanding and be in agreement with the research and design process that was undertaken. This includes the rationale for the research plan as proposed and undertaken including the rationale for any changes made to the plan on the basis of research findings. It will include the rationale for the design process, including the initial or foundational design and the reasons for changes made through the design iteration process.

Human factors is the study of how humans interact with or operate systems and devices. It's fundamentally research. Human engineering incorporates human factors but also encompasses design and the design process, which should be driven by research at its foundation. The research that directs and informs design and the design process includes field, laboratory, and library research, risk analysis, and research-based standards; and, in the absence of the ability to collect empirical data, scenarios and interaction walk-throughs and analysis.


You will need to defend your rationale for the specific research projects undertaken and the design choices made. Because the narrative is an overview, it's often a good place to explain much of the logic for the research undertaken and the design choices made.


Defending HE Research and Design Planning and Choices


Adequate and effective justification of your research and development plan and design choices will often be the key to ensuring unquestioned acceptance of your submission. Here are some suggestions; a sketch of one way to record these justification threads follows the list:

  1. Justifying the Research and Development Plan -- the means for creating a usable, low-risk system or device with a low likelihood of use error. Reasoning and justifications for creating a research and development plan for this system or device include:
    • Compliance with IEC 62366 (part 1).
    • Conformance to FDA HE program guidance (on the FDA website).
    • Guidance from AAMI/ANSI HE-75.
    • Guidance from previous, similar, and accepted plans.
    • The system or device is a next-generation release of a currently available commercial product; thus, the research and development already performed, along with field-collected data, provide guidance for the research and design plan of the next-generation product.
  2. Justifications for performing specific research include:
    • Planned research
    • Research fits within the guidelines set within the research plan.
    • Research is designed to answer specific research questions. During a research program, questions often arise (human performance questions, design-specific questions, etc.) that were not specified in the research plan. Often these types of studies are applicable to the research and development of a variety of devices and systems; in this case the research is "question-driven." Those research questions need to be clearly defined within the research protocol, where they become the clear justification for the research and for the applicability and potential value of the findings.
    • Findings from planned research suggest the need for new research not originally planned.
  3. Justification for the foundational design: the initial design that is prototyped, usability tested, and then iterated. The foundational design establishes the basic design philosophy (appearance and operation) that will likely be commercialized. While the foundational design will likely be updated and improved throughout the research and development process, fundamentally it will likely maintain the same design philosophy. Thus, establishment of the foundational design may be the most consequential step in the research and development process. Justifications for the foundational design include:
    • Updated version of an earlier, accepted design using the same design philosophy, with updates and improvements driven by field research, customer feedback, and research on the use of the system under actual conditions.
    • Findings from formative research as defined by the research and development plan undertaken before initiating a design.
    • Compliance with accepted design standards, e.g., AAMI HE-75. (There is a wide array of design standards issued and accepted by US agencies as well as by agencies of other countries. When localization of a design is required, the design standards issued by the targeted country should be considered and referenced.)
  4. Justification for changes made to the foundational and modified designs.
    • Findings from prototype testing.
    • Findings from expert reviewers: resulting from design walkthroughs/reviews and/or interactions with the device or system.
    • Limited field tests of prototypes.
  5. Justification that the design has reached the stage for verification and validation (summative) testing, and that a research protocol can be written that effectively and realistically tests the system or device to demonstrate that it will be safe for use by members of the targeted population in the intended use environment(s).
    • This is the hand-off point to the summative testing phase.
    • Justification that the system or device is ready to hand off: the formative testing up to this point should have subjected the system or device, multiple times, to all of the testing that it will be subjected to in validation, and the system or device should have passed those tests each time. Thus, if the research and development plan was properly executed, nothing of any concern should come out of verification and validation testing. If there are findings that are the least bit concerning, then it is time to reexamine your research and development planning and protocols.
    • Finally, if your formative testing, meaning all of the testing performed up to this point, has been comprehensive, rigorous, and complete, then that testing should dictate the verification and validation research protocols.
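Below is a minimal sketch, in Python, of one way to record these threads of justification so that each claim in the Section 6 narrative can cite the documents behind it. Everything in it (the record fields, the IDs, the example finding) is hypothetical and purely illustrative; it is not an FDA-prescribed format.

```python
# Hypothetical traceability record linking a research finding to the design
# decision made in response and to the documents that evidence both.
from dataclasses import dataclass, field

@dataclass
class JustificationRecord:
    finding_id: str                  # e.g., formative study report and finding number
    finding: str                     # what the research showed
    decision: str                    # design choice or plan change made in response
    rationale: str                   # why this response addresses the finding
    evidence: list[str] = field(default_factory=list)  # document references

records = [
    JustificationRecord(
        finding_id="FS-03.7",
        finding="Participants confused the bolus and basal rate fields in prototype B",
        decision="Moved the two fields to separate screens, each with confirmation",
        rationale="Removes the adjacent-field slip observed in formative study 3",
        evidence=["Formative Report FS-03", "Design Change Notice DCN-114"],
    ),
]

# Each Section 6 narrative claim can then cite the records that support it.
for r in records:
    print(f"{r.finding_id}: {r.decision} (see {', '.join(r.evidence)})")
```

A table built from such records gives the reviewer a direct map from every design change back to the research that justified it.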

What Should be Included in the Section 6 Narrative


I suggest that your narrative be written in the form of a story. It should be a narration that describes in a linear fashion (from the beginning to immediately before the validation step) what you did and why you did it:

  • if it's research, summarize what you did and what you found, 
  • if it's your foundational design, provide a high-level description of how you arrived at this design (include enough figures to be sure that a reviewer will understand your description), and
  • if it's a design update, explain what change or changes were made and why.
Be sure to include references to your submitted materials in your HE file.



Section 7: Description and categorization of critical tasks


Identifying the critical tasks that will be performed on your system or device should be part of formative research. Often the ability to identify the set of critical tasks is beyond the expertise of the human engineering professional, and identifying as well as categorizing the critical tasks requires the support of subject-matter experts (who should be included from the beginning of the formative research stage). My practice has been to integrate subject-matter experts into the research and design process from product inception.

The list of requirements for Section 7 includes:

1. Process used to identify critical tasks

With your subject-matter experts, describe the process used to identify your critical tasks. 

2. List and descriptions of critical tasks

Include with this your justifications and reasoning for this list. 

3. Categorization of critical tasks by severity of potential harm

In addition, if any of your critical tasks have the possibility of inflicting moderate to critical harm, I suggest that you describe the mitigations developed to minimize the likelihood that such harm would ever occur.


4. Descriptions of use scenarios that include critical tasks

These use scenarios should form a fundamental part of both your testing and the justification and rationale for your design (and updates to your design).

Section 7 Narrative


I suggest that in your narrative you include a table with the information from items 2 and 3 above, as sketched below. I would add a brief summary of the process that was used to identify your critical tasks. Finally, include a reference to the use scenarios that include the critical tasks. You don't need to include the scenarios themselves in your narrative; a reference should be sufficient.
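As a sketch of what such a table might contain, here is a hypothetical example in Python. The task IDs, descriptions, and severity labels are invented; substitute the output of your own critical-task identification and risk analysis.

```python
# Hypothetical Section 7 summary table combining the critical-task list
# (item 2) with severity categorization (item 3). Illustrative only.
critical_tasks = [
    {"id": "CT-1", "task": "Set the infusion rate",
     "severity": "Critical (potential overdose)"},
    {"id": "CT-2", "task": "Prime the infusion line",
     "severity": "Serious (potential air embolism)"},
    {"id": "CT-3", "task": "Confirm patient identity before delivery",
     "severity": "Moderate (delay of therapy)"},
]

print(f"{'ID':<6}{'Critical task':<45}{'Severity of potential harm'}")
for t in critical_tasks:
    print(f"{t['id']:<6}{t['task']:<45}{t['severity']}")
```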

______________________
Note: I plan on periodically updating this article as I learn more and reconsider what I have written. With each update, I'll note at the top of this article when it was updated and list some of the changes that I have made.

Monday, November 25, 2019

Supplement to Part IV Submissions to FDA: Risk Management and Use Errors

Additional Thoughts:

As part of your risk analysis and use error identification, you should identify the circumstances or origin of the reported use error. 


  • Origins of reports of use errors, in order of likely relevance:
    1. Field-reported use errors: These require special attention because they were discovered during use by real users under actual conditions; that is why field-reported use errors are the most important. They deserve special consideration, particularly if the use error was responsible for any harm or if it occurs significantly more often than originally predicted. Consider performing a root cause analysis to determine why the use error occurred. In particular, factor the conditions of use, the environment, and who made the use error into the analysis of why it was made. Be sure to determine which assumptions regarding use, conditions of use, and predicted user characteristics were violated.
    2. Errors reported from empirical studies: These are use errors that have been observed under laboratory conditions or other kinds of testing conditions defined by researchers. These use errors come from members of the expected user population(s) who have the expected requisite level of education and training, and in testing sessions the conditions of use, including the environment, have been structured and manipulated by the researchers. The results of the research and the use errors detected may be valid, but they are narrow in scope with respect to the situations of use, the use environment, actual user characteristics (including education and training), and so on.
    3. Analysis based:
      • Scenarios: A scenario is a set of connected events with a beginning, a series of possible steps or actions, and an end point. Scenarios are generally derived from real-world knowledge of the environment, the people involved (their characteristics such as education, training, responsibilities, experience, and so on), and the kinds of actions those people would take with the systems and devices in development in order to accomplish a particular task. Scenarios also consider possible paths and actions that would lead to making a use error. Scenarios, particularly worst-case scenarios, can be an effective means for detecting possible use errors. With newly designed products and systems, this may be one of the first means to identify possible use errors and determine their possible harm. Nevertheless, scenario-based use errors come from thought experiments and lack empirical validation.
      • Brainstorming: Brainstorming is an unstructured or free-form process of analysis to capture possible use errors. Brainstorming sessions can be a particularly useful means of capturing conditions and use errors that may not have been seen or considered using other methods. However, use errors derived from brainstorming are not based on empirical evidence. Nevertheless, those who uncover use errors by brainstorming often have high levels of expertise and experience in the technical area under consideration.
Each process has its own value, and every method to detect or originate possible use errors should be considered. When reporting use errors in the submission, I suggest that where each reported use error originated be included in the final submitted report, and that a summary of where use errors originated be included in the narrative; a sketch of one way to record this follows.
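One way to make the origin of each use error traceable is to carry it as a field on the use-error record itself. The sketch below is hypothetical (the field names, categories, and example entries are mine, not regulatory vocabulary); it simply shows how a per-error origin field supports the origin summary suggested above.

```python
# Hypothetical use-error record that carries its origin, so both the
# submitted report and the narrative can summarize where errors came from.
from collections import Counter
from dataclasses import dataclass

@dataclass
class UseError:
    error_id: str
    description: str
    origin: str       # "field report", "empirical study", "scenario analysis", "brainstorming"
    severity: str     # from your risk analysis
    mitigation: str   # reference to the risk-control measure

errors = [
    UseError("UE-01", "Dose confirmed without checking units", "field report",
             "critical", "RC-07: units forced onto the confirmation screen"),
    UseError("UE-02", "Battery door left unlatched", "empirical study",
             "moderate", "RC-12: latch redesigned with audible click"),
]

# Origin summary for the narrative.
print(Counter(e.origin for e in errors))
```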





Monday, November 11, 2019

Submission of the Human Engineering File to the FDA and Other Regulatory Bodies

Background

About a year ago I was asked what I thought was the most difficult phase of the medical device human engineering process. Frankly, I'd never before considered such a question. I could not identify any phase of the process that I considered more difficult than any other. I considered whether early-phase formative research and testing, or risk identification, use errors, and risk management, would be the most difficult. No; actually, for me these have always proven to be the most interesting phases of the process. Challenging, yes; difficult, no.

The questioner had an answer in mind: she believed that the validation testing was the most difficult. I disagreed. I believe that validation testing is most often the least difficult of all the phases of the process. Why? Because validation comes at the end of the entire process and is based on all of the work done previously to get to the point of validation testing. Thus the procedure of a validation test should flow naturally and easily from the earlier work.

In the end, we agreed to disagree. However, the question never left me. 

I have finally come up with the answer. The most difficult phase of the process is the creation of the narrative for the regulatory reviewers. What I refer to is more than putting together the folder of documents of all of the human engineering related activities. It is the construction of a cohesive and understandable narrative that provides a reviewer with an overall view of the reasoning and logic of the steps taken and the procedures performed, demonstrating that the human engineering process was sound and that the system resulting from it will be safe to use. This is the most difficult phase of the human engineering process, especially if writing the narrative comes at the end.

Human Engineering Pre-Market Submission

Here is the outline of what the FDA expects in a Human Engineering Pre-Market Submission as provided by the FDA on their website:

1. Conclusion
The device has been found to be safe and effective for the intended users, uses and use environments.
  • Brief summary of HFE/UE processes and results that support this conclusion
  • Discussion of residual use-related risk
2. Descriptions of intended device users, uses, use environments, and training
  • Intended user population(s) and meaningful differences in capabilities between multiple user populations that could affect user interactions with the device
  • Intended use and operational contexts of use
  • Use environments and conditions that could affect user interactions with the device    
  • Training intended for users
3. Description of device user interface
  • Graphical representation of device and its user interface
  • Description of device user interface
  • Device labeling
  • Overview of operational sequence of device and expected user interactions with user interface
4. Summary of known use problems
  • Known use problems with previous models of the subject device
  • Known use problems with similar devices, predicate devices or devices with similar user interface elements
  • Design modifications implemented in response to post-market use error problems
5. Analysis of hazards and risks associated with use of the device
  • Potential use errors
  • Potential harm and severity of harm that could result from each use error
  • Risk management measures implemented to eliminate or reduce the risk
  • Evidence of effectiveness of each risk management measure
6. Summary of preliminary analyses and evaluations
  • Evaluation methods used
  • Key results and design modifications implemented in response
  • Key findings that informed the human factors validation test protocol
7. Description and categorization of critical tasks
  • Process used to identify critical tasks
  • List and descriptions of critical tasks
  • Categorization of critical tasks by severity of potential harm
  • Descriptions of use scenarios that include critical tasks
8. Details of human factors validation testing
  • Rationale for test type selected (i.e., simulated use, actual use or clinical study)
  • Test environment and conditions of use
  • Number and type of test participants
  • Training provided to test participants and how it corresponded to real-world training levels
  • Critical tasks and use scenarios included in testing
  • Definition of successful performance of each test task 
  • Description of data to be collected and methods for documenting observations and interview responses
  • Test results: Observations of task performance and occurrences of use errors, close calls, and use problems 
  • Test results: Feedback from interviews with test participants regarding device use, critical tasks, use errors, and problems (as applicable)  
  • Description and analysis of all use errors and difficulties that could cause harm, root causes of the problems, and implications for additional risk elimination or reduction 
Each of the items listed will include one or more documents that address, in some manner, each of the sub-points under that item. You could place these documents into a folder, identify where they belong, and submit this to the FDA or other regulatory body. However, I advocate something more: write an overarching narrative that takes the reader through the path of not only what was performed but why as well. The narrative can be, and probably should be, based on the outline above. (I'm sure there are exceptions, but the outline above should be your starting point.)
The narrative should provide a comprehensive flow that interconnects each of the phases and explains the reasoning for what was done, including the rationale behind the design and operation of the system and why it is appropriate for use by the identified user population as well as by other likely populations who might encounter it.

Why Write a Narrative?

Is a narrative required as part of a submission? From all that I can tell, no: it is not a requirement of submissions to regulatory bodies.

However, consider this: if you can't explain the logic of what you did to yourself, how hard will it be for a reviewer to comprehend it? And if the reviewer can't comprehend what you did -- including the reasoning and logic behind it -- could your submission be at risk for rejection? The answer is "yes": you may be putting your submission at risk for rejection.

The narrative is analogous to a completed jigsaw puzzle. A human engineering file without a narrative is analogous to just the jigsaw puzzle pieces. Yes, everything is there, but what is it supposed to be?

Submitting a human engineering file that includes a comprehensive narrative can ensure that your submission is understandable, to you as well as to your reviewers. It can ensure that nothing that should have been included in your submission is left out. Again, going back to the jigsaw puzzle analogy, you don't know that you've got a missing piece or pieces until you've assembled the puzzle.

The narrative provides reviewers with a framework for understanding what you've done. Interestingly enough, this will likely minimize any questions a reviewer might have about your submission, and it will likely minimize the likelihood that you'll get questions that you cannot answer.

One more thing to note: if your narrative is clear and comprehensive, it's likely that the reviewer or reviewers will read no further or will simply scan the foundational documents to ensure that they do in fact support what is stated in the narrative. This could speed the regulatory review and acceptance.

More Articles on This Topic

I'll be writing a series of articles on the topic of human engineering file submission narratives over the next week or two. I'll focus on specific areas of the narrative and discuss some of what I have done with regards to putting together narratives for submission to regulators. 



Saturday, December 1, 2018

Commentary: HE-75 and IEC 62366 and Cleaning Up the Messes

I was reminded recently, when I was made aware of the International Consortium of Investigative Journalists' database of medical device recalls, of what human factors professionals working in the area of human engineering for medical devices are often called on to do: clean up the mess created by a failed design process that somehow failed to incorporate research. (Note that medical device development isn't the only domain where this kind of failure occurs; however, the impact of medical device failures can often include fatalities.) The persons responsible for designing an awful, unusable, and in some cases useless user interface expect the usability expert to come in, take one look, and create a beautiful user interface. This is absurd!

Writing from my own perspective, there is nothing that a usability professional likes to do less than to correct a failed design that resulted from a failed design process. This week I was asked to save a group of programmers and user interface designers from the monstrosities that they had created. What was particularly strange was that the leader of the project thought that I could just redesign something by looking at what they had created. It was bizarre. Unfortunately, I had to deliver several harsh messages regarding the design process and the design, and they were not well received. (Nevertheless, that is my job.)

Here is the point I want to make clear to anyone who reads this: process and the resulting design should be considered two sides of the same coin. The outcome of a good design process is generally a good design. A nonexistent or poor design process often leads to a poor design, and a design that gets worse with each design iteration as attempts are made to fix problems or incorporate enhancements.

The processes and design direction provided by HE-75 and IEC 62366 can serve as a foundation for researching and designing systems that affect users in nearly any industry, particularly in those industries where the potential for harm is significant.

Sunday, August 1, 2010

HE-75 Topic: Cleaning Up the Mess

I received a reminder this week of what usability professionals are often called on to do – cleaning up the mess created by a failed process. Somehow, the persons responsible for designing an awful, unusable, and in some cases useless user interface expect the usability expert to come in, take one look, and create a beautiful user interface. This is absurd! It was the "nightmare" come true - something related to one of my other postings: HE-75 topic: Design first and ask questions later.

Writing from my own perspective, there is nothing that a usability professional likes to do less than to correct a failed design that resulted from a failed design process. This week I was asked to save a group of programmers and user interface designers from the monstrosities that they had created. What was particularly strange was that the leader of the project thought that I could just redesign something by looking at what they had created. It was bizarre. Unfortunately, I had to deliver several harsh messages regarding the design process and the design, and they were not well received. (Nevertheless, that is my job.)

Here is the point I want to make to anyone who reads this: process and the resulting design should be considered two sides of the same coin. A good design process nearly always results in a good design. A nonexistent or poor design process leads to a poor design. HE-75's design process can serve as a foundation design process for designing user interfaces in nearly any industry, particularly in those industries where the potential harm is particularly severe. Where I am currently working, I plan to use HE-75 as one of the foundation documents to set user interface design standards. And as I mentioned, I am not currently working in the medical or medical device industry. However, I have come to believe that even in this industry the level of harm can be significant. Thus, I shall incorporate HE-75.
 
Next time, I'll review some of the literature that might be of use to the community.

Saturday, July 24, 2010

Advanced Technology

I mentioned in an earlier article that I have moved out of medical devices for the time being. However, I have not moved out of remote monitoring or remote programming (it is called "remote configuration" where I am now).

We have been given the go-ahead to explore a variety of new and, what some may consider, off-the-beaten-path technologies. Although I shall not be able to discuss specific studies or approaches, I shall be able to discuss how some technologies not currently used by the medical and medical device communities might be useful to them.

I shall post updates on this topic from time to time.

Here are some platforms to consider for mobile technology.  (This is not part of the work that I am doing now.  It is more related to my earlier work.)

Sunday, July 18, 2010

HE-75, Usability and When to Prototype and Usability Test: Take 1

Prototyping and Testing will be a topical area where I shall have much to contribute.  Expect numerous articles to appear on this topic.

I had a discussion a few days ago with one of my colleagues who has worked as a user interface designer but has little knowledge of human factors. He was completely unaware of the concepts of "top-down" and "bottom-up" approaches to user interface design. I provide for you the essence of that discussion.

Top-Down Approach

The top-down approach begins with a design. Most often the initial design is a best or educated guess based on some set of principles: aesthetics, "accepted" standards of good design, or something else. The design is usability and/or acceptance tested in some manner (anywhere from laboratory testing to field-collected data). In response to the data, the design is reworked. The process is continual. Recent experience suggests that the top-down approach has become the predominant design methodology, particularly for the development of websites.

Top-down is a valid process, particularly for the deployment of new or unique products where a failed design does not lead to serious consequences. It can get a design into users' hands more quickly. The problem with a top-down approach (even when practiced correctly) is that it relies on successive approximations to an ill-defined or unknown target. To some degree it's similar to throwing darts blindfolded with some minimal correction information provided after each throw. The thrower will eventually hit the bull's-eye, but it may take lots and lots of throws.

The top-down approach may have a side benefit in that it can lead to developing novel and innovative designs. However, it can have the opposite effect when designs are nothing more than "knock-offs" of the designs of others. I have seen both come out of the top-down approach.

Bottom-Up Approach

HE-75 teaches the use of a bottom-up approach where first one defines and researches the targeted user population.  Contextual Inquiry is also a bottom-up approach.  Since I have already discussed researching the targeted user population in depth, I'll not cover it here.  

With the bottom-up approach, the target is clear and understood, and tailoring a design to the user population(s) should be a relatively straightforward process. Furthermore, the bottom-up approach directly addresses the usefulness issue with hard data and, as such, is more likely to lead to the development of a system that is not only usable but useful.

Useful vs. Usable

I'll address this topic more deeply in another article. It suffices to say that usability and usefulness are distinctly different system qualities. A system may be usable, that is, the user interface may require little training and be easy to use, but the system or its capabilities may not be useful. Or, and this is what often happens particularly with top-down approaches, much of what the system provides is not useful or is extraneous.

Personal Preference

I am a believer in the bottom-up approach.  It leads to the development of systems that are both usable and useful sooner than the top-down approach.  It is the only approach that I would trust when designing systems where user error is of particular concern.  The top-down approach has its place and I have used it myself, and will continue to use it.  But, in the end, I believe the bottom-up approach is superior, particularly in the medical field. 

Saturday, July 17, 2010

HE-75 Touch Screen Recommendations

I have found HE-75 to be one of the best human factors standards ever produced. However, I have found its analysis and recommendations regarding touch screens lacking and out of date. To place a perspective on the HE-75 touch screen recommendations ... in the late 1980s and early 1990s, I ran a user interface design and implementation project inside of a larger project at Bell Laboratories. To make a long story short, one of the user interfaces we needed to design and produce was a touch screen interface. The touch screen used a CRT as a display device, and it was as flat as we could make it. In addition, the distance between the touch screen surface and the display was about 35 mm. When I read the issues related to touch screens and the recommendations in HE-75, I experienced deja vu; I felt as if I'd been transported back to that time.

Some of the most significant advances in user interfaces have been in the areas of display technology and touch screens, with respect to hardware and, in particular, software. Apple Computer has been a leader in combining the advances in display technology, touch screen design, and touch screen interface software. I would have expected the HE-75 committee to have incorporated these advances and innovations in touch screen software into the standard. However, what I found appears to me to be ossified thinking, or an ignoring of what has transpired.

People in the medical field are using smart phones with their advanced touch screen interfaces in their medical practice. Smart phone touch screens, and now the Apple iPad, have become the de facto standard in touch screen technology. My previous article related to consistency ... here's a consistency issue: is it wise to suggest that medical device touch screen interfaces look and operate in a way different from the accepted standard in the field? I know this is not a simple question, but I think it is one that will need to be addressed in future editions of HE-75.

Tuesday, May 4, 2010

HE-75 Topic: Design First and Ask Questions Later?

I was planning on publishing Part 2 of my Medical Implant Issues series.  However, something came up that I could not avoid discussing because it perfectly illustrates the issues regarding defining and understanding your user population.

A Story

I live in the South Loop of Chicago - easy walking distance to the central city ("the Loop"). I do not drive or park a car on the streets in the city of Chicago. I walk or take public transportation.

One morning I had to run a couple of errands and as I was walking up the street from my home, I saw a man who had parked his car and was staring at the new Chicago Parking Meter machine with dismay.  I'll tell you why a little later.

Depending on how closely you follow the news about Chicago, you may or may not know that Chicago recently sold its street parking revenue rights to a private company. The company (which, as you might imagine, has political connections) has recently started to replace the traditional parking meters (that is, one space, one meter) with new meters. Separately painted parking spaces and their meters have been removed. People park their vehicles in any space on the street where their vehicle fits, go to a centralized meter on the block where they parked, and purchase a ticket (or receipt) that is placed on the dashboard of the vehicle. On the ticket is printed the end time until which the vehicle is legally parked. After that time passes, the vehicle can receive a citation for parking illegally. Many cities have moved to this system. However, this system has something missing that I have seen on other systems.

Here's a photograph of the meter's interface ...

Chicago Street-Parking Meter

 
I have placed a black ellipse around the credit card reader and a black circle around a coin slot. Do you see anything wrong in the photo? ...

Getting back to the man who was staring at the parking meter ... he saw something that was very wrong ... there was no place to insert paper money into the meter.


I was surprised. This was the first time I had ever taken the time to really look at one of these meters.

As street parking goes, this is expensive. One hour will cost you $2.50. The maximum time that you can park is 3 hours - translated, that's 30 quarters if you pay with change. You can use a credit card. However, there are a lot of people in the City of Chicago who don't have credit cards. This man was one of them, and he didn't have 30 quarters either.

I have seen machines used in other cities and towns, and they have a place for paper money. Oak Park, the suburb immediately west of Chicago, has similar meters, and they have a place to use paper money to pay for parking. What gives with this meter?

I let the City of Chicago off the hook for the design of this parking meter. I don't believe they had anything to do with the design of the meter. I have parked in city garages over the years (when I was living in the suburbs), and the city garages have some pretty effective means of enabling one to pay for parking - either using cash (paper money) or a credit card. But I think the city should have been more aware of what the parking meter company was deploying. I think they failed the public in that regard.

I can take the cynical view and suggest that this is a tactic by the private company to extract more revenue for itself and the city through issuing parking citations. However, I think it is more likely that someone designed the system without any regard for the population that was expected to use it, and the city fell down on its responsibility to oversee what the parking company was doing.

Failure to Include a Necessary Feature

For the purposes of examining the value of usability research - that is, the research to understand your users and their environment - what does this incident teach? It teaches that failure to perform the research to understand your user population can result in the failure to include a necessary capability, such as a means to pay for your parking with paper money.

What I find interesting (and plausible) is that this parking meter design could have been usability tested and passed the test. The subjects involved in the usability test could have been provided quarters and credit cards, and under those conditions the subjects would have performed admirably. However, the parking meter fails the deployment test because the assumptions regarding the populace, conditions, and environment fail to align with the reality of the needs of the population it should have been designed to serve.

Another Failure: Including the Unnecessary or Unwanted Features   


As I was walking to my destination, I started composing this article. While thinking about what to include, I remembered what a friend of mine said about a system whose development he was in charge of. (I have to be careful about how I write this. He's a friend of mine for whom I have great respect. And defining the set of features that are included in this system is not his responsibility.)


He said that "... we build a system with capabilities that customers neither need nor want." The process for selecting capabilities to include in a product release at this company is an insular process, more echo chamber than outreach to customers or users. As a result, this company has failed to understand its customers and users, their work environment, and so on.

Some might suggest that the requirements-gathering process should reduce the likelihood of either failure occurring - the failure to include necessary features or the inclusion of unnecessary or unwanted ones. Again, I know that in the case of my friend's company, requirements gathering takes its direction largely from competitors instead of customers and/or users. So what often results is the release of a system that fails to include capabilities that customers want and includes capabilities that customers do not want or need.
   
I don't know about you, but I see the process my friend's company engages in as a colossal waste of money and time.  Why would any company use or continue to use such a process?  


Ignorance, Stupidity or Arrogance - Or a combination?



I return to the title of this article, "Design First and Ask Questions Later?", and the question I pose above. I have seen company after company treat design as an end in itself, failing to understand that creating a successful design requires an effective process that includes research and testing. Failure to recognize this costs money and time, and possibly customers. It is not always a good idea to be first in the market with a device or system that includes a trashy user interface.

So why do companies continue to hang on to failing processes? Is it ignorance, stupidity, or arrogance? Is it a combination? My personal experience suggests a combination of all three factors, with the addition of two others: delusion and denial. These are two factors that we saw in operation leading to the financial crisis of 2008. I think people will continue to believe that what they're doing is correct right up until the whole thing comes crashing down.

The Chicago parking meters have a user interface with a poor and inconsiderate design ... inconsiderate of those who would use it. (If I get comments from city officials, it will probably be for that last sentence.) However, I don't believe that the parking meter company will face any major consequences, such as being forced to redesign and redeploy new meters. They will have gotten away with creating a poor design. And they're not alone. There are lots of poorly designed systems; some of the poor designs can be and have been life threatening. Yet there are no major consequences. For medical devices and systems, I believe this needs to change, and I hope the FDA exerts its oversight authority to ensure that it happens.





Medical Device Design: Reader Suggested Books


One of my readers provided me with the following list of books related to usable medical product design. I pass this list of three books on to you. I do not yet have them in my library, but they would be suitable additions.




Saturday, May 1, 2010

HE-75 Topic: Meta Analysis

A "meta-analysis" is an analysis of analyses. Meta-analyses are often confused with a literature search, although a literature search is often the first step in a meta-analysis.

A meta-analysis is a consolidation of similar studies on a single, well-defined topic. Each study may have covered a variety of topics, but to be included in the meta-analysis, each study must have addressed the common topic in depth and collected data regarding it.

The meta-analysis is a well-respected means of developing broad-based conclusions from a variety of studies. (I have included a book on the topic at the end of this article.) If you search the literature, you will note that meta-analyses are often found in the medical literature, particularly in relation to the effectiveness of or problems with medications.

In some quarters, the meta-analysis is not always welcome or respected. Human factors (human engineering) is rooted in experimental psychology, and meta-analyses are not always respected or well received in this community. It is work outside of the laboratory. It is not collecting your own data but using the data collected by others; thus the tendency has been to consider the meta-analysis as lesser.

However, the meta-analysis has a particular strength in that it provides a richer and wider view than a single study with a single population sample. It is true that the studies of others often do not directly address all the issues that researchers could study if they performed the research themselves; in other words, the level and types of research controls were those chosen by the original researchers. But, again, the meta-analysis can provide a richness and a numeric depth that a single study cannot.

Thus the question is, to use or not to use a meta-analysis when collecting data about a specific population?  Should a meta-analysis be used in lieu of collecting empirical data?  

Answer: there are no easy answers. Yes, a meta-analysis could be used in lieu of an empirical analysis, but only if there are enough applicable, recently performed studies. However, I would suggest that when moving forward with a study of a specific target population, the first response should be to initiate a literature search and perform some level of meta-analysis. If the data is not available or is incomplete, then the meta-analysis will not suffice. But a meta-analysis is always a good first step, and a relatively inexpensive one, even if the decision is made to go forward with an empirical study. The meta-analysis will aid in the study's design and data analysis, and it will act as a guide when drawing conclusions.
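For readers unfamiliar with the arithmetic, here is a minimal sketch of the inverse-variance (fixed-effect) pooling at the heart of many meta-analyses. The effect sizes and variances are made up for illustration, and a real meta-analysis must also address heterogeneity, random-effects models, and publication bias.

```python
# Fixed-effect pooling: weight each study by the inverse of its variance,
# then take the weighted mean. Illustrative numbers only.
import math

studies = [
    # (effect size, within-study variance)
    (0.42, 0.04),
    (0.31, 0.09),
    (0.55, 0.02),
]

weights = [1.0 / v for _, v in studies]                        # w_i = 1 / v_i
pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))                      # SE of pooled effect

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```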



Additional Resources

Wednesday, April 21, 2010

HE-75: Collecting Data and Modeling Tasks and Environment

This article expounds on my earlier article related to AAMI HE-75: Know what thy user does and where they do it. 


Collect and Represent the Data


Ideally the first steps in the design process should occur before a design is ever considered.  Unfortunately, in virtually every case I have encountered, a design for the user interface has already been in the works before the steps for collecting user and task related data have been performed.


Nevertheless, if you are one of the people performing the research, do as much as you can to push the design out of your mind and focus on objectively collecting and evaluating the data. And, in your data analysis, follow the data, not your own preconceived notions or those of someone else.


There are a variety of means for collecting data and representing it.  The means for collecting the data will generally involve:
  • Observation - collecting the step-by-step activities as a person under observation performs their tasks.
  • Inquiry - collecting data about a person's cognitive processes.
Once the data has been collected, it requires analysis and representation in a manner that is useful for later steps in the design process. Data representations can include:
  • Task models - summary process models (with variants and edge cases) of how users perform each task. Task models differ from workflow models in that a task model should include no references to specific tools or systems; it should be abstracted and represented at a level without reference to actions taking place on a particular device or system. (A sketch of one such representation follows this list.)
  • Workflows - summary process models (with variants and edge cases) similar to task models but with reference to a particular device or system. For example, if the user interface consists of a particular web page, there should be a reference to that web page and the action(s) that took place.
  • Cognitive models - a representation of the cognitive activities and processes that take place as the person performs a task.
  • Breadth analysis - I have noted that this is often overlooked. Breadth analysis organizes the tasks by frequency of use and, if appropriate, order of execution. This is also the place to represent the tasks that users perform in their work environment that were not directly part of the data collection process.
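To make the task-model idea concrete, here is a hypothetical sketch (the task and step names are invented) of a task model represented as nested steps with variants. Note the absence of any reference to a specific device or screen; that detail belongs in the workflow model.

```python
# Hypothetical task model: nested steps with variants, abstracted away
# from any particular device, system, or screen.
task_model = {
    "task": "Review a patient's overnight monitoring data",
    "steps": [
        {"step": "Identify patients flagged overnight"},
        {"step": "Prioritize flagged patients",
         "variants": ["by alert severity", "by scheduled appointments"]},
        {"step": "Review each patient's data"},
        {"step": "Decide on follow-up",
         "variants": ["no action", "call patient", "escalate to physician"]},
    ],
}

def print_model(model: dict) -> None:
    # Print the task, its steps, and any variants in outline form.
    print(model["task"])
    for s in model["steps"]:
        print("  -", s["step"])
        for v in s.get("variants", []):
            print("     variant:", v)

print_model(task_model)
```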
Detailed Instructions


I cannot hope to provide detailed instructions in this blog. However, I can provide a few pointers. There are published works by leaders in the field on how to collect, analyze, and model the data.

Here are some books that I can recommend; several can be found in my library:


User and Task Analysis for Interface Design by J. Hackos & J. Redish


I highly recommend this book. I use it frequently. For those of us experienced in the profession and with task and user analysis, what they discuss will seem familiar - as well it should. However, what they do is provide clear paths and methods for collecting data from users. The book is well structured and extremely useful for practitioners. I had been using task and user analysis for a decade before this book came out. I found that by owning this book, I could throw away all my notes related to task and user analysis and use this book as my reference.


Motion and Time Study: Improving Work Methods and Management by F. Meyers
Motion and Time Study for Lean Manufacturing (3rd Edition) by F. Meyers & J. R. Stewart


Time and motion study is a core part of industrial engineering as a means to improve the manufacturing process. Historically, time and motion studies go back to Frederick Taylor (http://en.wikipedia.org/wiki/Frederick_Winslow_Taylor), who pioneered this work in the latter part of the 19th century and the early part of the 20th. I have used time and motion studies as a means for uncovering problematic designs. Time and motion studies can be particularly useful when users are engaged in repetitive activities, as a means for improving efficiency, and even as a means for reducing repetitive stress injuries. The first book is in my library; however, it is a bit old (though very inexpensive), so I also include the more recent second book by Meyers (and Stewart). The methods of time and motion study can be considered timeless, so a book published in 1992 can still be valuable.

Time and motion studies can produce significant detail regarding the activities that those under observation perform. However, these studies are time-consuming and, as such, expensive. Nevertheless, they can provide extremely valuable data that can uncover problems and improve efficiency.
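As a sketch of the core arithmetic these books teach: observed cycle times are leveled by a performance rating and then inflated by an allowance for personal time, fatigue, and delays. The times and percentages below are invented for illustration.

```python
# Standard-time calculation from a time study (illustrative numbers only).
observed_times = [12.4, 11.8, 13.1, 12.0, 12.6]   # seconds per cycle, observed
rating = 0.95        # observer's judgment: operator working at 95% of normal pace
allowance = 0.15     # 15% allowance for personal time, fatigue, delays

average_observed = sum(observed_times) / len(observed_times)
normal_time = average_observed * rating            # leveled to a normal pace
standard_time = normal_time * (1 + allowance)      # time allowed per cycle

print(f"average observed: {average_observed:.2f} s")
print(f"normal time:      {normal_time:.2f} s")
print(f"standard time:    {standard_time:.2f} s")
```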


Contextual Design: Defining Customer-Centered Systems (Interactive Technologies) by H. Beyer & K. Holtzblatt, and

Rapid Contextual Design: A How-to Guide to Key Techniques for User-Centered Design (Interactive Technologies) by K. Holtzblatt, J. B. Wendell & S. Wood


The first book I have in my library, but not the second. I had used many of the methods described in Contextual Design before the book was published. The contextual design process is one of the currently "hot" methods for collecting user and task data, and as such, every practitioner should own a copy of this book - at least as a reference.


I believe what's particularly useful about contextual inquiry is that it collects data about activities that are not directly observed but that affect the users and the tasks they perform. For example, clinicians engaged in the remote monitoring of patients often have other duties, many of them patient related. Collecting data exclusively targeting remote monitoring activities (or the activities specific to a targeted device or company) can miss significant activities that impact remote monitoring, and vice versa.


Additional Resources


As a graduate student, I had the privilege of having my education supported by Xerox's Palo Alto Research Center. I was able to work with luminaries of the profession, Tom Moran and Allen Newell, on a couple of projects. In addition, I was able to learn the GOMS model. I have found this model useful in that it nicely blends objectively observed activities with cognitive processes. However, the modeling process can be arduous and, as such, expensive.

Allen Newell and Herbert Simon are particularly well known for their research on chess masters and problem solving, and for their research method, protocol analysis. Protocol analysis has the person under observation verbally express their thoughts while engaged in a particular activity. This enables the observer to collect data about the subject's thoughts, strategies, and goals. This methodology has been adopted by the authors of contextual inquiry, and it is one that I have often used in my research.


The problem with protocol analysis is that it cannot capture cognitive processes that occur beyond the level of consciousness, such as perception. For example, subjects are unable to express how they perceive and identify words, or how they are able to read sentences. These processes are largely automatic and thus not available to conscious introspection. (I shall discuss methods that enable one to collect data involving automatic processes when I discuss usability testing in a later article.) However, protocol analysis can provide valuable data regarding a subject's thoughts, particularly when the person reaches a point where confusion sets in or attempts to correct an error condition.

Here's a link from Wikipedia: http://en.wikipedia.org/wiki/GOMS.
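As a taste of the GOMS family, here is a back-of-the-envelope sketch using the Keystroke-Level Model (KLM), its simplest member. The operator times are the commonly cited textbook values, and the task sequence is an invented example of confirming a value on a touch screen; treat it as an illustration, not a validated model of any real interface.

```python
# KLM estimate: sum the standard operator times over the task's sequence.
OPERATORS = {
    "K": 0.20,   # keystroke or button press
    "P": 1.10,   # point to a target
    "H": 0.40,   # home hands on an input device
    "M": 1.35,   # mental preparation
}

# M P K (find field, point, tap) then M P K (verify, point, tap Confirm)
sequence = ["M", "P", "K", "M", "P", "K"]
estimate = sum(OPERATORS[op] for op in sequence)
print(f"predicted task time: {estimate:.2f} s")   # 5.30 s
```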


Another book that I have in my library, by a former Bell Labs human factors researcher, Thomas K. (TK) Landauer, is The Trouble with Computers: Usefulness, Usability, and Productivity.


This is a fun book. I think it's much more instructive for the professional than Don Norman's book, The Psychology of Everyday Things. (Nevertheless, I place the link to Amazon just the same. Norman's is a good book for professionals in the field to give to family members who ask, "What do you do for a living?")

Tom rails against many of the pressures and processes that push products, systems, and services into the commercial space before they're ready from a human engineering standpoint. Although the book is relatively old, many of the points he makes are more relevant today than when the book was first published. The impulse to design user interfaces without reference to or regard for users has been clearly noted by the FDA, hence the need for HE-75.