Call for Abstracts for 11th Annual International Conference on Stigma

I am excited to share the call for abstracts for a very special conference that bridges community members, practitioners and researchers. This year's conference will be online and spread over the course of a week. It offers a cathartic opportunity for community members to share their experiences and a unique opportunity for researchers to engage directly with community members and practitioners about their work.

 

CALL FOR ABSTRACTS

11th ANNUAL INTERNATIONAL CONFERENCE ON STIGMA

Conference Theme: "Faces of Stigma"

 

Virtually Hosted by Howard University, Monday, November 16, 2020 – Friday, November 20, 2020 (9 AM – 4 PM EST)

Deadline for Submission: Friday, September 18, 2020 by 5:00pm (EST)

 

The goals of this virtual conference are to increase awareness of the stigma of HIV and other health conditions and to explore interventions to eradicate this stigma. The conference also serves to educate healthcare providers and the general public about stigma as both a human rights violation and a major barrier to prevention and treatment of illnesses. We are looking for original research that addresses HIV or other mental or physical health-related stigma to be presented as a VIRTUAL POSTER during the conference's virtual poster session. Abstracts that focus on the effects of intersectional stigma, or how different layers of stigma (e.g., race, poverty, mental health) affect individuals or communities of people, are particularly encouraged. The Best Scientific Abstract Award recipient and the second-place scientific abstract will have the opportunity to provide a BRIEF VIRTUAL PRESENTATION of their work on November 19, 2020, in addition to their participation in the virtual poster session held throughout the week of the conference. Monetary prizes will be given for the top three scientific abstracts. The Best Scientific Abstract Award recipient will receive a $500 prize, the second-place scientific abstract will receive a $200 prize, and the third-place scientific abstract will receive a $100 prize.

Abstract Guidelines:  Submit an abstract, with a maximum of 300 words, to Victoria Hoverman at vicki.hoverman@gmail.com and Shirin Sultana at ssultana@brockport.edu, by 5:00pm (EST) on Friday, September 18, 2020.  Please include the full name, position/job title, affiliation and email address of each contributing author at the top of the page along with the abstract title.  Author information and the abstract title are not included in the 300-word count.  The first author or another presenter must register for the conference if the abstract is accepted.  The first author (or another presenter) of the winning abstracts must virtually attend the conference to receive the prizes.  Students are welcome to submit abstracts and attend the conference!

Notifications will be sent by October 16, 2020.  These are virtual poster presentations only, with the exception of the Best Scientific Abstract Award winner and the second-place scientific abstract winner, who will also give brief virtual oral presentations.

For questions about abstracts, contact Victoria Hoverman at vicki.hoverman@gmail.com and Shirin Sultana at ssultana@brockport.edu.  For general questions about the conference contact Patricia Houston at phouston@howard.edu.

Bridging Research, Community and Practice

I want to share a call for abstracts for a very special conference. It is rare that an event brings together researchers, practitioners and community members:

 

CALL FOR ABSTRACTS

9th ANNUAL INTERNATIONAL CONFERENCE ON STIGMA

Conference Theme: “Bridging Research, Community and Practice”

 

Howard University, Washington, DC, Friday, November 16, 2018 (8 AM – 5 PM)

Deadline for Submission: Friday, September 14, 2018 by 5:00pm (EST)

 

The overarching goals of this conference are to increase awareness of the stigma of HIV and other health conditions and to explore interventions to eradicate this stigma. The conference also serves to educate healthcare providers and the general public about stigma as both a human rights violation and a major barrier to prevention and treatment of illnesses. We are looking for original work that addresses HIV or other health-related stigma (such as mental illness) to be presented as a POSTER during the conference poster session.  The Best Scientific Abstract Award recipient and the second-place scientific abstract will have the opportunity to provide a BRIEF PRESENTATION of their work in addition to the poster session. Monetary prizes will be given for the top three scientific abstracts. The Best Scientific Abstract Award recipient will receive a $500 prize, the second-place scientific abstract will receive a $200 prize, and the third-place scientific abstract will receive a $100 prize.

 

Abstract Guidelines:  Submit an abstract, with a maximum of 300 words, to Victoria Hoverman at vicki.hoverman@gmail.com and Shirin Sultana at shirin.sultana@bison.howard.edu, by 5:00pm (EST) on Friday, September 14, 2018.  Please include the full name, position/job title, affiliation and email address of each contributing author at the top of the page along with the abstract title.  Author information and the abstract title are not included in the 300-word count.  The first author or presenter must register for the conference if the abstract is accepted.  Notifications will be sent by October 15, 2018.  These are poster presentations only, with the exception of the Best Scientific Abstract Award winner and the second-place scientific abstract winner, who will also give brief oral presentations.  The first author of the winning abstracts must attend the conference to receive the prizes (or be willing to let an attending author or other representative accept the prize).  Students are welcome!

 

For questions about abstracts, contact Victoria Hoverman at vicki.hoverman@gmail.com and/or Shirin Sultana at shirin.sultana@bison.howard.edu.  For general questions about the conference contact Patricia Houston at phouston@howard.edu.

The surprising unpredictability of language in use

This morning I received an e-mail from an international professional association that I belong to. The e-mail was in English, but it was not written by an American. As a linguist, I recognized the differences in formality and word use as signs that the person who wrote the e-mail is speaking from a set of experiences with English that differ from my own. Nothing in the e-mail was grammatically incorrect (although as a linguist I am hesitant to judge any linguistic differences as correct or incorrect, especially out of context).

Then later this afternoon I saw a tweet from Twitter on the correct use of Twitter abbreviations (RT, MT, etc.). If the growth of new Twitter users has indeed leveled off, then Twitter is lucky, because the more Twitter grows, the less it will be able to influence the language use of its base.

Language is a living entity that grows, evolves and takes shape based on individual experiences and individual perceptions of language use. If you think carefully about your experiences with language learning, you will quickly see that single exposures and dictionary definitions teach you little, but repeated viewings across contexts teach you much more about language.

Language use is patterned. Every word combination has a likelihood of appearing together, and that likelihood varies based on a host of contextual factors. Language use is also complex: we use words in a variety of ways across a variety of contexts. These facts make language interesting, but they also obscure language use from casual understanding. The complicated nature of language in use trips up analysts who build assumptions about language into their research strategies without realizing that those assumptions would not stand up to careful observation or study.
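To make that patterning concrete, here is a minimal sketch (in Python, with invented example sentences) of the kind of counting that reveals it: tallying which word pairs actually co-occur in a tiny corpus. Even at this scale, a few combinations recur while most appear only once.

```python
# A minimal sketch of patterning in language use: counting which word pairs
# (bigrams) co-occur in a small corpus. The sentences are invented examples.
from collections import Counter

corpus = [
    "thank you so much for your help",
    "thank you for the quick reply",
    "so much of language use is patterned",
]

def bigrams(text):
    words = text.lower().split()
    return zip(words, words[1:])

counts = Counter()
for sentence in corpus:
    counts.update(bigrams(sentence))

# "thank you" and "so much" surface as recurring combinations,
# while the other pairs appear only once.
for pair, n in counts.most_common(5):
    print(pair, n)
```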

I would advise anyone involved in the study of language use (either as a primary or secondary aspect of their analysis) to take language use seriously. Fortunately, linguistics is fun and language is everywhere. So hop to it!

Professional Identity: Who am I? And who are you?

Last night I acted as a mentor at the annual Career Exploration Expo sponsored by my graduate program. Many of the students had questions about developing a professional identity. This makes sense, of course, because graduate school is an important time for discovering and developing a professional identity.

People enter our program (and many others) with a wide variety of backgrounds and interests. They choose from a variety of classes that fit their interests and goals. And then they try to map their experience onto job categories. But boxes are difficult to climb into and out of, and students soon discover that none of the boxes is a perfect fit.

I experienced this myself. I entered the program with an extensive and unquestioned background in survey research. Early in my college years (while I was studying and working in neuropsychology) I began to manage a clinical dataset in SPSS. Working with patients and patient files was very interesting, but to my surprise working with data using statistical software felt right to me, much in the way that Ethiopian meals include injera and Japanese meals include rice (Ohnuki-Tierney 2006 [1997]). I was actually teased by my friends about my love of data! This affinity served me well, and I enjoyed working with a variety of data sets while moving across fields and statistical programming languages.

But my graduate program blew my mind. I felt like I had spent my life underwater and then discovered the sky and continents. I discovered many new kinds of data and analytic strategies, all of which were challenging and rewarding. These discoveries inspired me to start this blog and have inspired me to attend a wide variety of events and read some very interesting work that I never would have discovered on my own. Hopefully followers of this blog have enjoyed this journey as much as I have!

As a recent graduate, I sometimes feel torn between worlds. I still work as a survey researcher, but I'm inspired by research methods that are beyond the scope of my regular work. Another recent graduate of our program who is involved in market research framed her strategy in a way that really resonated with me: "I give my customers what they want and something else, and they grow to appreciate the 'something else.'" That sums up my current strategy. I do the survey management and analysis that is expected of me in a timely, high-quality way. But I am also using my newly acquired knowledge to incorporate text analysis into our data cleaning process, streamlining it, increasing both the speed and the quality of the process, and making it better equipped to handle the data from future surveys. I do the traditional quantitative analyses, but I supplement them with analyses of the open-ended responses that use more flexible text analytic strategies. These analyses spark more quantitative analyses and make for much better (richer, more readable and more inspired) reports.
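For readers curious what that kind of supplementary text work might look like, here is a small, hypothetical sketch: flag likely junk open-ended responses for cleaning and bucket the rest by keyword. The responses, the junk pattern and the theme lists are invented for illustration; this is not the actual workflow from my job, just the flavor of it.

```python
# A hypothetical sketch of a first pass over open-ended survey responses:
# flag likely junk entries for cleaning, then bucket the rest by keyword.
# The responses, junk pattern and theme lists are invented for illustration.
import re

responses = [
    "The registration process was confusing.",
    "asdfgh",
    "Loved the staff, but the wait time was too long.",
    "n/a",
]

JUNK = re.compile(r"^(n/?a|none|asdf\w*|\W*)$", re.IGNORECASE)
THEMES = {
    "process": ["registration", "process", "form"],
    "staff": ["staff", "employee", "representative"],
    "wait time": ["wait", "delay", "slow"],
}

def categorize(text):
    """Return theme labels for a response, or flag it for cleaning."""
    if JUNK.match(text.strip()):
        return ["flag for cleaning"]
    lowered = text.lower()
    hits = [theme for theme, words in THEMES.items()
            if any(w in lowered for w in words)]
    return hits or ["uncategorized"]

for r in responses:
    print(r, "->", categorize(r))
```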

Our goal as professionals should be to find a professional identity that best capitalizes on our unique knowledge, skills and abilities. There is only one professional identity that does all of that, and it is the one you have already chosen and continue to choose every day. We are faced with countless choices about what classes to take, what to read, what to attend, what to become involved in, and what to prioritize, and we make countless assessments about each. Was it worthwhile? Did I enjoy it? Would I do it again? Together, these choices constitute your own unique professional self, a self which you are continually manufacturing. You are composed of your past, your present, and your future, and your future will undoubtedly be a continuation of your past and present. The best career coach you have is inside of you.

Now, your professional identity is much more uniquely and narrowly focused than the generic titles and fields that you see in the professional marketplace. Keep in mind that each job listing that you see represents a set of needs that a particular organization has. Is this a set of needs that you are ready to fill? Is this a set of needs that you would like to fill? You are the only one who knows the answers to these questions.

Because it turns out that you are your best career coach, and you have been all along.

Reflections and Notes from the Sentiment Analysis Symposium #SAS14

The Sentiment Analysis Symposium took place in NY this week in the beautiful offices of the New York Academy of Sciences. The Symposium was framed as a transition into a new era of sentiment analysis, an era of human analytics or humetrics.

The view from the New York Academy of Sciences is really stunning!

Two main points struck me during the event. One is that context is extremely important for developing high quality analytics, but the actual shape that "context" takes varies greatly. The second is a seeming disconnect between the product developers, who are eagerly developing new and better measures, and the customers, who want better usability, more customer support, more customized metrics that fit their preexisting analytic frameworks, and a better understanding of why social media analysis is worth their time, effort and money.

Below is a summary of some of the key points. My detailed notes from each of the speakers can be viewed here. I attended both the more technical Technology and Innovation Session and the Symposium itself.

Context is in. But what is context?

The big takeaway from the Technology and Innovation session, which was then carried into the second day of the Sentiment Analysis Symposium, was that context is important. But context was defined in a number of different ways.

 

New measures are coming, and old measures are improving.

The innovative new strategies presented at the Symposium made for really amazing presentations. New measures include voice intonation, facial expressions via remote video connections, measures of galvanic skin response, self-tagged sentiment data from social media sharing sites, a variety of measures from people who have embraced the "quantified self" movement, metadata from cellphone connections (including location, etc.), behavioral patterning on the individual and group level, and quite a bit of network analysis. Some speakers showcased systems that involved a variety of linked data or highly visual analytic components. Each of these measures increases the accuracy of preexisting measures but also complicates their implementation, bringing new sets of challenges to the industry.

Here is a networked representation of the emotion transition dynamics of 'Hopeful'

This software package is calculating emotional reactions to a YouTube video that is both funny and mean

Meanwhile, traditional text-based sentiment analyses are also improving. Both the quality of machine learning algorithms and the quality of rule-based systems are improving quickly. New strategies include looking at text data pragmatically (e.g., what are common linguistic patterns in specific goal-directed behavior strategies?), gaining domain-level specificity, adding steps for genre detection to increase accuracy, and looking across languages. New analytic strategies are integrated into algorithms, and complementary suites of algorithms are implemented as ensembles. Multilingual analysis is a particular challenge for machine learning techniques, but can be achieved with a high degree of accuracy using rule-based techniques. The attendees appeared to agree that rule-based systems are much more accurate than machine learning algorithms, but the time and expertise they require have caused them to fall out of vogue.
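As a toy illustration of what a rule-based approach involves, here is a sketch with a tiny invented lexicon and a single negation rule. Real systems layer on many more rules (intensifiers, domain lexicons, genre detection, multilingual resources); this is only meant to show why such systems can be accurate but labor-intensive to build.

```python
# A toy rule-based sentiment scorer: a tiny hand-built lexicon plus one
# negation rule. The word lists are invented; real rule-based systems
# encode far more linguistic knowledge, which is where the labor goes.
LEXICON = {"great": 1, "love": 1, "helpful": 1,
           "terrible": -1, "hate": -1, "slow": -1}
NEGATORS = {"not", "never", "no"}

def rule_based_sentiment(text):
    words = text.lower().replace(".", "").split()
    score = 0
    for i, word in enumerate(words):
        value = LEXICON.get(word, 0)
        # Flip polarity if the previous word is a negator ("not helpful").
        if value and i > 0 and words[i - 1] in NEGATORS:
            value = -value
        score += value
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(rule_based_sentiment("The support team was not helpful and very slow"))
# -> negative
```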

 

“The industry as a whole needs to grow up”

I suspect that Chris Boudreaux of Accenture shocked the room when he said "the industry as a whole really needs to grow up." Speaking off the cuff, without his slides after a mishap and adventure, Boudreaux gave the customer point of view toward social media analytics. He said that social media analysis needs to be more reliable, accessible, actionable and dependable. Companies need to move past the startup phase to a new phase of accountability. Tools need to integrate into preexisting analytic structures and metrics, to be accessible to customers who are not experts, and to come better supported.

Boudreaux spoke of the need for social media companies to better understand their customers. Instead of marketing tools to their wider base of potential customers, the tools seem to be developed and marketed solely to market researchers. This has led to a more rapid adoption among the market research community and a general skepticism or ambivalence across other industries, who don’t see how using these tools would benefit them.

The companies that truly value and want to expand their customer base will focus on the usability of their dashboards. This is an area ripe for a growing legion of usability experts and usability testing. These dashboards cannot restrict API access and understanding to data scientist experts. Successful companies will develop, market and support their dashboards through productive partnerships with their customers, generating measures that are specifically relevant to them and personalized dashboards that fit into preexisting metrics and are easy for customers to understand and act on in a practical, personalized way.

Some companies have already started to work with their customers in more productive ways. Crimson Hexagon, for example, employs people who specialize in using their dashboard. These employees work with customers to better understand and support their use of the platform and run studies of their own using the platform, becoming an internal element in the quality feedback loop.

 

Less Traditional fields for Social Media Analysis:

There was a wide spread of fields represented at the Symposium. I spoke with someone involved in text analysis for legal purposes, including jury analyses. I saw an NYPD name tag. Financial services were well represented. Publishing houses were present. Some health-related organizations were present, including neuroscience specialists, medical practitioners interested in predicting early symptoms of diseases like Alzheimer's, medical specialists interested in helping improve the lives of people with conditions like autism (e.g., with facial emotion recognition devices), and pharmaceutical companies interested in understanding medical literature on a massive scale as well as patient conversation about prescriptions and participation in medical trials. There were traditional market research firms and many new startups with a wide variety of focuses and functions. There were also established technology companies (e.g., IBM and Dell) with innovation wings, and many academic departments. I'm sure I've missed many of the entities present or following remotely.

The better research providers can understand the potential breadth of applications of their research, the more they can improve the specific areas of interest to these communities.

 

Rethinking the Public Image of Sentiment Analysis:

There was some concern that “social” is beginning to have too much baggage to be an attractive label, causing people to think immediately of top platforms such as Facebook and Twitter and belying the true breadth of the industry. This prompted a movement toward other terms at the symposium, including human analytics, humetrics, and measures of human engagement.

 

Accuracy

Accuracy tops out at about 80%, because that is the limit of inter-rater reliability in sentiment analysis. Understanding the more difficult data is an important challenge for social media analysts. Analysts need to be honest with customers and with each other about the areas where automated tagging fails. This particular area was a kind of elephant in the room: always present, but rarely mentioned.
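A small worked example may help show why human agreement acts as a ceiling. If two trained raters agree on only about 80% of items, then a machine tagger scored against either rater cannot meaningfully exceed that figure, because beyond it the gold standard itself is in dispute. The labels below are invented.

```python
# Invented labels from two hypothetical human raters on ten items.
rater_a = ["pos", "neg", "neg", "neu", "pos", "pos", "neg", "neu", "pos", "neg"]
rater_b = ["pos", "neg", "neu", "neu", "pos", "neg", "neg", "neu", "pos", "neg"]

# Simple percent agreement: the raters agree on 8 of 10 items.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Inter-rater agreement: {agreement:.0%}")  # 80%
```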

Although an 80% accuracy rate is really fantastic compared to no measure at all, and it is an amazing accomplishment given the financial constraints that analysts encounter, it is not an accuracy rate that works across industries and sectors. It is important to consider the "fitness for use" of an analysis. For some industries, an error is not a big deal. If a company is able to respond to 80% of the tweets directed at it in real time, it is doing quite well. But when real people or weightier consequences are involved, this kind of error rate is blatantly unacceptable. These are the areas where human involvement in the analysis is absolutely critical. Where, honestly speaking, are algorithms performing fantastically, and where are they falling short? In the areas where they fall short, human experts should be deployed, adding behavioral and linguistic insight to the analysis.

One excellent example of fitness for use was the presentation by Capital Market Exchange. This company operationalizes sentiment as expert opinion. They mine a variety of sources for expert opinions about investing, and then format the commonalities in an actionable way, leading to a substantial improvement above market performance for their investors. By valuing the preexisting knowledge structures of their industry, they have gained a great deal of market traction that pure sentiment analysts have not.

 

Targeting the weaknesses

It is important that the field look carefully at the areas where algorithms do and do not work. The areas where they fall short correspond to whole fields of study, many of which have legions of social media analysts at the ready. These include less traditional areas of linguistics, such as Sociolinguistics, Conversation Analysis (e.g., looking at expected pair parts) and Discourse Analysis (e.g., understanding identity construction), as well as Ethnography (with fast-growing subfields, such as Netnography), Psychology and Behavioral Economics. It is time to think strategically in order to better understand the data from new perspectives, and time to more seriously evaluate and invest in neutral responses.

 

Summing Up

Social media data analysis, large-scale text analysis and sentiment analysis have enjoyed a kind of honeymoon period. With so many new and fast-growing data sources, a plethora of growing needs and applications, and a competitive and fast-growing set of analytic strategies, the field has been expanding at an astronomical rate. But this excitement has to be balanced against the practical needs of the marketplace. It is time for growing technologies to better listen to and accommodate the needs of the customer base. This shift will help ensure the viability of the field and free developers up to embrace the spirit of intellectual creativity.

This is an exciting time for a fast growing field!

Thank you to Seth Grimes for organizing such a great event.

 

Free Range Research will cover the Sentiment Symposium in NYC next week #SAS14

Next week Free Range Research will be in NYC to cover the Sentiment Symposium and Innovation session, and I can’t tell you how excited I am about it!

The development of useful analytics hinges on constant innovation and experimentation, and binary positive/negative measures don’t come close to describing the full potential of social media data. This year’s symposium is an effort to confront the limitations of calcified measures of sentiment head on by introducing new measures and new perspectives.

As a programmer, a quantitative and qualitative analyst, a recent academic, and a fervent believer in the power of mixed methods and interdisciplinary research, I am eager to cover the Symposium as both an enthusiastic and a critical voice. The new directions that will be represented are exciting and interesting, and I expect to gain a better feel for many cutting-edge analytic practices. But the proprietary and competitive nature of the social media marketplace has led to countless overblown claims. I do not plan to simply be a conduit for these. My goal will be to share as much as possible of what I learn at the Symposium in a grounded, accessible and timely way, offering counterpoints and data-driven examples when possible, on both my blog and through my Twitter handle @FreeRangeRsrch.

I hope you’ll join me!

 

today in research & zen: “What is known as ‘realizing the mystery’ is nothing more than breaking through to grab an ordinary person’s life” Te-Shan

Planning another Online Research, Offline lunch

I’m planning another Online Research, Offline lunch for researchers in the Washington DC area later this month. The specific date and location are TBA, but it will be toward the end of February near Metro Center.

These lunches are designed to welcome professionals and students involved in online research across a variety of disciplines, fields and sectors. Past attendees have had a wide array of interests and specialties, including usability and interface design, data science, natural language processing, social network analysis, social media monitoring, discourse analysis, netnography, digital humanities and library science.

The goal of this series is to provide an informal venue for a diverse set of researchers to talk with each other and gain a wider context for understanding their work. The lunches are an informal and flexible way for researchers to meet each other, talk and learn. Although Washington DC is a great meeting place for specific areas of online research, there are few informal opportunities for interdisciplinary gatherings of professionals and academics.

Here is a form that can be used to add new people to the list. If you’re already on the list you do not need to sign up again. Please feel free to share the form with anyone else who may be interested:

Medical Errors: detecting them, preventing them, and dealing with their aftermath

This is primarily a report on an event, but I’ve added links, stories and examples to my notes.

The event:

Bioethics lecture on Error: https://www.eventbrite.com/e/conversations-in-bioethics-tickets-10276951639

Brief description: “Join distinguished national experts John James, PhD, former chief toxicologist at NASA and founder of Patient Safety America, Brian Goldman, MD, emergency physician-author and host of the CBC’s White Coat, Black Art, Beth Daley Ullem, MBA, nationally-recognized advocate for patient safety and quality and SFS alum, for a lively discussion and Q&A moderated by Maggie Little, Director of the Kennedy Institute of Ethics.”

At the Beautiful Kennedy Institute, Georgetown University

The follow-up:

For those who are particularly interested in this topic, the Kennedy Institute has an upcoming Bioethics MOOC starting 4/15: http://kennedyinstitute.georgetown.edu/about/news/bioethics-mooc-spring-launch.cfm

Why create this resource?

What follows is a long resource- an in depth summary of the lecture I attended last night, complete with many links to other resources and a few stories and examples. Like the members of this panel, I have experienced a dramatic medical error. In 2012 my mother was on life support after experiencing a period of time with no oxygen to her brain. Her heart had stopped twice, and she was unresponsive. I am her only child, and I had essentially moved into the hospital with her in order to be her advocate. It was my decision whether or not to continue life support, and the main deciding factor was whether or not she was brain dead. She was given an EEG test, and it did not look good. There was a delay before I heard the results of the test, and I spent that delay researching her EEG patterns to try to understand what was going on. The next day the medical staff involved in her case sat me down and told me she was indeed brain dead. It wasn’t until my cousin had announced her passing on Facebook, I was saying my final goodbyes, and my aunt was on the phone with the funeral home that the doctors on the case realized they had miscommunicated. Another patient in another hospital was brain dead, but my mother was not officially brain dead. Her brain activity appeared to be seizure activity, and it wasn’t clear if there was anything else going on. The group apologized, and we were forced to reverse the story and try to explain to friends and family (and ourselves) that she was not actually dead, but she was still very close to it. There was a tide of “Go get em!” cries, which were difficult to deal with when we did indeed have to remove life support a few days later.

After this event, one of the physicians involved in the miscommunication focused my attention on a collaborative project. We began work on a grand rounds presentation for the hospital. We planned to talk about errors in general and, more specifically, what could be learned from this error. I did quite a bit of reading and research. We had some great discussions, and I started to attend a medical discourse group in my graduate linguistics department. At some point I will probably return to the notes from that collaboration and assemble a blog post about them.

It is because of that experience and that project that I assembled the resource below. I sincerely hope that you find it interesting and useful.

Please note that this is based on many pages of notes. Unfortunately my notes did not attribute points to individual panelists. I apologize for that omission.

Prevalence and Detection

An estimated 100,000 lives are lost each year to preventable medical error (according to the landmark 1999 Institute of Medicine report), but the underlying data are old (1984, New York State) and focused on errors of commission. There are many other kinds of errors, including errors of omission, context, diagnosis and communication. Measuring preventable deaths is easier than measuring mistakes overall, but mistakes that do not directly lead to death cause plenty of heartache every day as well.

One more recent attempt to detect medical errors involved isolating common trigger words that accompany medical mistakes in medical files and then having the cases reviewed by medical professionals to see if the deaths were indeed preventable. By this method, the estimate was closer to 210,000 preventable deaths. This method was more comprehensive, but records don’t have the right parameters or standardization to make this process ideal. Some estimates are as high as 440,000 deaths per year.
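As a rough illustration of the trigger-word screening described above, here is a hedged sketch: scan record text for words that often accompany adverse events, and set flagged records aside for clinician review. The trigger list and records are invented, and real chart-review trigger tools are far more structured; this only shows the basic flagging step.

```python
# A hedged sketch of trigger-word screening: flag records whose notes contain
# words often associated with adverse events, for later clinician review.
# The trigger list and records are invented for illustration.
TRIGGERS = {"naloxone", "transfusion", "readmission", "fall", "intubation"}

records = {
    "patient_001": "Routine follow-up, no complications noted.",
    "patient_002": "Unplanned readmission three days after discharge; fall reported.",
    "patient_003": "Naloxone administered following oversedation.",
}

def flag_for_review(note):
    """Return the trigger words found in a free-text note."""
    lowered = note.lower()
    return sorted(t for t in TRIGGERS if t in lowered)

for record_id, note in records.items():
    hits = flag_for_review(note)
    if hits:
        print(record_id, "-> send to clinician review; triggers:", hits)
```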

Regardless of the exact numbers, for physicians there is a near-100% likelihood of making a mistake at some point. This fact alone should change the paradigm from avoiding errors altogether to openly anticipating and working with errors as they happen.

Aftermath

After a medical error occurs, heartache abounds. But contrary to social conventions outside of the medical establishment, contact is often strictly controlled and regulated after the incident, and the physician is rarely able to say “I’m sorry.” This can cause a lack of closure for both the patients and the doctors. The aftermath of one of these errors forms a second layer of trauma for all of those involved.

The first target for any kind of error is often the individual who made the mistake, not the system that enabled it. The system quickly closes around this individual. Hospital risk management sets in. Privacy walls are erected, and it becomes very difficult to take responsibility for one's errors. System and culture clash together in a perfect storm, resulting in ill-advised words and actions on the part of those involved. At such a sensitive time, the words of care providers are often burned into the minds of the deceased patient's advocates and family members. Blame is often tossed around indiscriminately. The survivors are often left feeling confused. One of the panelists remembers her physician counseling her with "I really don't know why God needed your baby more than you did."

The medical providers at this point are isolated from their patients and often prohibited from discussing these incidents with each other. At such a vulnerable moment, they are left to deal with it alone, taking each incident as a private failure when mistakes are a universal human condition. If other providers hear about the incident, they will often exacerbate the problem by not making eye contact, demonstrating their vicarious shame, reinforcing the problem as a repudiation of all a doctor is supposed to be.

System-level Problems

The medical system is large and complicated enough to really enable errors. There are so many medical professionals, patients, laypeople and touchpoints, and the body itself is quite a complicated system- some of which is better understood and some of which is still largely undocumented territory. The medical system is evolving fast from the mom and pop doctors of the past to the large complexes of today. The modern medical system has its hand in businesses that no one would have imagined before. Some hospitals boast dental facilities, nursing homes, outpatient clinics, and even foster care facilities. The changing rules for insurance payments and the increasing role of legal actors also have a significant influence on the system.

In order for hospitals to make money, many end up adjusting patient care ratios. Some stretch these ratios to the breaking point, putting medical staff in a position where they can barely keep up. The pressure for productivity is much higher now than it was in the recent past. Many facilities are over capacity, and space is at a premium. This can put medical staff in an awkward position where there are constant workarounds and makeshift solutions. These kinds of problems can lead to errors of context. The same patient may be treated differently in the ambulatory care area of a wing than in its rapid assessment area. In the words of one panelist, "geography is destiny in the E.R." Movement in space within a medical facility is both physical and cognitive.

Scheduling is also a huge issue in medical facilities. Long stretches of work without sleep are a well-known precursor to many medical errors.

Technology

Technology is integral to the modern medical system and has saved many lives. But technology training and interface design are extremely important. One panelist reported that a medical professional confessed to him, years after his son's preventable death, that the MRI machine was new and no one on hand knew how to use it properly. Others have reported on the influence of signal fatigue: it is very hard, amid a constant stream of signals, to ferret out the most important among them.

Technology was a real point of frustration for me when I had my first child. I was induced in the evening and felt increasingly strong contractions all night. When the nurses came to check on me, I reported that I was in labor, but the pattern on the monitor was not consistent with what they would call labor. Once I started to push, I called them back and requested an exam, and fortunately, although my doctor and the doctor on call were not available, the nurses were qualified to catch the baby.

Medical culture

One of the panelists told the story of a physician who began his shift by calling together his team, warning them that he did not get much sleep the night before, and asking them to watch his back a bit more closely than usual. This runs starkly contrary to typical medical enculturation. Medical culture makes it harder to admit mistakes or to be human. One panelist commented, "We're very defensive about our mistakes." This is emblematic of a culture that can't handle its own humanity. This repulsion by error is compounded by a system that doesn't comment but rather expects good performance. The "no news is good news" ethic means that a physician can go his or her entire career without ever hearing any feedback, and that silence is taken as a good sign.

In medicine, the smartest person in the room quickly becomes the person in charge. One of the panelists, Brian, "didn't want to be a high-maintenance student" as a resident by asking too many questions or requesting help too often. This attitude proved fatal for one of his patients. Errors are a reminder of human fallibility, and medical professionals are supposed to be infallible. Brian talked more about this in a TED talk. In it, he spoke of batting averages. We assume that error is a natural part of other jobs, but what is an acceptable batting average for a surgeon? A mistake can mean that one was lazy or incompetent or had a lapse. Which one does the physician want to admit to? None! Instead, after one mistake happens, physicians often live in terror that another will soon follow. One panelist said the words he most fears as a medical professional are "Do you remember?"

Instead of the culture of shame and blame, we could benefit from being scientific about error: exhibiting genuine curiosity about errors, measuring them, and developing and testing treatments for them. One panel member mentioned a surgeon who developed a kind of flight data recorder for surgery: http://www.icee-con.org/papers/2008/pdf/O-100.pdf . Apparently this surgeon has been dubbed “the most dangerous man in surgery.”

Isolation and selective training

People are trained in the context of the settings where they have worked. Different settings see different kinds of challenges. Shouldn’t there be a better system for sharing challenges and solutions across institutions?

Handwriting

It is pretty incredible that such a high stakes field rests on human handwriting. This is made worse by the lack of value placed on making handwriting legible and on the decreasing abilities of a technologically savvy population to decipher human handwriting. How many of you can read cursive?

Science or Art?

One interesting aspect of medicine is the way it is a field composed of scientists who view themselves as artists. This is evident in the total lack of standardization in medical care. You will have a different experience, even with the same condition, across locations and providers. Even within a single hospital, individual doctors act as subcontractors, providing individualized service as only they can, despite the common environment. Sometimes standards or guidelines are set for specific areas of medicine with the goal of instituting consistency. But the adoption of these standards, and attitudes toward them, are far from universal. The standards take shape differently across locations and providers.

The panel members mentioned the success of VA hospitals in this area. They are better at standardization. Vertically integrated healthcare can be much more progressive.

Areas for improvement

So what kind of changes would improve the system? Some prominent authors liken error models to those in the aviation industry. This is tricky, because medicine is far more complicated than aviation, although both are high-stakes fields that require inhuman levels of perfection from human actors. But even if the systems are different, they can still learn from each other.

Atul Gawande is the well-known author of The Checklist Manifesto. He has been advocating for many of the checklists and safety features that are standard in the aviation industry to be applied to medicine. He also wrote a piece about what medicine can learn from The Cheesecake Factory.

Some suggested areas for improvement include instituting redundancies, collapsing hierarchies and moving toward patient-centered care.

One panel member was involved with error prevention at more of a business level. She mentioned the power of adding redundancies. Adding redundancies should be common practice, and is common practice in other high-stakes fields. Redundancies should be worked into routines and checks, although models of modern efficiency seem to be moving away from them. She also mentioned the powerful potential of dashboards and the importance of comparative information. One great example of the power of comparative information is "Solutions for Patient Safety" (http://www.solutionsforpatientsafety.org/), a group of 78 pediatric hospitals that share a common dashboard. Using the dashboard, the hospitals can see how they stand in terms of infections and other errors compared to the rest of the network. It's a teaching model: the best teach the rest about the measures they're using to combat each problem. The panelist mentioned that we buy healthcare products without comparative information, but information on dashboards can really increase accountability.

Collapsing hierarchies would make it more culturally acceptable to report medical errors. This could also be augmented through multidisciplinary peer reviews, involving everyone from providers across medical specialties and training to janitors and other people present at the time of care.

One of the panelists wrote a patient bill of rights. An audience member commented on the need for patients to feel more powerful and have more power in medical situations. He noted that the playing field between doctor and patient is inherently unequal. As soon as you remove your clothes and put on the patient smock, you begin to feel powerless. He noted that some medical providers will take advantage of that vulnerability. The foundation of patient-centered care is informed consent. If you don't understand your options, you cannot make an informed choice.

One specific example of an area where patients are unable to make informed decisions is off-label prescriptions. Drugs are often prescribed off-label, meaning that the patient is not part of the population for which the drug was tested. This was the case for me when my first child was born and I was induced with Cytotec. When she was born, a healthy 8 lb 3 oz baby, she aspirated meconium and ended up in the NICU while I was treated for hemorrhage. I knew nothing of the drug or the potential consequences. In fact, I had chosen an unmedicated childbirth and eschewed interventions altogether.

Another example of an area where patients can't always make informed decisions is cost. There has been quite a bit of buzz lately about the ridiculous hospital bills patients receive upon discharge. Having seen some of those bills, I can't tell you how paranoid I am about any supplies used on myself or my kids in the E.R. A close friend of mine recently had an incident where an inexpensive scheduled dentist appointment turned into over $2,000 in charges, due immediately. That incident led to an extensive series of phone calls between me and the dental office, debating consent.

An audience member spoke about the importance of patient advocates. Apparently there is a growing business of professional patient advocates. I think that this is wonderful, because historically the only qualification necessary for a patient advocate was that they not be the patient. I've had the experience of reading transcripts of doctor-patient visits that included advocates. Certainly not all advocates are built alike! This role is explored more deeply in the book "High Performance Healthcare."

Opportunities for Linguists

There are two main applications for linguistics that are most evident in this discussion. One is the potential for computational linguists and natural language processing experts to mine the textual data available in electronic health records as they become increasingly available. The other is the opportunity for discourse analysts to conduct research on the actual communication between everyone involved. Discourse analysts can both develop and institute more structured protocols, such as the double verification before certain medications and procedures, and raise awareness regarding instances when less than optimal communication styles can lead to mix-ups or other mistakes. Discourse analysts who specialize in apologies could be particularly effective advisors in training medical professionals to talk with patients and their advocates and family following medical errors. This is a strong interest of mine, and I'm lucky enough to attend regular medical discourse discussion groups with the head of my graduate department, Heidi Hamilton. Her work is a real treasure trove of medical discourse, well worth investigating further.

On a personal note, it is also very healing for victims and survivors to build narratives around these incidents that help to give them a wider context and meaning. I wrote about that process here: https://freerangeresearch.com/2012/05/22/ot-on-loss-and-grief-and-the-power-of-storytelling/

You may notice that I decided at that point not to give the medical error a place in my mom’s story. That was an important decision for me that helped me to heal.

Moving on

The three panelists had all lost people due to medical errors. I’ve also been the victim of medical errors. We were able to find some healing in the process of going deeper into the errors and the medical system that enabled them. You have also probably suffered in some way as the result of a medical error. It is also important to note that all of us have also had our lives made better by medicine at some point, and we probably also all know people whose lives were saved by medicine. It is an imperfect system, but it is a system with a lot of strengths.

Great readings that might shake you to your academic core? I’m compiling a list

In the spirit of research readings that might shake you to your academic core, I’m compiling a list. Please reply to this thread with any suggestions you have to add. They can be anything from short blog posts (microblog?) to research articles to books. What’s on your ‘must read’ list?

Here are a couple of mine to kick us off:

 

Charles Goodwin’s Professional Vision paper

I don't think I've referred to any paper as much as this one. It's about the way our professional training shapes the way we see the things around us. Shortly after reading this paper, I was in the gym thinking about commonalities between the weight stacks and survey scales. I expect myself to have a certain relative strength, and when that expectation doesn't match where I need to place my pin, I'm a little thrown off.

It also has a deep analysis of the Rodney King verdict.

 

Revitalizing Chinatown Into a Heterotopia by Jia Lou

This article is based on a geosemiotic analysis of DC's Chinatown. It is one of the articles that helped me to see that data really can come in all forms.

 

After Method: Mess in Social Science Research by John Law

This is the book that inspired this list. It also inspired this blog post.

 

On Postapocalyptic Research Methods and Failures, Honesty and Progress in Research

I’m reading a book that I like to call “post-apocalyptic research methodology.” It’s ‘After Method: Mess in Social Science Research’ by John Law. At this point the book reads like a novel. I can’t quite imagine where he’ll take his premise, but I’m searching for clues and turning pages. In the meantime, I’ve been thinking quite a bit about failure, honesty, uncertainty and humility in research.

How is the current research environment like a utopian society?

The research process is often idealized in public spaces. Whether the goal of the researcher is to publish a paper based on their research, present to an audience of colleagues or stakeholders about their research, or market the product of their research, all researchers have a vested interest in the smoothness of the research process. We expect to approach a topic, perform a series of time-tested methods or develop innovative new methods with strong historical traditions, apply these methods as neatly as possible, and end up with a series of strong themes that describe the majority of our data. However, in Law's words: "Parts of the world are caught in our ethnographies, our histories and our statistics. But other parts are not, and if they are then this is because they have been distorted into clarity." (p. 2) We think of methods as a neutral middle step and not a political process, and this way of thinking allows us to focus on reliability and validity as surface measures and not inherent questions. "Method, as we usually imagine it, is a system for offering more or less bankable guarantees." (p. 9)

Law points out that research methods are, in practice, very limited in the social sciences: "talk of method still tends to summon up a relatively limited repertoire of responses." (p. 3) Law also points out that every research method is inherently political. Every research method involves a way of seeing or a way of looking at the data, and that perspective maps onto the findings it yields. Different perspectives yield different findings, whether they are subtly or dramatically different. Law's central assertion is that methods don't just describe social realities but also help to create them. Recognizing the footprint of our own methods is a step toward better understanding our data and results.

In practice, the results that we focus on are largely true. They describe a large portion of the data, ascribing the rest of the data to noise or natural variation. When more of our data is described in our results, we feel more confident about our data and our analysis.

Law argues that this smoothed version of reality is far enough from the natural world that it should give us pause. Research works to create a world that is simple, falls into place neatly, and resembles nothing we know: "'research methods' passed down to us after a century of social science tend to work on the assumption that the world is properly to be understood as a set of fairly specific, determinate, and more or less identifiable processes." (p. 5) He suggests instead that we should recognize the parts that don't fit, the areas of uncertainty or chaos, and the areas where our methods fail. "While standard methods are often extremely good at what they do, they are badly adapted to the study of the ephemeral, the indefinite and the irregular." (p. 4) "Regularities and standardizations are incredibly powerful tools, but they set limits." (p. 6)

Is the Utopia starting to fall apart?

The current research environment is a bit different from that of the past. More people are able to publish research at any stage without peer review, using media like blogs. Researchers are able to discuss their research while it is in progress using social media like Twitter. There is more room to fail publicly than there ever has been before, and this allows for public acknowledgment of some of the difficulties and challenges that researchers face.

Building from ashes

Law briefly introduces his vision on p. 11: "My hope is that we can learn to live in a way that is less dependent on the automatic. To live more in and through slow method, or vulnerable method, or quiet method. Multiple method. Modest method. Uncertain method. Diverse method."

Many modern discussions about management talk about the value of failure as an innovative tool. Some of the newer quality control measures in aviation and medicine hinge on the recognition of failure and the retooling necessary to prevent or limit the recurrence of specific types of events. The theory behind these measures is that failure is normal and natural, and we could never predict the many ways in which failure could happen. So, instead of exclusively trying to predict or prohibit failure, failures should be embraced as opportunities to learn.

Here we can ask: what can researchers learn from the failures of the methods?

The first lesson to accompany any failure is humility. Recognizing our mistakes entails recognizing areas where we fell short, where our efforts were not enough. Acknowledging that our research training cannot be universal, that applying research methods isn’t always straightforward and simple, and that we cannot be everything to everyone could be an important stage of professional development.

How could research methodology develop differently if it were to embrace the uncertain, the chaotic and the places where we fall short?

Another question: What opportunities do researchers have to be publicly humble? How can those spaces become places to learn and to innovate?

Note: This blog post is dedicated to Dr. Jeffrey Keefer @ NYU, who introduced me to this very cool book and has done some great work to bring researchers together.