That which is bigger than us

We learn about things that are bigger than ourselves in layers, and we accomplish tasks that are bigger than ourselves one step at a time.


In college, this came as a revelation to me. Knowledge was not something to learn, memorize and stand atop; it was something created in pieces. Knowledge came to be about process.


In graduate school, I again came to relish the mystery of the analytic process through the activities of conversation analysis and discourse analysis. Over and again, I began with a small piece of data, like a conversation or a snippet of video, and watched it come to life through rounds of observation. Something that began as a digestible piece that wouldn’t necessarily attract attention became a multilayered journey into all of the pieces that comprise the situated social actions we make every day.


As a parent, I learned almost immediately that parenthood was bigger than me. I learned that I couldn’t be, do or know it all, and I learned that the choices and priorities I made dramatically governed the shape of my family. I learned that I could not be perfect in anyone’s eyes, and I could never measure up to every standard by which I was being measured. I learned that I was ultimately responsible for something I valued more than I had imagined possible, and that I ultimately had to accept and embrace my unique approach to the task. I could only strive to be a parent in the ways in which I was capable, and I could never fit anyone else’s vision. I learned that my shortcomings had to be a bridge of understanding to other parents, who also found themselves unequal to the task at hand.


In my professional life, I’ve learned to relish the possibilities and opportunities that teamwork can bring. As a team we can achieve far more and greater things than we could ever achieve as individuals, and that which we can accomplish can be an inspiration. As a manager, the most I could wish for is a team that is inspired by process and by potential, one that can love the work and love the product of that work.


Ultimately, that’s what I wish for all things that are bigger than myself. Inspiration, pride, a love of the journey and the process: to love life and be surrounded by others who love life, in all its complications, challenges, ups and downs.


But all of this talk of inspiration neglects the other side of things that are bigger than us. When we make choices of where to focus our time and energy, other elements are always neglected. As a parent, I have to remind myself that I may not be a go-to mom at bake sale time, but I have other qualities to offer my kids. Even as we work to get things done there is always an undercurrent of things not getting done. And there are times when the journey ahead is more daunting than inspiring. There are the moments when all of the work we’ve accomplished becomes undone before our eyes. There are the toddlers behind us as we clean, some more and some less metaphorical, dumping toys and laughing. And there are the mountains ahead that seem to be too big to climb.


There is a TED talk that has been making the rounds lately about emotional hygiene. In it, the speaker talks about how we handle failure and disappointments. We all encounter failures and disappointments, small and large, every day. We conquer our to-do lists one day, only to see them build back up the next day. Sometimes our hard work is unrecognized. Sometimes our efforts are not enough. It’s one thing to love process, but what do we do when the process can’t fit the task ahead? How do we handle ambiguity? To ignore these challenges is to undercut the complicated texture of life.

I believe that part of embracing life is to embrace the mess; to embrace that which is bigger than ourselves; to keep feeling around in the darkness until we find our way; to have faith that there is a path through the darkness; to continually double back to our rocks; to embrace the challenges and embrace the core that guides us through them; to recognize the downs and the ups, and to know where within ourselves to find the strength to persevere. These moments, these challenges allow us to hear, see and do that which is much grander than what we could hear, see or achieve alone or in any immediate sense. These are the elements that give depth to our lives. These are the challenges that define our lives and make life worth living.

Understanding news consumption and production can be like understanding the air we breathe

A careful, systematic look at the way you encounter news might just dramatically change your understanding of the genre. Here are some observations about creating and consuming news in our current information ecosystem.

Creating News

News is not one size fits all, and news methodology can’t be one size fits all. This is probably a well-known fact to people with more of a journalism background, but it is often overlooked by people who are newer to the field. Here are a few points that stem from these differences:

– Social media can be a great source for information about breaking events that have a critical base of witnesses with internet access.

– Social media is no substitute for news about events that have very few witnesses or where access to information is privileged.

– The core job of newsmakers is to keep the public informed about unfolding events. Oftentimes newsmakers are as invisible to their audiences as the people who develop dictionaries are. The audience assumes that the major events they see covered are objectively the most major events, often without any understanding of the curation involved. Newsmakers provide a vital public service and have a moral obligation to the public, but that obligation is far from straightforward.

– News consumers may choose to engage most deeply in the topics they are most interested in, but that doesn’t invalidate a basic desire to know what’s going on in the world. This is why I like to advocate for eye tracking as an engagement metric: the current tracking metrics don’t reflect the most basic function of the news media.


Consuming News

News exposure is seamlessly integrated into our daily experiences. As a child, I would watch multiple newscasts with my mom, and we would both scan the newspapers regularly. As a new parent, I visited multiple websites to collect news from different perspectives and regularly watched multiple newscasts; this seemed like an essential tie between the small world of new parenthood and the larger world outside my door. But these days I work long hours and rarely catch newscasts or have time to visit multiple news sites. Someone recently asked me which news outlets I follow, and I was surprised that the answer didn’t come very readily to me. I’ve been making a careful effort to observe my contact with news stories, outlets and journalists, and I highly recommend this exercise to anyone interested in understanding or measuring media use.

Here is some of what I’ve observed:

– Twitter is the first platform I think of when I think of news. I think of it as my own curated stream of news amidst the wider raging river of information flow. But when it comes to news stories in particular, I often hear about them not because I seek them out or curate them but because my streams are based on people who have a variety of interests. I hear about emerging news because people go off-topic in their Twitter streams, not because I seek it out. I often value this dynamic as a kind of filter of its own, because major events enter my stream from a variety of perspectives, but the majority of news does not.

– Re: Interest-based streams- I mostly follow researchers on Twitter. As a result, I can follow conferences as they happen or read interesting articles as they come out. Is this news? What makes it news?

– Platforms morph based on the way people use them. See @clintonyates’ Twitter feed for an example of a journalist using Twitter to tell resonant stories in a unique way that defies traditional uses of the platform.

– Re: Instagram- I love to follow Instagrammers because I really love photography. Some of the instagrammers I follow are photojournalists. This is an area of news coverage that is rarely considered in depth. And sometimes I wonder whether these pictures are only news if they contain, and I read, captions explaining their context and importance.

– Facebook is often discussed as a news source, but it is very important when discussing Facebook as a news source to consider the social context of information. I will share news from news sources only if I think it is something I can share without harming valued personal relationships with people across many ideological spectra and backgrounds. That said, some of my friends will regularly share the pieces that I choose not to. When I see those articles from these friends, I will put the articles in the context of what I’ve seen from those people in the past, my patterns with them regarding the topic, and my social patterns with them in general.

– It is important to recognize that news items on Facebook can come from news sources, interest groups or pages, interested people, or simply from Facebook. The source interacts with the platform to create the stimulus.

– Re: other fora- There are many more news sources that I follow to varying degrees. I receive research updates and daily briefings from Pew and Nielsen, which I read with varying frequency (the only one I read every day is the Daily Briefing from the Pew Journalism Project). I also receive e-mails from research and technical lists, lists about STEM education, community lists, blog notifications and emails from LinkedIn. I read the Sunday paper, and weekly updates from my employer, and I regularly hear and participate in discussions in my workplace and outside of it. Each of these is a potential news source that may bring in other news sources.

– These sources listed together may appear to amount to a critical mass of time, but I was not aware of that critical mass until I stopped to observe it. Our choices and actions regarding media consumption are as unconscious as many of the other choices we make with our time.

All of this is to say that news is as seamlessly integrated into my environment as the air I breathe, and it stems from sources of all kinds. Every story has a different way of intersecting with and co-creating my own. Whereas news media has a particularly strong history of top-down, one-way dissemination, it is now more ubiquitous, more multi-directional and more a part of our ecosystem than ever before. We are consumers and participants in very different ways, and understanding these is a key to understanding and developing tools for news in the future.


* A side note re: pay to read. My advice to news outlets is to find a way to integrate pre-existing online funding resources (like Amazon, PayPal, etc.) in a collective or semi-standardized way, so that people don’t have to provide financial information to anyone new, and so that people can pay small fees (e.g. 25 cents for a long read or something that required a good deal of expense to produce, 5 or 10 cents for smaller or shorter pieces) with a single click and pay as they go to read around a variety of sources.

The surprising unpredictability of language in use

This morning I received an e-mail from an international professional association that I belong to. The e-mail was in English, but it was not written by an American. As a linguist, I recognized the differences in formality and word use as signs that the person who wrote the e-mail is speaking from a set of experiences with English that differ from my own. Nothing in the e-mail was grammatically incorrect (although as a linguist I am hesitant to judge any linguistic differences as correct or incorrect, especially out of context).

Then later this afternoon I saw a tweet from Twitter on the correct use of Twitter abbreviations (RT, MT, etc.). If the growth of new Twitter users has indeed leveled off, then Twitter is lucky, because the more Twitter grows, the less it will be able to influence the language use of its base.

Language is a living entity that grows, evolves and takes shape based on individual experiences and individual perceptions of language use. If you think carefully about your experiences with language learning, you will quickly see that single exposures and dictionary definitions teach you little, but repeated viewings across contexts teach you much more about language.

Language use is patterned. Every word combination has a likelihood of appearing together, and that likelihood varies based on a host of contextual factors. Language use is complex. We use words in a variety of ways across a variety of contexts. These facts make language interesting, but they also obscure language use from casual understanding. The complicated nature of language in use interferes with analysts who build assumptions about language into their research strategies without realizing that their assumptions would not stand up to careful observation or study.
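The claim that every word combination has a likelihood of appearing is easy to demonstrate with a toy sketch (the sample sentence here is invented; a real study would count over a large corpus):

```python
# Toy illustration of patterned language: counting bigram frequencies.
from collections import Counter

tokens = "strong tea strong coffee powerful computer strong tea".split()
bigrams = Counter(zip(tokens, tokens[1:]))

# "strong tea" recurs, while "powerful tea" never appears, even though
# "powerful" and "strong" are near-synonyms -- a classic collocation effect.
top_pair, top_count = bigrams.most_common(1)[0]
```

Collocation measures used in corpus linguistics (such as pointwise mutual information) build on exactly these kinds of counts.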

I would advise anyone involved in the study of language use (either as a primary or secondary aspect of their analysis) to take language use seriously. Fortunately, linguistics is fun and language is everywhere. So hop to it!

Reporting on the AAPOR 69th national conference in Anaheim #aapor

Last week AAPOR held its 69th annual conference in sunny (and hot) Anaheim, California.

Palm Trees in the conference center area

My biggest takeaway from this year’s conference is that AAPOR is a very healthy organization. AAPOR attendees were genuinely happy to be at the conference, enthusiastic about AAPOR and excited about the conference material. Many participants consider AAPOR their intellectual and professional home base and really relished the opportunity to be around kindred spirits (often socially awkward professionals who are genuinely excited about our niche). All of the presentations I saw firsthand or heard about were solid and dense, and the presenters were excited about their work and their findings. Membership, conference attendance, journal and conference submissions and volunteer participation are all quite strong.


At this point in time, the field of survey research is encountering a set of challenges. Nonresponse is a growing challenge, and other forms of data and analysis are increasingly en vogue. I was really excited to see that AAPOR members are greeting these challenges and others head on. For this particular write-up, I will focus on these two challenges. I hope that others will address some of the other main conference themes and add their notes and resources to those I’ve gathered below.


As survey nonresponse becomes more of a challenge, survey researchers are moving from traditional measures of response quality (e.g. response rates) to newer measures (e.g. nonresponse bias). Researchers are increasingly anchoring their discussions about survey quality within the Total Survey Error framework, which offers a contextual basis for understanding the problem more deeply. Instead of focusing on an across-the-board rise in response rates, researchers are strategizing their resources with the goal of reducing nonresponse bias. This includes understanding response propensity (Who is likely not to respond to the survey? Who is most likely to drop out of a panel study? What are some of the barriers to survey participation?), looking for substantive measures that correlate with response propensity (e.g. Are small, rural private schools less likely to respond to a school survey? Are substance users less likely to respond to a survey about substance abuse?), and continuously monitoring paradata during the collection period (e.g. developing differential strategies by disposition code, focusing the most successful interviewers on the most reluctant cases, or concentrating collection strategies where they are expected to be most effective). This area of strategizing emerged in AAPOR circles a few years ago with discussions of nonresponse propensity modeling, a process which is much more accessible than it sounds, and it has evolved into a practical and useful tool that can help a research shop of any size increase survey quality and lower costs.
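As a rough sketch of the idea (the subgroup names and numbers below are invented for illustration, not drawn from any real survey), response propensity can be approximated in its simplest form by subgroup response rates:

```python
# Hedged sketch: approximating response propensity by subgroup response
# rates. A real propensity model would use logistic regression on frame
# variables; all cases here are hypothetical.
cases = [
    # (school_size, is_rural, responded)
    ("small", True, False), ("small", True, False), ("small", True, True),
    ("small", False, True),
    ("large", True, True), ("large", True, True),
    ("large", False, True), ("large", False, False), ("large", False, True),
]

counts = {}
for size, rural, responded in cases:
    n, k = counts.get((size, rural), (0, 0))
    counts[(size, rural)] = (n + 1, k + responded)

propensity = {key: k / n for key, (n, k) in counts.items()}
# Low-propensity subgroups (small rural schools in this toy example) are
# the candidates for targeted follow-up during collection.
```

The same logic scales up: replace the subgroup tabulation with a fitted model, score every sampled case, and concentrate follow-up resources on the low-propensity cases.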


Another big takeaway for me was the volume of discussions and presentations that spoke to the fast-emerging world of data science and big data. Many people spoke of the importance of our voice in the realm of data science, particularly with our professional focus on understanding and mitigating errors in the research process. A few practitioners applied error frameworks to analyses of organic data, and some talks were based on analyses of organic data. This year AAPOR also sponsored a research hack to investigate the potential for Instagram as a research tool for Feed the Hungry. These discussions, presentations and activities made it clear that AAPOR will continue to have a strong voice in the changing research environment, and the task force reports and initiatives from both the membership and education committees reinforced AAPOR’s ability to be right on top of the many changes afoot. I’m eager to see AAPOR’s changing role take shape.

“If you had asked social scientists even 20 years ago what powers they dreamed of acquiring, they might have cited the capacity to track the behaviors, purchases, movements, interactions, and thoughts of whole cities of people, in real time.” – N.A.  Christakis. 24 June 2011. New York Times, via Craig Hill (RTI)


AAPOR is a very strong, well-loved organization, and it is building a strong future on a solid foundation.





MORE DETAILED NOTES:

This conference is huge, so I could not possibly cover all of it on my own; I will try to share my notes as well as the notes and resources I can collect from other attendees. If you have any materials to share, please send them to me! The more information I am able to collect here, the better a resource it will be for people interested in AAPOR or the conference.


Patrick Ruffini assembled the tweets from the conference into this Storify.


Annie, the blogger behind LoveStats, had quite a few posts from the conference. I sat on a panel with Annie on the role of blogs in public opinion research (organized by Joe Murphy for the 68th annual AAPOR conference), and Annie blew me away by live-blogging the event from the stage! Clearly, she is the fastest blogger in the West and the East! Her posts from Anaheim included:

Your Significance Test Proves Nothing

Do panel companies manage their panels?

Gender bias among AAPOR presenters

What I hate about you AAPOR

How to correct scale distribution errors

What I like about you AAPOR

I poo poo on your significance tests

When is survey burden the fault of the responders?

How many survey contacts is enough?


My full notes are available here (please excuse any formatting irregularities). Unfortunately, they are not as extensive as I would have liked, because wifi and power were in short supply. I also wish I had settled into a better seat and covered some of the talks in greater detail, including Don Dillman’s talk, which was a real highlight of the conference!

I believe Rob Santos’ professional address will be available for viewing or listening soon, if it is not already available. He is a very eloquent speaker, and he made some really great points, so this will be well worth your time.


Let’s talk about data cleaning

Data cleaning has a bad rep. In fact, it has long been considered the grunt work of the data analysis enterprise. I recently came across a piece of writing in the Harvard Business Review that lamented the amount of time data scientists spend cleaning their data. The author feared that data scientists’ skills were being wasted on the cleaning process when they could be using their time for the analyses we so desperately need them to do.

I’ll admit that I haven’t always loved the process of cleaning data. But my view of the process has evolved significantly over the last few years.

As a survey researcher, my cleaning process used to begin with a tall stack of paper forms. Answers that did not make logical sense during the checking process sparked a trip to the file folders to find the form in question. The forms often held physical evidence of indecision on the part of the respondent, such as eraser marks or an explanation in the margin, which could not have been reflected properly by the data entry person. We lost this part of the process when we moved to web surveys. It sometimes felt like a web survey left the respondent no way to communicate with the researcher about their unique situations. Data cleaning lost its personalized feel and detective-story luster and became routine and tedious.

Despite some of the affordances of the move to web surveys, much of the cleaning process stayed rooted in the old techniques. Each form has its own id number, and the programmers would use those id numbers for corrections:

if id=1234567, set var1=5, set var7=62

At this point a “good programmer” would also document the changes for future collaborators:

*this person was not actually a forest ranger, and they were born in 1962
if id=1234567, set var1=5, set var7=62

Making these changes grew tedious very quickly, and the process seemed to drag on for ages. The researcher would check the data for potential errors, scour the records that could hold those errors for any kind of evidence of the respondent’s intentions, and then handle each form one at a time.

My techniques for cleaning data have changed dramatically since those days. My goal now is to use id numbers as rarely as possible and instead to ask myself questions like “how can I tell that these people are not forest rangers?” The answers to these questions evoke a subtly different technique:

* these people are not actually forest rangers
if var7=35 and var1=2 and var10 contains ‘fire fighter’, set var1=5

This technique requires honing and testing (adjusting the precision and recall), but I’ve found it to be far more efficient, more comprehensive and, most of all, more fun (oh hallelujah!). It makes me wonder whether we have perpetually undercut the quality of the data cleaning we do simply because we hold the process in such low esteem.
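For readers who work in other tools, the same rule-based approach might look like this in Python (the variable names and codes are hypothetical, echoing the forest-ranger example above rather than any real codebook):

```python
# Hedged sketch of rule-based cleaning: recode by evidence, not by id.
records = [
    {"id": 1234567, "occupation_code": 35, "sector": 2, "occupation_text": "fire fighter"},
    {"id": 2345678, "occupation_code": 35, "sector": 2, "occupation_text": "park ranger"},
    {"id": 3456789, "occupation_code": 12, "sector": 1, "occupation_text": "teacher"},
]

# Rule: cases coded as forest rangers (35) whose open-ended text mentions
# "fire fighter" get recoded to fire fighter (5) -- no id-by-id edits needed.
for rec in records:
    if (rec["occupation_code"] == 35 and rec["sector"] == 2
            and "fire fighter" in rec["occupation_text"].lower()):
        rec["occupation_code"] = 5
```

The rule documents itself: anyone reading the condition can see why a case was recoded, which is not true of a long list of id-specific corrections.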

So far I have not discussed data cleaning for other types of data. I’m currently working on a corpus of Twitter data, and I don’t see much of a difference in the cleaning process. The data types and programming statements I use are different, but the process is very close. It’s an interesting and challenging process that involves detective work, a better and growing understanding of the intricacies of the dataset, a growing set of programming skills, and a growing understanding of the natural language use in your dataset. The process mirrors the analysis to such a degree that I’m not really sure why it would be such a bad thing for analysts to be involved in data cleaning.
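To give one concrete flavor of what cleaning tweet text can involve, here is a minimal sketch (these particular steps are illustrative choices, not a description of any full pipeline):

```python
# Minimal sketch of tweet text normalization (illustrative steps only).
import re

def clean_tweet(text: str) -> str:
    text = re.sub(r"https?://\S+", "", text)   # strip URLs
    text = re.sub(r"\bRT\b:?\s*", "", text)    # strip retweet markers
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return text

cleaned = clean_tweet("RT @user: Great read! http://t.co/abc123")
```

As with survey cleaning, each rule is a hypothesis about the data that needs testing against real examples before it is trusted.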

I’d be interested to hear what my readers have to say about this. Is our notion of the value and challenge of data cleaning antiquated? Is data cleaning a burden that an analyst should bear? And why is there so little talk about data cleaning, when we could all stand to learn so much from each other in the way of data structuring code and more?

Professional Identity: Who am I? And who are you?

Last night I acted as a mentor at the annual Career Exploration Expo sponsored by my graduate program. Many of the students had questions about developing a professional identity. This makes sense, of course, because graduate school is an important time for discovering and developing a professional identity.

People enter our program (and many others) with a wide variety of backgrounds and interests. They choose from a variety of classes that fit their interests and goals. And then they try to map their experience onto job categories. But boxes are difficult to climb into and out of, and students soon discover that none of the boxes is a perfect fit.

I experienced this myself. I entered the program with an extensive and unquestioned background in survey research. Early in my college years (while I was studying and working in neuropsychology) I began to manage a clinical dataset in SPSS. Working with patients and patient files was very interesting, but to my surprise working with data using statistical software felt right to me much in the way that Ethiopian meals include injera and Japanese meals include rice (Ohnuki-Tierney, 2006 [1997]). I was actually teased by my friends about my love of data! This affinity served me well, and I enjoyed working with a variety of data sets while moving across fields and statistical programming languages.

But my graduate program blew my mind. I felt like I had spent my life underwater and then discovered the sky and continents. I discovered many new kinds of data and analytic strategies, all of which were challenging and rewarding. These discoveries inspired me to start this blog and have inspired me to attend a wide variety of events and read some very interesting work that I never would have discovered on my own. Hopefully followers of this blog have enjoyed this journey as much as I have!

As a recent graduate, I sometimes feel torn between worlds. I still work as a survey researcher, but I’m inspired by research methods that are beyond the scope of my regular work. Another recent graduate of our program who is involved in market research framed her strategy in a way that really resonated with me: “I give my customers what they want and something else, and they grow to appreciate the ‘something else.'” That sums up my current strategy. I do the survey management and analysis that is expected of me in a timely, high quality way. But I am also using my newly acquired knowledge to incorporate text analysis into our data cleaning process in order to streamline it, increasing both the speed and the quality of the process and making it better equipped to handle the data from future surveys. I do the traditional quantitative analyses, but I supplement them with analyses of the open-ended responses that use more flexible text analytic strategies. These analyses spark more quantitative analyses and make for much better (richer, more readable and more inspired) reports.

Our goal as professionals should be to find a professional identity that best capitalizes on our unique knowledge, skills and abilities. There is only one professional identity that does all of that, and it is the one you have already chosen and continue to choose every day. We are faced with countless choices about what classes to take, what to read, what to attend, what to become involved in, and what to prioritize, and we make countless assessments about each. Was it worthwhile? Did I enjoy it? Would I do it again? Each of these choices constitutes your own unique professional self, a self which you are continually manufacturing. You are composed of your past, your present, and your future, and your future will undoubtedly be a continuation of your past and present. The best career coach you have is inside of you.

Now, your professional identity is much more uniquely and narrowly focused than the generic titles and fields that you see in the professional marketplace. Keep in mind that each job listing that you see represents a set of needs that a particular organization has. Is this a set of needs that you are ready to fill? Is this a set of needs that you would like to fill? You are the only one who knows the answers to these questions.

Because it turns out that you are your best career coach, and you have been all along.

In praise of getting things wrong and working toward better

“An expert is a man who has made all the mistakes which can be made in a very narrow field” -Niels Bohr

I’ve been reading “In the Plex,” a book about the history of Google by Steven Levy. I highly recommend this book, because as I read it I am increasingly aware of the ways in which Google’s constant presence invisibly shapes our daily lives. Levy makes a point in the book of attributing some of Google’s constant evolution to its obsession with failure. In search terms, isolating failures is relatively easy: if people soon return to the search page, reframe their query, or continue down through lower-ranked results, their search was a relative failure. Failures are identified and isolated by Google and then obsessed over until the PageRank algorithm can be appropriately tweaked in a way that passes rigorous testing protocols.

In this way, Google is similar to an increasing number of failure-focused initiatives, including some of the engineering-based models that have been applied to healthcare and more. These voices are increasingly the source of innovations that are continually shaping and reshaping our future. But the rhetoric of failure and success from its evangelizers can be hard for us to wrap our heads around, as people who naturally fear, avoid and focus on failure in a negative way.

Over the weekend, while I was practicing Yoga, I told one of my kids my favorite part of the practice (note: not a good time for chatting). I love that Yoga is a process. One day you will be able to do something that you may or may not be able to do the next day, and vice versa. My practice involves quite a bit of balancing on one foot, and there are days when that balance feels effortless and days when that balance feels impossible. But the effortless days only come because I continue to practice despite the disappointments of my wobblier days. Yoga instructors sometimes talk about the power of intentions and working in ways that align with our intentions. One of my kids pointed out that the wobbly days, as I call them, are exactly the reason why she hates Yoga. She believes that she’s no good at it, and because of her assessment she will avoid it. You can probably guess that this conversation is far from over between us.

We see attitudes like these affecting people (including ourselves) every day. Some people theorize that the lower representation of women in STEM (Science, Technology, Engineering and Math) fields is due to a larger proportion of women than men who doubt their abilities or judge their abilities more harshly. We hear about graduate students who experience what is sometimes called the ‘imposter syndrome.’ I remember some students in my graduate classes who chose not to participate in class for fear they would sound stupid. I’ve heard of medical practitioners who were so worried that they would make another mistake that they were afraid to practice. As a writer, I know that the power of self-doubt can cause writer’s block, but I also know how much easier it is to edit or rewrite.

I would encourage all of you to embrace your failures, your mistakes, your shortcomings, your missteps and your errors and see them as part of a process and not an endpoint. These stumbling points are the key points of growth- the key moments for us to learn and to redirect our actions to better suit our intentions. To err is human, but to learn from our missteps is surely something greater.

Reflections and Notes from the Sentiment Analysis Symposium #SAS14

The Sentiment Analysis Symposium took place in NY this week in the beautiful offices of the New York Academy of Sciences. The Symposium was framed as a transition into a new era of sentiment analysis, an era of human analytics or humetrics.

The view from the New York Academy of Sciences is really stunning!


Two main points struck me during the event. One is that context is extremely important for developing high quality analytics, but the actual shape that “context” takes varies greatly. The second is a seeming disconnect between the product developers, who are eagerly developing new and better measures, and the customers, who want better usability, more customer support, more customized metrics that fit their preexisting analytic frameworks and a better understanding of why social media analysis is worth their time, effort and money.

Below is a summary of some of the key points. My detailed notes from each of the speakers can be viewed here. I attended both the more technical Technology and Innovation Session and the Symposium itself.

Context is in. But what is context?

The big takeaway from the Technology and Innovation session, which was then carried into the second day of the Sentiment Analysis Symposium, was that context is important. But context was defined in a number of different ways.

 

New measures are coming, and old measures are improving.

The innovative new strategies presented at the Symposium made for really amazing presentations. New measures include voice intonation, facial expressions via remote video connections, measures of galvanic skin response, self-tagged sentiment data from social media sharing sites, a variety of measures from people who have embraced the “quantified self” movement, metadata from cellphone connections (including location, etc.), behavioral patterning on the individual and group level, and quite a bit of network analysis. Some speakers showcased systems that involved a variety of linked data or highly visual analytic components. Each of these measures increases the accuracy of preexisting measures and complicates their implementation, bringing new sets of challenges to the industry.

Here is a networked representation of the emotion transition dynamics of 'Hopeful'

This software package is calculating emotional reactions to a YouTube video that is both funny and mean

Meanwhile, traditional text-based sentiment analyses are also improving. Both the quality of machine learning algorithms and the quality of rule-based systems are improving quickly. New strategies include looking at text data pragmatically (e.g. what are common linguistic patterns in specific goal-directed behavior strategies?), gaining domain-level specificity, adding steps for genre detection to increase accuracy, and looking across languages. New analytic strategies are integrated into algorithms, and complementary suites of algorithms are implemented as ensembles. Multilingual analysis is a particular challenge for ML techniques, but can be achieved with a high degree of accuracy using rule-based techniques. The attendees appeared to agree that rule-based systems are much more accurate than machine learning algorithms, but the time and expertise involved has caused them to fall out of vogue.
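To make the rule-based approach concrete, here is a toy sentiment scorer with a single pragmatic rule for negation. The lexicon, weights and the negation rule are invented for illustration; production rule-based systems rely on large curated lexicons and far richer rule sets.

```python
# Toy rule-based sentiment scorer. The lexicon and weights below are
# invented for illustration only; real systems use curated resources.
LEXICON = {"love": 2, "great": 2, "good": 1, "slow": -1, "bad": -2, "hate": -3}
NEGATORS = {"not", "never", "no"}

def score(text: str) -> int:
    tokens = text.lower().replace(".", " ").replace(",", " ").split()
    total = 0
    for i, tok in enumerate(tokens):
        weight = LEXICON.get(tok, 0)
        # One simple pragmatic rule: a negator flips the next lexicon hit.
        if weight and i > 0 and tokens[i - 1] in NEGATORS:
            weight = -weight
        total += weight
    return total

print(score("The service was not bad, I love it"))  # prints 4
```

Even this tiny sketch shows why domain specificity matters: every weight in the lexicon is a judgment call, and one missing rule (sarcasm, say) changes the score.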

 

“The industry as a whole needs to grow up”

I suspect that Chris Boudreaux of Accenture shocked the room when he said “the industry as a whole really needs to grow up.” Speaking off the cuff, without his slides after a mishap and adventure, Boudreaux gave the customer point of view on social media analytics. He said that social media analysis needs to be more reliable, accessible, actionable and dependable. Companies need to move past the startup phase to a new phase of accountability. Tools need to integrate into preexisting analytic structures and metrics, to be accessible to customers who are not experts, and to come better supported.

Boudreaux spoke of the need for social media companies to better understand their customers. Instead of being marketed to the wider base of potential customers, the tools seem to be developed and marketed solely to market researchers. This has led to more rapid adoption among the market research community and a general skepticism or ambivalence across other industries, which don’t yet see how these tools would benefit them.

The companies that truly value and want to expand their customer base will focus on the usability of their dashboards, an area ripe for a growing legion of usability experts and usability testing. These dashboards cannot restrict API access and understanding to expert data scientists. Companies will develop, market and support their dashboards through productive partnerships with their customers, generating measures that are specifically relevant to each customer, along with personalized dashboards that fit into preexisting metrics and are easy for customers to understand and act on in a very practical, personalized sense.

Some companies have already started to work with their customers in more productive ways. Crimson Hexagon, for example, employs people who specialize in using their dashboard. These employees work with customers to better understand and support their use of the platform and run studies of their own using the platform, becoming an internal element in the quality feedback loop.

 

Less Traditional fields for Social Media Analysis:

There was a wide spread of fields represented at the Symposium. I spoke with someone involved in text analysis for legal reasons, including jury analyses. I saw an NYPD name tag. Financial services were well represented. Publishing houses were present. Some health related organizations were present, including neuroscience specialists, medical practitioners interested in predicting early symptoms of diseases like Alzheimer’s, medical specialists interested in helping improve the lives of people with diseases like Autism (e.g. with facial emotion recognition devices), pharmaceutical companies interested in understanding medical literature on a massive scale as well as patient conversation about prescriptions and participation in medical trials. There were traditional market research firms, and many new startups with a wide variety of focuses and functions. There were also established technology companies (e.g. IBM and Dell) with innovation wings and many academic departments. I’m sure I’ve missed many of the entities present or following remotely.

The better research providers understand the potential breadth of applications of their research, the better they can serve the specific areas of interest to these communities.

 

Rethinking the Public Image of Sentiment Analysis:

There was some concern that “social” is beginning to have too much baggage to be an attractive label, causing people to think immediately of top platforms such as Facebook and Twitter and belying the true breadth of the industry. This prompted a movement toward other terms at the symposium, including human analytics, humetrics, and measures of human engagement.

 

Accuracy

Accuracy tops out at about 80%, because that’s the limit of inter-rater reliability in sentiment analysis. Understanding the more difficult data is an important challenge for social media analysts. It is important for analysts to be honest with customers and with each other about the areas where automated tagging fails. This particular area was a kind of elephant in the room: always present, but rarely mentioned.

Although an 80% accuracy rate is really fantastic compared to no measure at all, and it is an amazing accomplishment given the financial constraints that analysts encounter, it is not an accuracy rate that works across industries and sectors. It is important to consider the “fitness for use” of an analysis. For some industries, an error is not a big deal. If a company is able to respond to 80% of the tweets directed at them in real-time, they are doing quite well. But when real people or weightier consequences are involved, this kind of error rate is blatantly unacceptable. These are the areas where human involvement in the analysis is absolutely critical. Where, honestly speaking, are algorithms performing fantastically, and where are they falling short? In the areas where they fall short, human experts should be deployed, adding behavioral and linguistic insight to the analysis.
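The arithmetic behind “fitness for use” is easy to sketch. Assuming a fixed 80% per-item accuracy and independent errors (both simplifying assumptions, not a claim about any particular system), a few lines of Python show how fast errors accumulate as volume grows:

```python
# How many errors should we expect in a batch of n items,
# and how likely is at least one error, at fixed per-item accuracy?
# Assumes independent, identically distributed errors (a simplification).
accuracy = 0.80
for n in (10, 100, 10_000):
    expected_errors = n * (1 - accuracy)
    p_at_least_one = 1 - accuracy ** n
    print(n, round(expected_errors, 1), round(p_at_least_one, 4))
```

At just ten items there is already an ~89% chance of at least one mistake, which is perfectly tolerable for triaging tweets and intolerable when each item is a patient or an investment.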

One excellent example of fitness for use was the presentation by Capital Market Exchange, a company that operationalizes sentiment as expert opinion. They mine a variety of sources for expert opinions about investing and then format the commonalities in an actionable way, leading to a substantial improvement over market performance for their investors. By valuing the preexisting knowledge structures of their industry, they have gained a degree of market traction that pure sentiment analysts have not.

 

Targeting the weaknesses

It is important that the field look carefully at areas where algorithms do and do not work. The areas where they don’t work represent whole fields of study, many of which have legions of social media analysts at the ready. This includes less traditional areas of linguistics, such as Sociolinguistics, Conversation Analysis (e.g. looking at expected pair parts) and Discourse Analysis (e.g. understanding identity construction), as well as Ethnography (with fast-growing subfields, such as Netnography), Psychology and Behavioral Economics. It is time to think strategically and to understand the data from new perspectives. It is also time to more seriously evaluate and invest in neutral responses.

 

Summing Up

Social media data analysis, large scale text analysis and sentiment analysis have enjoyed a kind of honeymoon period. With so many new and fast growing data sources, a plethora of growing needs and applications, and a competitive and fast growing set of analytic strategies, the field has been growing at an astronomical rate. But this excitement has to be balanced out with the practical needs of the marketplace. It is time for growing technologies to better listen to and accommodate the needs of the customer base. This shift will help ensure the viability of the field and free developers up to embrace the spirit of intellectual creativity.

This is an exciting time for a fast growing field!

Thank you to Seth Grimes for organizing such a great event.

 

Free Range Research will cover the Sentiment Symposium in NYC next week #SAS14

Next week Free Range Research will be in NYC to cover the Sentiment Symposium and Innovation session, and I can’t tell you how excited I am about it!

The development of useful analytics hinges on constant innovation and experimentation, and binary positive/negative measures don’t come close to describing the full potential of social media data. This year’s symposium is an effort to confront the limitations of calcified measures of sentiment head on by introducing new measures and new perspectives.

As a programmer, a quantitative and qualitative analyst, a recent academic, and a fervent believer in the power of mixed methods and interdisciplinary research, I am eager to cover the Symposium as both an enthusiastic and a critical voice. The new directions that will be represented are exciting and interesting, and I expect to gain a better feel for many cutting-edge analytic practices. But the proprietary and competitive nature of the social media marketplace has led to countless overblown claims. I do not plan to simply be a conduit for these. My goal will be to share as much as possible of what I learn at the Symposium in a grounded and accessible way, as timely as possible, offering counterpoints and data-driven examples when possible, on both my blog and through my Twitter handle @FreeRangeRsrch.

I hope you’ll join me!

 

today in research & zen: “What is known as ‘realizing the mystery’ is nothing more than breaking through to grab an ordinary person’s life” Te-Shan

Medical Errors: detecting them, preventing them, and dealing with their aftermath

This is primarily a report on an event, but I’ve added links, stories and examples to my notes.

The event:

Bioethics lecture on Error: https://www.eventbrite.com/e/conversations-in-bioethics-tickets-10276951639

Brief description: “Join distinguished national experts John James, PhD, former chief toxicologist at NASA and founder of Patient Safety America, Brian Goldman, MD, emergency physician-author and host of the CBC’s White Coat, Black Art, Beth Daley Ullem, MBA, nationally-recognized advocate for patient safety and quality and SFS alum, for a lively discussion and Q&A moderated by Maggie Little, Director of the Kennedy Institute of Ethics.”

At the Beautiful Kennedy Institute, Georgetown University

The follow-up:

For those who are particularly interested in this topic, the Kennedy Institute has an upcoming Bioethics MOOC starting 4/15: http://kennedyinstitute.georgetown.edu/about/news/bioethics-mooc-spring-launch.cfm

Why Create this resource?

What follows is a long resource: an in-depth summary of the lecture I attended last night, complete with many links to other resources and a few stories and examples. Like the members of this panel, I have experienced a dramatic medical error. In 2012 my mother was on life support after experiencing a period of time with no oxygen to her brain. Her heart had stopped twice, and she was unresponsive. I am her only child, and I had essentially moved into the hospital with her in order to be her advocate. It was my decision whether or not to continue life support, and the main deciding factor was whether or not she was brain dead. She was given an EEG test, and it did not look good. There was a delay before I heard the results of the test, and I spent that delay researching her EEG patterns to try to understand what was going on. The next day the medical staff involved in her case sat me down and told me she was indeed brain dead. It wasn’t until my cousin had announced her passing on Facebook, I was saying my final goodbyes, and my aunt was on the phone with the funeral home that the doctors on the case realized they had miscommunicated. Another patient in another hospital was brain dead, but my mother was not officially brain dead. Her brain activity appeared to be seizure activity, and it wasn’t clear if there was anything else going on. The group apologized, and we were forced to reverse the story and try to explain to friends and family (and ourselves) that she was not actually dead, but she was still very close to it. There was a tide of “Go get ’em!” cries, which were difficult to deal with when we did indeed have to remove life support a few days later.

After this event, one of the physicians involved in the miscommunication focused my attention on a collaborative project. We began work on a grand rounds presentation for the hospital. We planned to talk about errors in general and, more specifically, what could be learned from this error. I did quite a bit of reading and research. We had some great discussions, and I started to attend a medical discourse group in my graduate linguistics department. At some point I will probably return to the notes from that collaboration and assemble a blog post about them.

It is because of that experience and that project that I assembled the resource below. I sincerely hope that you find it interesting and useful.

Please note that this is based on many pages of notes. Unfortunately my notes did not attribute points to individual panelists. I apologize for that omission.

Prevalence and Detection

An estimated 100,000 lives are lost each year to preventable medical error (according to the landmark 1999 Institute of Medicine report), but this data is old (1984, New York State) and focused on errors of commission. There are many other kinds of errors, including errors of omission, context, diagnosis and communication. Measuring preventable deaths is easier than measuring mistakes overall, but mistakes that do not directly lead to death cause plenty of heartache every day as well.

One more recent attempt to detect medical errors involved isolating common trigger words that accompany medical mistakes in medical files and then having the cases reviewed by medical professionals to see if the deaths were indeed preventable. By this method, the estimate was closer to 210,000 preventable deaths. This method was more comprehensive, but records don’t have the right parameters or standardization to make this process ideal. Some estimates are as high as 440,000 deaths per year.
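The screening step of a trigger-word method can be sketched in a few lines. The trigger list and records below are invented for illustration; in the actual studies, a curated trigger list only flags charts for review by medical professionals, who then judge whether a death was preventable.

```python
# Sketch of trigger-word screening of medical records.
# Triggers and records are hypothetical; the automated pass only narrows
# the pool of charts for expert human review, it does not judge preventability.
TRIGGERS = {"naloxone", "transfusion", "readmission", "fall", "intubation"}

records = [
    (101, "patient given naloxone after oversedation"),
    (102, "routine discharge, no complications"),
    (103, "unplanned readmission within 30 days"),
]

flagged = [rid for rid, note in records
           if TRIGGERS & set(note.lower().split())]
print(flagged)  # prints [101, 103]: records sent on for clinician review
```

The weaknesses noted above show up immediately in a sketch like this: free-text notes are not standardized, so a misspelled or paraphrased trigger is silently missed.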

Regardless of the exact numbers, for physicians there is a near-100% likelihood of making a mistake at some point. This fact alone should shift the paradigm from avoiding errors altogether to openly anticipating and working with errors as they happen.

Aftermath

After a medical error occurs, heartache abounds. But contrary to social conventions outside of the medical establishment, contact is often strictly controlled and regulated after the incident, and the physician is rarely able to say “I’m sorry.” This can cause a lack of closure for both the patients and the doctors. The aftermath of one of these errors forms a second layer of trauma for all of those involved.

The first target for any kind of error is often the individual who made the mistake, not the system that enabled it. The system quickly closes around this individual. The hospital risk administration steps in. Privacy walls are erected, and it becomes very difficult to take responsibility for one’s errors. System and culture clash in a perfect storm, resulting in ill-advised words and actions on the part of those involved. At such a sensitive time, the words of care providers are often burned into the minds of the deceased patient’s advocates and family members. Blame is often tossed around indiscriminately. The survivors are often left feeling confused. One of the panelists remembers her physician counseling her with “I really don’t know why God needed your baby more than you did.”

The medical providers at this point are isolated from their patients and often prohibited from discussing these incidents with each other. At such a vulnerable moment, they are left to deal with it alone, taking each incident as a private failure when mistakes are a universal human condition. If other providers hear about the incident, they often exacerbate the problem by avoiding eye contact, demonstrating vicarious shame and reinforcing the sense that error is a repudiation of all a doctor is supposed to be.

System level Problems

The medical system is large and complicated enough to really enable errors. There are so many medical professionals, patients, laypeople and touchpoints, and the body itself is quite a complicated system- some of which is better understood and some of which is still largely undocumented territory. The medical system is evolving fast from the mom and pop doctors of the past to the large complexes of today. The modern medical system has its hand in businesses that no one would have imagined before. Some hospitals boast dental facilities, nursing homes, outpatient clinics, and even foster care facilities. The changing rules for insurance payments and the increasing role of legal actors also have a significant influence on the system.

In order for hospitals to make money, many end up adjusting the patient care ratios. Some stretch these ratios to the breaking point, putting medical staff in a position where they can barely keep up. The pressure for productivity is much higher now than it was in the recent past. Many facilities are over capacity, and space is at a premium. This can put medical staff in an awkward position where there are constant workarounds and makeshift solutions. These kinds of problems can lead to errors of context. The same patient may be treated differently in the ambulatory care area than in the rapid assessment area of the same wing. In the words of one panelist, “geography is destiny in the E.R.” Movement in space within a medical facility is both physical and cognitive.

Scheduling is also a huge issue in medical facilities. Long stretches of work without sleep are a well-known precursor to many medical errors.

Technology

Technology is integral to the modern medical system and has saved many lives. But technology training and interface design are extremely important. One panelist reported that a medical professional confessed to him, years after his son’s preventable death, that the MRI machine was new and no one on hand knew how to use it properly. Others have reported on the influence of signal fatigue: amidst a constant stream of signals, it is very hard to ferret out the most important among them.

Technology was a real point of frustration for me when I had my first child. I was induced in the evening and felt increasingly strong contractions all night. When the nurses came to check on me, I reported that I was in labor, but the pattern on the monitor was not consistent with what they would call labor. Once I started to push I called them back and requested an exam, and fortunately, although my doctor and the doctor on call were not available, they were qualified to catch the baby.

Medical culture

One of the panelists told the story of a physician who began his shift by calling together his team, warning them that he did not get much of a night’s sleep the night before, and asking them to watch his back a bit more closely than usual. This runs starkly contrary to typical medical enculturation. Medical culture makes it harder to admit mistakes or to be human. One panelist commented, “We’re very defensive about our mistakes.” This is emblematic of a culture that can’t handle its own humanity. This repulsion by error is compounded by a system that doesn’t comment, but rather expects good performance. The “no news is good news” ethic means that a physician can go his or her entire career without ever hearing any feedback, and that silence is taken as a good sign.

In medicine, the smartest person in the room quickly becomes the person in charge. One of the panelists, Brian Goldman, “didn’t want to be a high-maintenance student” as a resident by asking too many questions or requesting help too often. That attitude proved fatal for one of his patients. Errors are a reminder of human fallibility, and medical professionals are supposed to be infallible. Brian talked more about this in a TED talk. In it, he spoke of batting averages. We assume that error is a natural part of other jobs, but what is an acceptable batting average for a surgeon? A mistake can mean that one was lazy or incompetent or had a lapse. Which one does the physician want to admit to? None! Instead, after one mistake happens, they often live in terror that another will soon follow. One panelist said the words he most fears as a medical professional are “Do you remember?”

Instead of the culture of shame and blame, we could benefit from being scientific about error: exhibiting genuine curiosity about errors, measuring them, and developing and testing treatments for them. One panel member mentioned a surgeon who developed a kind of flight data recorder for surgery: http://www.icee-con.org/papers/2008/pdf/O-100.pdf . Apparently this surgeon has been dubbed “the most dangerous man in surgery.”

Isolation and selective training

People are trained in the context of the settings where they have worked. Different settings see different kinds of challenges. Shouldn’t there be a better system for sharing challenges and solutions across institutions?

Handwriting

It is pretty incredible that such a high-stakes field rests on human handwriting. This is made worse by the lack of value placed on making handwriting legible and by the decreasing ability of a technologically savvy population to decipher human handwriting. How many of you can read cursive?

Science or Art?

One interesting aspect of medicine is the way it is a field composed of scientists who view themselves as artists. This is evident in the total lack of standardization in medical care. You will have a different experience, even with the same condition, across locations and providers. Even within a single hospital, individual doctors act as subcontractors, providing individualized service as only they can, despite the common environment. Sometimes there are standards or guidelines set for specific areas of medicine with a goal of instituting consistency. But the adoption of these standards, and attitudes toward them, are far from universal. The standards take shape differently across locations and providers.

The panel members mentioned the success of VA hospitals in this area. They are better at standardization. Vertically integrated healthcare can be much more progressive.

Areas for improvement

So what kind of changes would improve the system? Some prominent authors liken error models in medicine to those in the aviation industry. This is tricky, because medicine is far more complicated than aviation, although both are high-stakes fields that require inhuman levels of perfection from human actors. But even if the systems are different, they can still learn from each other.

Atul Gawande is the well-known author of The Checklist Manifesto. He has been advocating for many of the checklists and safety features that are standard in the aviation industry to be applied to medicine. He also wrote a piece about what medicine can learn from The Cheesecake Factory.

Some suggested areas for improvement include instituting redundancies, collapsing hierarchies and moving toward patient-centered care.

One panel member was involved with error prevention at more of a business level. She mentioned the power of adding redundancies. Adding redundancies should be common practice, and is common practice in other high-stakes fields. Redundancies should be worked into routines and checks, although models of modern efficiency seem to be moving away from them. She also mentioned the powerful potential of dashboards and the importance of comparative information. One great example of the power of comparative information is “Solutions for Patient Safety” http://www.solutionsforpatientsafety.org/ . This is a group of 78 pediatric hospitals that share a common dashboard. Using the dashboard, the hospitals can see how they stand in terms of infections and other errors compared to the rest of the network. It’s a teaching model: the best teach the rest about the measures they’re using to combat each problem. The panelist mentioned that we buy healthcare products without comparative information, but information on dashboards can really increase accountability.

Collapsing hierarchies would make it more culturally acceptable to report medical errors. This could also be augmented through multidisciplinary peer reviews, involving everyone from providers across medical specialties and training to janitors and other people present at the time of care.

One of the panelists wrote a patient bill of rights. An audience member commented on the need for patients to feel more powerful and have more power in medical situations. He noted that the playing field between doctor and patient is inherently unequal. As soon as you remove your clothes and put on the patient smock you begin to feel powerless. He noted that some medical providers will take advantage of that vulnerability. The foundation of patient centered care is informed consent. If you don’t understand your options, you cannot make an informed choice.

One specific example of an area where patients are unable to make informed decisions is off-label prescriptions. Drugs are often prescribed off-label, meaning that the patient is not part of the population base on which the drug was tested. This was the case for me when my first child was born and I was induced with Cytotec. When she was born, a healthy 8 lb 3 oz baby, she aspirated meconium and ended up in the NICU while I was treated for hemorrhage. I knew nothing of the drug or the potential consequences. In fact, I had chosen an unmedicated childbirth and eschewed interventions altogether.

Another example of an area where patients can’t always make informed decisions is that of cost. There has been quite a bit of buzz lately about the ridiculous hospital bills patients receive upon discharge. Having seen some of those bills, I can’t tell you how paranoid I am about any supplies used on myself or my kids in the E.R. A close friend of mine recently had an incident where an inexpensive scheduled dentist appointment turned into over $2000 in charges, due immediately. That incident led to an extensive series of phone calls between myself and the dental office, debating consent.

An audience member spoke about the importance of patient advocates. Apparently there is a growing business of professional patient advocates. I think that this is wonderful, because historically the only qualification necessary for a patient advocate was that they not be the patient. I’ve had the experience of reading transcripts of doctor-patient visits that included advocates. Certainly not all advocates are built alike! This role is more deeply explored in the book “High Performance Healthcare.”

Opportunities for Linguists

There are two main applications for linguistics that are most evident in this discussion. One is the potential for computational linguists and natural language processing experts to mine the textual data in electronic health records as those records become increasingly available. The other is the opportunity for discourse analysts to conduct research on the actual communication between everyone involved. Discourse analysts can both develop and institute more structured protocols, such as double verification before certain medications and procedures, and raise awareness of instances where less-than-optimal communication styles can lead to mix-ups or other mistakes. Discourse analysts who specialize in apologies could be particularly effective advisors in training medical professionals to talk with patients and their advocates and families following medical errors. This is a strong interest of mine, and I’m lucky enough to attend regular medical discourse discussion groups with the head of my graduate department, Heidi Hamilton. Her work is a real treasure trove of medical discourse, well worth investigating further.

On a personal note, it is also very healing for victims and survivors to build narratives around these incidents that help to give them a wider context and meaning. I wrote about that process here: https://freerangeresearch.com/2012/05/22/ot-on-loss-and-grief-and-the-power-of-storytelling/

You may notice that I decided at that point not to give the medical error a place in my mom’s story. That was an important decision for me that helped me to heal.

Moving on

The three panelists had all lost people due to medical errors. I’ve also been the victim of medical errors. We were able to find some healing in the process of going deeper into the errors and the medical system that enabled them. You have also probably suffered in some way as the result of a medical error. It is also important to note that all of us have also had our lives made better by medicine at some point, and we probably also all know people whose lives were saved by medicine. It is an imperfect system, but it is a system with a lot of strengths.