Total Survey Error: nanny to some, wise elder to some, strange parental friend to others

Total Survey Error and I are long-time acquaintances, just getting to know each other better. Looking at TSE is, for me, like looking at my work in survey research through a distorted mirror into an alternate universe. This week, I’ve spent some time closely reading Groves’ Past, Present and Future of Total Survey Error, and it provided some historical context for the framework, as well as an experienced account of its strengths and weaknesses.

Errors are an important area of study across many fields. Historically, models of error assumed that people didn’t really make errors very often. Those attitudes are alive and well in many fields and workplaces today: instead of being carefully considered, errors are often dismissed as indicators of incompetence. However, some workplaces are changing the way they approach errors. I did some collaborative research on medical errors in 2012 and was introduced to the term HRO, or High-Reliability Organization. This is an error-focused model of management that assumes that errors will be made and that not all errors can be anticipated. Therefore, every error should be embraced as a learning opportunity to build a better organizational framework.

From time to time, various members of our working group have been driven to create checklists for particular aspects of our work. In my experience, the checklists are very helpful for work that we do infrequently and virtually useless for work that we do daily. Writing a checklist for your daily work is a bit like writing instructions on how you brush your teeth and expecting to keep those instructions updated whenever you make a change. Undoubtedly, you’ll reread the instructions and wonder when you switched from a vertical to a circular motion for a given tooth. And yet there are so many important elements to our work, and so many areas where people could make less-than-ideal decisions (small or large). From this need rose Deming, with the first survey quality checklist. After Deming, a few other models arose. Eventually, TSE became the cumulative, foundational working framework for the field of survey research.

In my last blog, I spoke about the strangeness of coming across a foundational framework after working in the field without one. The framework is a conceptually important one, separating out sources of errors in ways that make shortcomings and strengths apparent and clarifying what is more or less known about a project.

But in practice, this model has not become the applied working model that its founders and biggest proponents expected it to be. This is for two reasons (that I’ll focus on), one of which Groves discussed in some detail in this paper and one of which he barely touched on (but which likely drove him out of the field).

1. The framework has mathematical properties, and this has led to its more intensive use on aspects of the survey process that are traditionally quantitative. TSE research in areas of sampling, coverage, response and aspects of analysis is quite common, but TSE research in other areas is much less common. In fact, many of the less quantifiable parts of the survey process are almost dismissed in favor of the more quantifiable parts. A survey with a particularly low TSE value could have huge underlying problems or be of minimal use once complete.
2. The framework doesn’t explicitly consider the human factors that govern research behind the scenes. Groves mentioned that the end users of the data are not deeply considered in the model, but neither are the other financial and personal (and personafinancial) constraints that govern much decision making. Ideally, the end goal of research is high-quality research that yields a useful and relevant result at the lowest possible cost. In practice, however, the goal is both to keep costs low and to satisfy a system of interrelated (and often conflicting) personal or professional (personaprofessional?) interests. If the most influential of these interests are not particularly interested in (or appreciative of) the model, practitioners are highly unlikely to take the time to apply it.

Survey research requires very close attention to detail in order to minimize errors. It requires an intimate working knowledge of math and of computer programming. It also benefits from a knowledge of human behavior and the research environment. If I were to recommend any changes to the TSE model, I would recommend a bit more task-based detail, to incorporate more of the highly valued working knowledge that is often inherent and unspoken in the training of new researchers. I would also recommend more of an HRO orientation toward error, anticipating and embracing unexpected errors as a source of additions to the model. And I would recommend some deeper incorporation of the personal and financial constraints and the roles they play (clearly an easier change to introduce than to flesh out in any great detail!). Finally, I would recommend a shift of focus, away from the quantitative modeling aspects and toward the overall applicability and importance of a detailed, applied working model.

I’ve suggested before that survey research does not have a strong enough public face for the general public to understand or deeply value our work. A model that is better embraced by the field could form the basis for a public face, but the model would have to appeal to practitioners on a practical level. The question is: how do you get members of a well-established field, who have long been working within it and gaining expertise, to accept a framework that grew into a foundational piece independent of their work?

Repeating language: what do we repeat, and what does it signal?

Yesterday I attended a talk by Jon Kleinberg entitled “Status, Power & Incentives in Social Media” in Honor of the UMD Human-Computer Interaction Lab’s 30th Anniversary.


This talk was dense and full of methods that are unfamiliar to me. He first discussed logical representations of human relationships, including orientations of sentiment and status, and then he ventured into discursive evidence of these relationships. Finally, he introduced formulas for influence in social media and talked about ways to manipulate the formulas by incentivizing desired behavior and disincentivizing less desired behavior.


In Linguistics, we talk a lot about linguistic accommodation. In any communicative event, it is normal for participants’ speech patterns to converge in some ways. This can be through repetition of words or grammatical structures. Kleinberg presented research about the social meaning of linguistic accommodation, showing that participants with less power tend to accommodate participants with more power more than participants with more power accommodate participants with less power. This idea of quantifying social influence is a very powerful notion in online research, where social influence is a more practical and useful research goal than general representativeness.
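One way to make this kind of accommodation concrete is sketched below. This is a hypothetical, heavily simplified illustration, not the measure from the research Kleinberg presented: the marker class, data, and function names are all invented. The idea is to take a single marker class and compare how often a reply uses the marker when the preceding utterance did against the replier’s overall baseline rate.

```python
# A hypothetical, much-simplified sketch of a coordination measure.
# For a marker class m (here, articles), compare B's rate of using m in
# replies to A-utterances that contained m against B's overall baseline
# rate of using m. All names and data are invented for illustration.

ARTICLES = {"a", "an", "the"}  # one illustrative marker class

def uses_marker(utterance, markers):
    return any(word in markers for word in utterance.lower().split())

def coordination(exchanges, markers):
    """exchanges: list of (a_utterance, b_reply) pairs."""
    replies = [b for _, b in exchanges]
    triggered = [b for a, b in exchanges if uses_marker(a, markers)]
    p_given = sum(uses_marker(b, markers) for b in triggered) / len(triggered)
    p_base = sum(uses_marker(b, markers) for b in replies) / len(replies)
    return p_given - p_base  # positive: B accommodates A on this marker

exchanges = [
    ("the cat sat", "yes the cat did"),
    ("run fast", "ok"),
    ("the dog barked", "the dog is loud"),
    ("go now", "sure thing"),
]
print(coordination(exchanges, ARTICLES))  # 0.5 here: strong accommodation
```

Comparing this score across speaker pairs, rather than reading it in isolation, is what makes the asymmetry between more- and less-powerful participants visible.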


I wonder what strategies we use, consciously and unconsciously, when we accommodate other speakers. I wonder whether different forms of repetition have different underlying social meanings.


At the end of the talk, there was some discussion about both the constitution of iconic speech (unmarked words assembled in marked ways) and the meaning of norm flouting.


These are very promising avenues for online text research, and it is exciting to see them play out.

The Bones of Solid Research?

What are the elements that make research “research” and not just “observation?” Where are the bones of the beast, and do all strategies share the same skeleton?

Last Thursday, in my Ethnography of Communication class, we spent the first half hour of class time taking field notes in the library coffee shop. Two parts of the experience struck me the hardest.

1.) I was exhausted. Class came at the end of a long, full work day, toward the end of a week that was full of back to school nights, work, homework and board meetings. I began my observation by ordering a (badly needed) coffee. My goal as I ordered was to see how few words I had to utter in order to complete the transaction. (In my defense, I am usually relatively talkative and friendly…) The experience of observing and speaking as little as possible reminded me of one of the coolest things I’d come across in my degree study: Charlotte Linde, SocioRocketScientist at NASA

2.) Charlotte Linde, SocioRocketScientist at NASA. Dr Linde had come to speak with the GU Linguistics department early in my tenure as a grad student. She mentioned that her thesis had been about the geography of communication, specifically: how did the layout of an (her?) apartment building help shape communication within it?

This idea had struck me, and stayed with me, but it didn’t really make sense until I began to study Ethnography of Communication. In the coffee shop, I structured my field notes like a map and investigated the space in terms of zones of activity. Then I investigated expectations and conventions of communication in each zone. As a follow-up to this activity, I’ll either return to the same shop or head to another coffee shop to do some contrastive mapping.

The process of Ethnography embodies the dynamic between quantitative and qualitative methods for me. When I read ethnographic research, I really find myself obsessing over ‘what makes this research?’ and ‘how is each statement justified?’ Survey methodology, which I am still doing every day at work, is so deeply structured that less structured research is, by contrast, a bit bewildering or shocking. Reading about qualitative methodology makes it seem so much more dependable and structured than reading ethnographic research papers does.

Much of the process of learning ethnography is learning yourself: your priorities, your organization, … learning why you notice what you do and evaluate it the way you do… Conversely, much of the process of reading ethnographic research seems to involve evaluation or skepticism of the researcher, the researcher’s perspective and the researcher’s interpretation. As a reader, the places where the researcher’s perspective varies from mine are clear and easy to see, even as my own perspective is invisible to me.

All of this leads me back to the big questions I’m grappling with. Is this structured observational method the basis for all research? And how much structure does observation need to have in order to qualify as research?

I’d be interested to hear what you think of these issues!

Unlocking patterns in language

In linguistics study, we quickly learn that all language is patterned. Although the actual words we produce vary widely, the process of production does not. The process of constructing baby talk was found to be consistent across kids from 15 different languages. When any two people who do not speak overlapping languages come together and try to speak, the process is the same. When we look at any large body of data, we quickly learn that just about any linguistic phenomenon is subject to statistical likelihood. Grammatical patterns govern the basic structure of what we see in the corpus. Variations in language use may tweak these patterns, but each variation is a patterned tweak with its own set of statistical likelihoods. Variations that people are quick to call bastardizations are actually patterned departures from what those people consider to be “standard” English. Understanding “differences, not deficits” is a crucially important part of understanding and processing language, because any variation, even texting shorthand, “broken English,” or slang, can be better understood and used once its underlying structure is recognized.

The patterns in language extend beyond grammar to word usage. The most frequent words in a corpus are function words such as “a” and “the,” and the most frequent collocations are combinations like “and the” or “and then it.” These patterns govern the findings of a lot of investigations into textual data. A certain phrase may show up as a frequent member of a dataset simply because it is a common or lexicalized expression, and another combination may not appear because it is rarer. This could be particularly problematic, because what is rare is often more noticeable or important.
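As a toy illustration of this point (the text here is invented), counting words and adjacent-word pairs in even a tiny passage shows function words and their combinations crowding the top of the frequency lists:

```python
from collections import Counter

# Raw frequency counts are dominated by function words and lexicalized
# combinations, so the "top" of a frequency list says little about content.
text = ("and then it rained and the river rose and then it flooded "
        "and the town and the farms were lost")
tokens = text.split()

word_counts = Counter(tokens)                     # single-word frequencies
bigram_counts = Counter(zip(tokens, tokens[1:]))  # adjacent-pair frequencies

print(word_counts.most_common(3))   # "and" and "the" dominate
print(bigram_counts.most_common(2)) # "and the", "and then" lead
```

The content words that actually distinguish this passage (“river,” “flooded,” “farms”) each occur once and sit at the bottom of the list, which is exactly the problem described above.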

Here are some good starter questions to ask to better understand your textual data:

1) Where did this data come from? What was its original purpose and context?

2) What did the speakers intend to accomplish by producing this text?

3) What type of data or text, or genre, does this represent?

4) How was this data collected? Where is it from?

5) Who are the speakers? What is their relationship to each other?

6) Is there any cohesion to the text?

7) What language is the text in? What is the linguistic background of the speakers?

8) Who is the intended audience?

9) What kind of repetition do you see in the text? What about repetition within the context of a conversation? What about repetition of outside elements?

10) What stands out as relatively unusual or rare within the body of text?

11) What is relatively common within the dataset?

12) What register is the text written in? Casual? Academic? Formal? Informal?

13) Pronoun use. Always look at pronoun use. It’s almost always enlightening.

These types of questions will take you much further into your dataset than the knee-jerk question “What is this text about?”
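As one example of putting these questions to work, question 13 can be approximated in a few lines of code. This is only a sketch with an invented transcript and an invented pronoun list; real pronoun analysis would be far more careful about ambiguity and reference:

```python
import re
from collections import Counter

# Tally pronoun use per speaker in a toy transcript (question 13 above).
PRONOUNS = {"i", "me", "my", "we", "us", "our",
            "you", "your", "they", "them", "their"}

transcript = [
    ("A", "I think we should finish our report"),
    ("B", "You always say that about your reports"),
    ("A", "We can do it if you help us"),
]

per_speaker = Counter()
for speaker, line in transcript:
    for word in re.findall(r"[a-z']+", line.lower()):
        if word in PRONOUNS:
            per_speaker[speaker] += 1

print(per_speaker)  # who leans on pronouns, and how heavily
```

Even a crude tally like this can prompt the more interesting qualitative questions: who says “we,” who says “you,” and what does that signal about the relationship?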

Now, go forth and research! …And be sure to report back!

A fleet of research possibilities and a scattering of updates

Tomorrow is my first day of my 3rd year as a Masters student in the MLC program at Georgetown University. I’m taking the slowwww route through higher ed, as happens when you work full-time, have two kids and are an only child who lost her mother along the way.

This semester I will [finally] take the class I’ve been borrowing pieces from for the past two years: Ethnography of Communication. I’ve decided to use this opportunity to do an ethnography of DC taxi drivers. My husband is a DC taxi driver, so in essence this research will build on years of daily conversations. I find that the representation of DC taxi drivers in the news never quite approximates what I’ve seen, and that is my real motivation for the project. I have a couple of enthusiastic collaborators: my husband and a friend whose husband is also a DC taxi driver and who has been a vocal advocate for DC taxi drivers.

I am really eager to get back into linguistics study. I’ve been learning powerful sociolinguistic methods to recognize and interpret patterning in discourse, but it is a challenge not to fall into the age-old habit of studying aboutness or topicality, which is much less patterned and powerful.

I have been fortunate enough to combine some of my new qualitative methods with my more quantitative work on some of the reports I’ve completed over the summer. I’m using the open ended responses that we usually don’t fully exploit in order to tell more detailed stories in our survey reports. But balancing quantitative and qualitative methods is very difficult, as I’ve mentioned before, because the power punch of good narrative blows away the quiet power of high quality, representative statistical analysis. Reporting qualitative findings has to be done very carefully.

Over the summer I had the wonderful opportunity to apply my sociolinguistics education to a medical setting. Last May, while my mom was on life support, we were touched by a medical error when my mom was mistakenly declared brain dead. Because she was an organ donor, her life support was not withdrawn before the error was recognized. But the fallout from the error was tremendous. The problem arose because two of her doctors were consulting by phone about their patients, and each thought they were talking about a different patient. In collaboration with one of the doctors involved, I’ve learned a great amount about medical errors and looked at the role of linguistics in bringing awareness to potential errors of miscommunication in conversation. This project was different from other research I’ve done, because it did not involve conducting new research, but rather rereading foundational research and focusing on conversational structure.

In this case, my recommendations were for an awareness of existing conversational structures, rather than an imposition of a new order or procedure. My recommendations, developed in conjunction with Dr Heidi Hamilton, the chair of our linguistics department and a medical communication expert, were to be aware of conversational transition points, to focus on the patient identifiers used, and to avoid reaching back or ahead to other patients while discussing a single patient. Each patient discussion must be treated as a separate conversation. Conversation is one of the largest sources of medical error, and approaching it carefully is critically important. My mom’s doctor and I hope to make a Grand Rounds presentation out of this effort.

On a personal level, this summer has been one of great transitions. I like to joke that the next time my mom passes away I’ll be better equipped to handle it all. I have learned quite a bit about real estate and estate law and estate sales and more. And about grieving, of course. Having just cleaned through my mom’s house last week, I am beginning this new school year more physically, mentally and emotionally tired than I have ever felt. A close friend of mine has recently finished an extended series of chemo and radiation, and she told me that she is reveling in her strength as it returns. I am also reveling in my own strength, as it returns. I may not be ready for the semester or the new school year, but I am ready for the first day of class tomorrow. And I’m hopeful. For the semester, for the research ahead, for my family, and for myself. I’m grateful for the guidance of my newest guardian angel and the inspiration of great research.

A snapshot from a lunchtime walk

In the words of Sri Aurobindo, “By your stumbling the world is perfected.”

Could our attitude toward marketing determine our field’s future?

In our office, we call it the “cocktail party question”: What do you do for a living? For those of us who work in the area of survey research, this can be a particularly difficult question to answer. Not only do people rarely know much about our work, but they rarely have a great deal of interest in it. I like to think of myself as a survey methodologist, but it is easier in social situations to discuss the focus of my research than my passion for methodology. I work at the American Institute of Physics, so I describe my work as “studying people who study physics.” Usually this description is greeted with an uncomfortable laugh, and the conversation progresses elsewhere. Score!

But the wider lack of understanding of survey research can have larger implications than simply awkward social situations. It can also cause tension with clients who don’t understand our work, our process, or where and how we add expertise to the process. Toward this end, I once wrote a guide for working with clients that separated out each stage in the survey process and detailed what expertise the researcher brings to the stage and what expertise we need from the client. I hoped that it would be a way of both separating and affirming the roles of client and researcher and advertising our firm and our field. I have not yet had the opportunity to use this piece, because of the nature of my current projects, but I’d be happy to share it with anyone who is interested in using or adapting it.

I think about that piece often as I see more talk about big data and social media analysis. Data seems to be everywhere and free, and I wonder what effect this buzz will have on a body of research consumers who might not have respected the role of the researchers from the get-go. We worried when Survey Monkey and other automated survey tools came along, but the current bevy of tools and attitudes could have an exponentially larger impact on our practice.

Survey researchers often thumb their noses at advertising, despite the heavy methodological overlap. Oftentimes there is a knee-jerk reaction against marketing speak. Not only do survey methodologists often thumb their/our noses at the goals and importance of advertising, but they/we often thumb their/our noses at what appears to be evidence of less rigorous methodology. This has led us to a ridiculous point where data and analyses have evolved quickly with the demand and heavy use of advertising and market researchers, and evolved strikingly little in more traditional survey areas, like polling and educational research. Much of the rhetoric about social media analysis, text analysis, social network analysis and big data is directed at the marketing and advertising crowd. Translating it to a wider research context and communicating it to a field that is often not eager to adapt can be difficult. And yet the exchange of ideas between the sister fields has never been more crucial to our mutual survival and relevance.

One of the goals of this blog has been to approach the changing landscape of research from a methodologically sound, interdisciplinary perspective that doesn’t suffer from artificial walls and divisions. As I’ve worked on the blog, my own research methodology has evolved considerably. I’m relying more heavily on mixed methods and trying to use and integrate different tools into my work. I’ve learned quite a bit from researchers with a wide variety of backgrounds, and I often feel like I’m belted into a car with the windows down, hurtling down the highways of progress at top speed and trying to control the airflow. And then I often glimpse other survey researchers out the window, driving slowly, sensibly along the access road alongside the highway. I wonder if my mentors feel the change of landscape as viscerally as I do. I wonder how to carry forward the anchors and quality controls that led to such high quality research in the survey realm. I wonder about the future. And the present. About who’s driving, and who in what car is talking to whom? Using what GPS?

Mostly I wonder: could our negative attitude toward advertising and market research drive us right into obscurity? Are we too quick to misjudge the magnitude of the changes afoot?


This post is meant to be provocative, and I hope it inspires some good conversation.

Amazing Presentation on Infographics

I had the privilege this week of attending a webinar by Matthew Erickson of the New York Times about innovative graphic presentations of data. There were some truly amazing interactive displays included in this presentation, and the presenter had a lot of very insightful suggestions for rethinking data presentation:


He spoke about the role of good interactive presentation in situating data, providing context, developing layers, and telling a story. A lot of times, the distribution of the data, and its relationship with data from other sources, is its most interesting layer. In an innovative presentation of data, we must balance the expectations of the audience, who become interactants with the data and must be able to manipulate it easily, with a complementary layer of expertise or context.

For example, data about Mariano Rivera’s pitching style could best be understood by the placement of the ball at the hitter’s decision-making point. In a graphic about Rivera’s success, the reporters were able to show how radically different pitches were virtually indistinguishable at the crucial decision-making point for the hitter.

He referred to infographics as the “gamification of news.”


To connect his presentation to this ongoing discussion of text analytics, check out the way he displayed word frequencies:

Interestingly, it is still problematic, but it is super cool looking…


And, speaking of infographics, check out this awesome one that Pew debuted today:

Rethinking the Future of Survey Methodology; Finding a Place for Linguistics

Where is the future of survey research?

The technical context in which survey methodology lives is evolving quickly. Where will surveys fit into this context in the future?

In the past, surveys were a valuable and unique source of data. As society became more focused on customization and understanding certain populations, surveys became an invaluable tool for data collection. But at this point, we are inundated with data. The amount of content generated every minute on the net is staggering. In an environment where content is so omnipresent, what role can surveys play? How can we justify our particular brand of data?

Survey methodology has become structured around a set of ethics and practices, including representativeness and respect for the respondents. Without that structure, the most vocal win out, and the resulting picture is not representative.

I recently had the pleasure of reading a bit of Don Dillman’s rewrite of ‘The Tailored Design Method,’ which is the defining classic reference in survey research. The book includes research-based strategies for designing and targeting a survey population with the highest possible degree of success. It is often referred to as a bible of sorts to survey practitioners. This time around, I began to think about why the suggestions in the book are so successful. I believe that the success of Dillman’s suggestions has to do with his title: it is a tailored method, designed around the respondents. And, indeed, the book borrows some principles of respondent- or user-centered design.

So where does text analysis fit into that? In a context where content is increasingly targeted, and people expect content to be increasingly targeted, surveys as well need to be targeted and tailored for respondents. In an era where the cost-benefit equation of survey response is increasingly weighted against us, where potential respondents are inundated with web content and web surveys, any misstep can be enough to drive respondents away and even to cause a potential viral backlash. It has never been more important to get it right.

And yet we are pressured not to get it right but to get it fast. So the traditional methods of focus groups and cognitive interviews are increasingly too costly and too time-consuming to use. But their role is an important one. They act to add a layer of quality control to the surveys we produce. They keep us from thinking that because we are the survey experts we are also the respondent experts and the response experts.

A good example of this is Schaeffer’s key idea of native terms. I have a brief story to illustrate this. Our building daycare is about to close, and I have been involved in many discussions about the impact of its closure as well as the planning and musing about the upcoming final farewell reunion celebration. The other day I ran into one of the kids’ grandparents, someone with whom I have frequently discussed the daycare. She asked me if I was planning to go to Audrey’s party. I told her I didn’t know anything about it and wasn’t planning to go. I said this because I associate the terms she used with retirement celebrations. I assumed that she was talking about a party specifically in honor of the director, not the reunion for all of the kids.

It’s easy as a survey developer to assume that if you ask something that is near enough to what you want to know, the respondent can extrapolate the rest. But that belies the actual way in which we communicate. When it comes to communication, we are inundated with verbal information, and we only really consciously take in the gloss of it. That’s what linguistics is all about: unpacking the aspects of communication that communicators don’t naturally focus on, don’t notice, and even can’t notice in the process of communication.

So where am I going with all of this?

One of the most frequent aspects of text analysis is a word frequency count. This is often used as a pseudo content analysis, but that is a very problematic extrapolation, for reasons that I’ve mentioned before in this blog and in my paper on this topic. However, word frequency counts are a good way of extrapolating native terms from which to do targeting.
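A minimal sketch of how frequency counts might surface native terms follows. All of the data and the stoplist here are invented for illustration: count the words in a set of open-ended responses, drop the function words, and review what rises to the top as candidate wording for the questionnaire.

```python
from collections import Counter

# Surface candidate "native terms" from open-ended responses by counting
# words and filtering out a stoplist of function words. Invented data.
STOPLIST = {"the", "a", "an", "and", "or", "to", "of", "in",
            "it", "is", "was", "for", "my", "i"}

responses = [
    "the reunion party for the kids was great",
    "i loved the farewell reunion and the party",
    "the director organized the reunion",
]

counts = Counter(word
                 for response in responses
                 for word in response.lower().split()
                 if word not in STOPLIST)

candidate_terms = [word for word, _ in counts.most_common(5)]
print(candidate_terms)  # "reunion" and "party" rise to the top
```

The output is only a starting point: a human still has to judge which of the frequent terms are genuinely the respondents’ own names for things, as in the daycare story above.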

Text analytics aren’t representative, but they can be more representative than many of the other predevelopment methods that we employ. Their best use may not be so much as a supplement to our data analysis as a precursor to our data collection.

However, that data has more uses than this.

It CAN be used as a supplement to data analysis as well, but not by going broad. By going DEEP. Taking segments and applying discourse analytic methodology can be a way of supplementing the numbers and figures collected with surveys with a deeper understanding of the dynamics of the respondent population.

Using this perspective, linguistics has a role both in the development of tailored questionnaires and in the in-depth analysis of the responses and respondents.

Framing; an Important Aspect of Discourse Analysis

One aspect of discourse analysis that is particularly easy to connect with is framing. Framing is a term that we hear very often in public discourse, as in “How was that issue framed?” or “How should this idea be framed if we want people to buy into it?” Framing in discourse analysis is similar, but it is a much more useful concept.

We understand a frame as ‘what is going on.’ This can be very simple. I can see you on the street and greet you. We can both think of it simply as a greeting frame, and we can have similar ideas about what that greeting frame should look like. I can say “Hey there, nice to see you!” and you can answer back “Nice to see you!” We can both then smile at each other, and keep walking, both smiling for having seen each other.

But frames are much more complicated than that, for the most part. Each of the interactants has their own idea of what the frame of the interaction is, and each has their own set of knowledge about what the frame entails. It would be easy for us to have different sets of knowledge or expectations regarding the frame. We do, after all, have a lifetime of separate experiences. We also could disagree about the framing of our interaction. Let’s say that I think we are simply greeting and passing, and you think we are greeting and then starting a conversation. Or what if we decide to enter a nearby bar, and I think we are on a date and you do not?

Frames also have layers. We might love to joke, but we will joke differently in a job interview than we will at a bar. Joking in a job interview is what we call an embedded frame in discourse analysis. The layering of frames is an interesting point of analysis as well, because we may or may not have the same idea of what the outer frame of our interaction is.

I believe it was Erving Goffman who pointed out that the range of emotions we access is contingent on the frame we are working within. Truly, anger in an office is generally quite tame compared to anger at home…

Framing accounts for successful communication and misunderstandings. It’s an especially useful tool with which to evaluate the success or failure of an interaction. It is especially interesting to look at framing in terms of the cuing that interactants do. How do we signal a change in frame? Are those signals recognized as they were intended? Are they accepted or rejected?

Framing is also an interesting way to view relationships. It is easy, especially early in a relationship, to assume that your partner shares your frames and the knowledge about them. Similarly, it is easy to assume that your partner shares the same priorities that you do.

Unfortunately, we tend to judge people by the frames that we have activated. So if I frame our interaction as ‘cleaning the kitchen’ and you view it as ‘chatting in the kitchen while fiddling with the dishcloth,’ I am likely to judge your performance as a cleaner negatively. Similarly, in a job interview situation, framing problems are often not recognized by the interviewers, causing the interviewee to appear incompetent.

Recognizing framing issues is an important element of what discourse analysts do in their professional lives when analyzing communication.