Rethinking the Future of Survey Methodology: Finding a Place for Linguistics

Where is the future of survey research?

The technical context in which survey methodology lives is evolving quickly. Where will surveys fit into this context in the future?

In the past, surveys were a valuable and unique source of data. As society became more focused on customization and on understanding particular populations, surveys became an invaluable tool for data collection. But at this point, we are inundated with data. The amount of content generated every minute on the net is staggering. In an environment where content is so omnipresent, what role can surveys play? How can we justify our particular brand of data?

Survey methodology has become structured around a set of ethics and practices, including representativeness and respect for the respondents. Without that structure, the most vocal win out, and the resulting picture is not representative.

I recently had the pleasure of reading a bit of Don Dillman’s rewrite of ‘The Tailored Design Method,’ the defining classic reference in survey research. The book includes research-based strategies for designing and targeting a survey population with the highest possible degree of success, and it is often referred to as a bible of sorts by survey practitioners. This time around, I began to think about why the suggestions in the book are so successful. I believe their success has to do with his working title: it is a tailored method, designed around the respondents. And, indeed, the book borrows some principles of respondent- or user-centered design.

So where does text analysis fit into that? In a context where content is increasingly targeted, and people expect content to be increasingly targeted, surveys too need to be targeted and tailored for respondents. In an era where the cost–benefit equation of survey response is increasingly weighted against us, where potential respondents are inundated with web content and web surveys, any misstep can be enough to drive respondents away and even to cause a potential viral backlash. It has never been more important to get it right.

And yet we are pressured not to get it right but to get it fast. So the traditional methods of focus groups and cognitive interviews are increasingly too costly and too time-consuming to use. But their role is an important one: they add a layer of quality control to the surveys we produce. They keep us from thinking that because we are the survey experts we are also the respondent experts and the response experts.

A good example of this is Schaeffer’s key idea of native terms. I have a brief story to illustrate it. Our building’s daycare is about to close, and I have been involved in many discussions about the impact of its closure, as well as the planning and musing about the upcoming final farewell reunion celebration. The other day I ran into one of the kids’ grandparents, someone with whom I have frequently discussed the daycare. She asked me if I was planning to go to Audrey’s party. I told her I didn’t know anything about it and wasn’t planning to go. I said this because I associate the terms she used with retirement celebrations. I assumed that she was talking about a party specifically in honor of the director, not the reunion for all of the kids.

It’s easy as a survey developer to assume that if you ask something that is near enough to what you want to know, the respondent can extrapolate the rest. But that belies the actual way in which we communicate. When it comes to communication, we are inundated with verbal information, and we only really consciously take in the gloss of it. That’s what linguistics is all about: unpacking the aspects of communication that communicators don’t naturally focus on, don’t notice, and even can’t notice in the process of communication.

So where am I going with all of this?

One of the most frequent forms of text analysis is a word frequency count. This is often used as a pseudo content analysis, but that is a very problematic extrapolation, for reasons that I’ve mentioned before in this blog and in my paper on this topic. However, word frequency counts are a good way of extracting native terms from which to do targeting.
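As a rough sketch of what that native-term extraction might look like (the function name, sample responses, and stopword list here are all invented for illustration, not taken from any real study), a simple frequency count over open-ended responses can surface the words respondents actually use:

```python
from collections import Counter
import re

def native_term_candidates(texts, stopwords, n=20):
    """Count word frequencies across open-ended responses and
    return the most common content words as candidate native terms."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in stopwords)
    return counts.most_common(n)

# Toy responses, echoing the daycare story above
responses = [
    "We call it the reunion party for the kids",
    "The farewell party for the daycare kids",
    "A reunion for all of the daycare families",
]
stopwords = {"we", "it", "the", "for", "a", "of", "all"}
print(native_term_candidates(responses, stopwords, n=5))
```

The output would suggest ‘reunion,’ ‘party,’ ‘kids,’ and ‘daycare’ as candidate native terms for a tailored questionnaire, rather than whatever official label the researcher might have reached for.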

Text analytics aren’t representative, but they have the potential to be more representative than many of the other predevelopment methods that we employ. Their best use may not be so much as a supplement to our data analysis as a precursor to our data collection.

However, that data has more uses than this.

It CAN be used as a supplement to data analysis as well, but not by going broad. By going DEEP. Taking segments and applying discourse analytic methodology can be a way of supplementing the numbers and figures collected with surveys with a deeper understanding of the dynamics of the respondent population.

Using this perspective, linguistics has a role both in the development of tailored questionnaires and in the in-depth analysis of the responses and respondents.


More work on Twitter, Google Searches and Text Analytics in Survey research

I am so excited to see this blog post and read the paper that it was based on!

 

blog post:

https://blogs.rti.org/surveypost/2012/01/04/can-surveillance-of-tweets-and-google-searches-substitute-survey-research-2

paper:

http://www.rti.org/pubs/twitter_google_search_surveillance.pdf

 

Kudos to RTI for continuing to carve out a place for text analytics in the future of survey research!

Word Clouds

Here is an interesting application of word clouds. It is a word cloud analysis of Public Opinion Quarterly, the leading journal in Public Opinion Research:

https://blogs.rti.org/surveypost/2012/02/26/a-visual-history-of-poq-1937-to-present/

Word clouds are a fast and easy tool that produces a visual picture of the most frequently used words in a body of text, or ‘bag of words.’ They are frequently used as a tool for content analysis.

On my ‘my research’ page above, there is a link to a paper I wrote about text analytic strategies. In the paper, I addressed word clouds in great detail. I did that because word clouds are fast gaining popularity and recognition in the survey research community and in society at large. However, word clouds have a lot of limitations that are rarely considered by the people who use them.

One complication of a word cloud is that word frequency alone doesn’t speak to the particular ways in which a word was used. So when you see ‘public,’ you might think of the public/private dichotomy that is such a big debate in the current public sphere. However, in the context of a survey, ‘public’ could just as easily be used as a noun, referring to potential respondents. While word clouds appear to give a lot of information in a quick visual, the picture underlying that information can be clouded by the complexities of language use.
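The classic corpus-linguistics check for exactly this problem is a keyword-in-context (KWIC) listing, which shows each occurrence of a word with its surrounding words. Here is a minimal sketch (the function and the sample sentence are my own invention for illustration) that would let you see which sense of ‘public’ is actually in play:

```python
import re

def concordance(text, keyword, width=3):
    """Show each occurrence of a keyword with a few words of
    context on either side (a keyword-in-context listing)."""
    words = re.findall(r"\w+", text.lower())
    lines = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            lines.append(f"{left} [{w}] {right}")
    return lines

sample = ("The public debate over public versus private data "
          "matters to the survey public we hope to reach.")
for line in concordance(sample, "public"):
    print(line)
```

A listing like this makes it immediately visible that the first two hits are the adjective (the public/private sense) while the last is the noun (the respondents sense), a distinction the word cloud flattens away.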

I don’t think that these pictures can map directly onto the underlying topical landscape, but they can provide a quick window into the specific words that we have used over the years and the changes in our lexicon over time.

Another CLIP

I missed today’s CLIP. Too much work and too much rain. But the description of it made it sound especially interesting, because the speaker is obviously really grappling with the concept of context. It would have been interesting to have heard what he did with it and how he used linguistics (he specifically mentioned the field, albeit probably not in a discourse analytic type of way). I will have to follow up with him or with his papers. Thankfully, he’s local!

Here’s the summary:

February 29: Vlad Eidelman, Unsupervised Textual Analysis with Rich Features

Learning how to properly partition a set of documents into categories in an unsupervised manner is quite challenging, since documents are inherently multidimensional, and a given set of documents can be correctly partitioned along a number of dimensions, depending on the criterion. Since the partition criterion for a supervised model is encoded in the data via the class labels, even the standard information retrieval representation of a document as a vector of term frequencies is sufficient for many state-of-the-art classification models. This representation is especially well suited for the most common application: topic (or thematic) analysis, where term presence is highly indicative of class. Furthermore, for tasks where term presence may not be adequate, such as sentiment or perspective analysis, discriminative models have the ability to incorporate complex features, allowing them to generalize and adapt to the specific domain. In the case where we do not have access to resources for supervised training, we must turn to unsupervised clustering models. Clustering models rely almost exclusively on a simple bag-of-words vector representation, which performs well for topic analysis, but unfortunately, is not guaranteed to perform well for a different task.

In this talk, I will present a feature-enhanced unsupervised model for categorizing textual data. The presented model allows for the integration of arbitrary features of the observations within a document. While in generative models the observed context is usually a single unigram, or bigram, our model can robustly expand the context to extract features from a block of text of larger size. After presenting the model derivation, I will describe the use of complex automatically derived linguistic and statistical features across three practical tasks with different criterion: perspective, sentiment, and topic analysis. I show that by introducing domain relevant features, we can guide the model towards the task-specific partition we want to learn. For each task, our feature enhanced model outperforms strong baselines and state-of-the-art models.

Bio: Vladimir Eidelman is a fourth-year Ph.D. student in the Department of Computer Science at the University of Maryland, working primarily with Philip Resnik. He received his B.S. in Computer Science and Philosophy from Columbia University in 2008 and an M.S. in Computer Science from UMD in 2010. His research interests are in machine learning and natural language processing problems, such as machine translation, structured prediction, and unsupervised learning. He is the recipient of the National Science Foundation Graduate Research and National Defense Science and Engineering Graduate Fellowships.

Funny Focus Group moment on Oscars

It’s not often that aspects of survey research make it into the public sphere. Last night’s Oscars included some “recovered focus group footage” from the Wizard of Oz. It’s hilarious, and there’s a good reason why: humor often happens when occurrences don’t match expectations. We tend to expect every member of a focus group to be reasonable and representative, but reality just isn’t like that.

 

Anyway, enjoy!

Framing: an Important Aspect of Discourse Analysis

One aspect of discourse analysis that is particularly easy to connect with is framing. Framing is a term that we hear very often in public discourse, as in “How was that issue framed?” or “How should this idea be framed if we want people to buy into it?” Framing in discourse analysis is similar, but it is a much more useful concept.

We understand a frame as ‘what is going on.’ This can be very simple. I can see you on the street and greet you. We can both think of it simply as a greeting frame, and we can have similar ideas about what that greeting frame should look like. I can say “Hey there, nice to see you!” and you can answer back “Nice to see you!” We can both then smile at each other, and keep walking, both smiling for having seen each other.

But frames are much more complicated than that, for the most part. Each of the interactants has their own idea of what the frame of the interaction is, and each has their own set of knowledge about what the frame entails. It would be easy for us to have different sets of knowledge or expectations regarding the frame. We do, after all, have a lifetime of separate experiences. We could also disagree about the framing of our interaction. What if I think we are simply greeting and passing, and you think we are greeting and then starting a conversation? Or what if we decide to enter a nearby bar, and I think we are on a date and you do not?

Frames also have layers. We might love to joke, but we will joke differently in a job interview than we will at a bar. Joking in a job interview is what we call an embedded frame in discourse analysis. The layering of frames is an interesting point of analysis as well, because we may or may not have the same idea of what the outer frame of our interaction is.

I believe it was Erving Goffman who pointed out that the range of emotions we access is contingent on the frame we are working within. Truly, anger in an office is generally quite tame compared to anger at home…

Framing accounts for both successful communication and misunderstandings. It’s an especially useful tool with which to evaluate the success or failure of an interaction. It is especially interesting to look at framing in terms of the cuing that interactants do. How do we signal a change in frame? Are those signals recognized as they were intended? Are they accepted or rejected?

Framing is also an interesting way to view relationships. It is easy, especially early in a relationship, to assume that your partner shares your frames and the knowledge about them. Similarly, it is easy to assume that your partner shares the same priorities that you do.

Unfortunately, we tend to judge people by the frames that we have activated. So if I frame our interaction as ‘cleaning the kitchen’ and you view it as ‘chatting in the kitchen while fiddling with the dishcloth,’ I am likely to judge your performance as a cleaner negatively. Similarly, in a job interview situation, framing problems are often not recognized by the interviewers, causing the interviewee to appear incompetent.

Recognizing framing issues is an important element of what discourse analysts do in their professional lives when analyzing communication.

Observations on another CLIP event: ESL and MT

Today I attended another CLIP colloquium at the University of Maryland:

Feb 22: Rebecca Hwa, The Role of Machine Translation in Modeling English as a Second Language (ESL) Writings

She addressed these research questions:

1. How patterned are the errors of English language learners?

1a. Could ‘English with mistakes’ be used as an input for machine translation?

1b. Could that be used to improve MT outputs?

1c. Could these findings be used for EFL training?

 

Her presentation made me think a lot about the role of linguistics in this type of work and about the nature of English.

First, I am coming to firmly believe that the best text processing is done in partnership between linguists and computer scientists. Linguistics provides the most thorough and reliable frame for computer scientists to key off of, and once you stray from the nature of what you’re trying to represent, you end up astray.

So, for example, in the first part of her research presentation she talked about a project involving machine translation and English language learners of all backgrounds. One woman in the audience kept asking questions about the conglomeration of non-native English speakers, and I assumed she was from the English department. The issue of mistakes in language use is a huge one, and a focus has to be chosen from which to do the work. Maybe language background would be a more productive way to narrow the focus, as it would allow for much more specific structural guidance and bodies of knowledge on language interference.

Second, she spoke about Chinese English language learners in particular and her investigation of lexical choice. Often English language learners’ written English is marked by lexical choices that appear strange to native English speakers. Her hypothesis was that the words that were used in place of the correct words were similar in some way to the correct words, most likely by context. She played a lot with the definition of context; was it proximity? Was it a specific grammatical relationship? This discussion was fascinating, but probably could have benefited from some restrictions on the context of the errors she was targeting. Again, this is from the linguistics end of the linguistics—computer science spectrum.

Her speech made me think a lot about the nature of English. I often think about what it means to be a global language. English is spoken in many places where there are not native speakers, and it is spoken in many places that we don’t traditionally think of as native English places. Often the English that arises from these contexts is judged to be full of errors, but I don’t necessarily agree with this. Instead, I would ask two questions:

1. Is the variation patterned?

2. Is communication successful?

If the answer to both questions is yes, then I don’t think that the speaker is producing errors so much as a different variety of English. Varieties of English are not all treated with the same respect, but I suspect that the reasons for this have more to do with the prejudices of the person judging the grammar than with any deficiency on the part of the speaker.

AAPOR Conference Preliminary Program is Up!

This is exciting!

The conference theme this year is New Frontiers in Public Opinion Research, and now we can get a first glimpse at AAPOR’s take on the future of the field! There are quite a few sessions on web survey design, paradata, alternative data sources, and the potential of social media. It will be interesting to see which of the sessions will have a sociolinguistic bent, because many certainly have that potential. There are also sessions on interviewer effects and context effects, which may even use Conversation Analysis (CA) approaches.

http://www.aapor.org/AM/Template.cfm?Section=AAPOR_Annual_Conference&Template=/CM/ContentDisplay.cfm&ContentID=4986