Rethinking demographics in research

I read a blog post on the LoveStats blog today that referred to one of the most frequently cited critiques of social media research: the lack of demographic information.

In traditional survey research, demographic information is a critically important piece of the analysis. We often ask questions like “Yes, 50% of the respondents said they had encountered gender harassment, but what is the breakdown by gender?” The prospect of not having this demographic information is a large enough game changer to cast the field of social media research into the shade.

Here I’d like to take a sidestep and borrow a debate from linguistics. In the linguistic subfield of conversation analysis, there are two main streams of thought about analysis. One believes in gathering as much outside data as possible, often through ethnographic research, to inform a detailed understanding of the conversation. The second stream is rooted in the purity of the data. This stream emphasizes our dynamic construction of identity over the stability of identity. Its underlying foundation is that we continually construct and reconstruct the most important and relevant elements of our identity in the process of our interaction. Take, for example, a study of an interaction between a doctor and a patient. The first school would bring into the analysis a body of knowledge about interactions between doctors and patients. The second would believe that this body of knowledge is potentially irrelevant or even corrupting to the analysis, and that if the relationship is in fact relevant, it will be constructed within the excerpt of study. This raises the question: are all interactions between doctors and patients primarily doctor-patient interactions? We could address this further through the concept of framing and embedded frames (à la Goffman), but we won’t do that right now.

Instead, I’ll ask another question:
If we are studying gender discrimination, is it necessary to have a variable for gender within our datasource?

My kneejerk reaction to this question, because of my quantitative background, is yes. But looking deeper: is gender always relevant? This does strongly depend on the datasource, so let’s assume for this example that the stimulus was a question on a survey that was not directly about discrimination, but rather more general (e.g. “Additional Comments:”).

What if we took that second CA approach, the purist approach, and said that where gender is applicable to the response, it will be constructed within that response? The question now becomes ‘how is gender constructed within a response?’ This is a beautiful and interesting question for a linguist, and it may be a question that much better fits the underlying data and provides deeper insight into it. It also turns the age-old analytic strategy on its head. Now we can ask whether a priori assumptions that the demographics could or do matter are just rote research or truly the productive and informative measures that we’ve built them up to be.

I believe that this is a key difference between analysis types. In the qualitative analysis of open-ended survey questions, it isn’t very meaningful to say x% of the respondents mentioned z, and y% of the respondents mentioned d, because a non-mention of z or d is not really meaningful. Instead we go deeper into the data to see what was said about d or z. So the goal is not prevalence, but description. On the other hand, prevalence is a hugely important aspect of quantitative analysis, as are other fun statistics which feed off of demographic variables.

The lesson in all of this is to think carefully about what is meaningful information that is relevant to your analysis and not to make assumptions across analytic strategies.

Do you ever think about interfaces? Because I do. All the time.

Did you ever see the movie Singles? It came out in the early 90s, shortly before the alternative scene really blew up and I dyed [part of] my hair blue and thought seriously about piercings. Singles was a part of the growth of the alternative movement. In the movie, there is a moment when one character says to another “Do you ever think about traffic? Because I do. All the time.” I spent quite a bit of time obsessing over that line, about what it meant, and, more deeply, what it signaled.

I still think about that line. As I drove toward the turnoff to my mom’s street during our 4th of July vacation, I saw what looked like the turn lane for her street, but it was actually an intersectionless left-turning split immediately preceding the real left turn lane for her street. It threw me off every time, and I kept remembering that romantic moment in Singles when the two characters were getting to know each other’s quirks, and the man was talking about traffic. And it was okay, even cool, to be quirky and think or talk about traffic, even during a romantic moment.

I don’t think about traffic often. But I am no less quirky. Lately, I tend to think about interfaces. Before my first brush with NLP (Natural Language Processing), I thought quite a bit about alternatives to e-mail. Since I discovered the world of text analytics, I have been thinking quite a bit about ways to integrate the knowledge across different fields about methods for text analysis and the needs of quantitative and qualitative researchers. I want to think outside of the sentiment box, because I believe that sentiment analysis does not fully address the underlying richness of textual data. I want to find a way to give researchers what they need, not what they think they want. Recently, my thinking on this topic has flipped. Instead of thinking from the data end, or the analytic possibilities end, or about what programs already exist and what they do, I have started to think about interfaces. This feels like a real epiphany. Once we think about the problem from an interface, or user experience perspective, we can better utilize existing technology and harness user expectations.

Have you read the new Imagine book about how creativity works? I believe that this strategy is the natural step after spending time zoning out on the web, thinking, or not thinking, about research. The more time you cruise, the better feel you develop for what works and what doesn’t, the more you learn what to expect. Interfaces are simply the masks we put on datasets of all sorts. The data could be the world wide web as a whole, results from a site or time period, a database of merchandise, or even a set of open ended survey responses. The goal is to streamline the searching interface and then make it available for use on any number of datasets. We use NLP every day when we search the internet, or shop. We understand it intuitively. Why don’t we extend that understanding to text analysis?

I find myself thinking about what this interface should look like and what I want this program to do.

Not traffic, not as romantic. But still quirky and all-encompassing.

Question Writing is an Art

As a survey researcher, I like to participate in surveys with enough regularity to keep current on any trends in methodology. As a web designer, I know that an aspect of successful design is seamlessness with the visitor’s expectations. So if the survey design realm has moved toward submit buttons in the upper right-hand corner of individual pages, your idea (no matter how clever) to put a submit button on the upper left can result in a disconnect on the part of the user that will affect their behavior on the page. In fact, the survey design world has evolved quite a bit in the last few years, and it is easy to design something that reflects poorly on the quality of your research endeavor. But these design concerns are less of an issue than they have been, because most researchers are using templates.

Yet there is still value in keeping current.

And sometimes we encounter questions that lend themselves to an explanation of the importance of question writing. These questions are a gift for a field that is so difficult to describe in terms of knowledge and skills!

Here is a question I encountered today (I won’t reveal the source):

How often do you purchase potato chips when you eat out at any quick service and fast food restaurants?

2x a week or more
1x a week
1x every 2-3 weeks
1x a month
1x every 2-3 months
Less than 1x every 3 months
Never

This is a prime example of a double-barreled question, and it is also an especially difficult question to answer. In my case, I rarely eat at quick service restaurants, especially sandwich places, like this one, that offer potato chips. When I do eat at them, I am tempted to order chips. About half the time I will give in to the temptation with a bag of Sun Chips, which I’m pretty sure are not made of potato.

In bigger firms with more time to work through the survey process, this information would come out in a cognitive interview or think-aloud during the pretesting phase. Many firms, however, have staunchly resisted these important steps in the surveying process because of their time and expense. It is important to note that the time and expense involved with trying to make usable answers out of poorly written questions can be immense.

I have spent some time thinking about alternatives to cognitive testing, because I have some close experience with places that do not use this method. I suspect that this is a good place for text analytics, because of the power of reaching people quickly and potentially cheaply (depending on your embedded TA processes). Although oftentimes we are nervous about web analytics because of their representativeness, the bar for representativeness is significantly lower in the pretesting stage than in the analysis phase.

But, no matter what pretesting model you choose, it is important to look closely at the questions that you are asking. Are you asking a single question, or would these questions be better separated out into a series?

How often do you eat at quick service sandwich restaurants?

When you eat at quick service restaurants, do you order [potato] chips?

What kind of [potato] chips do you order?

The lesson of all of this is that question writing is important, and the questions we write in surveys will determine the kind of survey responses we receive and the usability of our answers.

To go big, first think small

We use language all of the time. Because of this, we are all experts in language use. As native speakers of a language, we are experts in the intricacies of that language.

Why, then, do people study linguistics? Aren’t we all linguists?

Absolutely not.

We are experts in *using* language, but we are not experts in the methods we employ. Believe it or not, much of the process of speaking and hearing is not conscious. If it were, we would be sensorily overwhelmed with the sheer volume of words around us. Instead, listening comprehension involves a process of merging what we expect to hear with what we gauge to be the most important elements of what we do hear. The process of speaking involves merging our estimates of what the people we communicate with know and expect to hear with our understanding of the social expectations surrounding our words and our relationships, and distilling these sources into a workable expression. The hearer will reconstruct elements of this process using cues that are sometimes conscious and sometimes not.

We often think of language as simple and mechanistic, but it is not simple at all. As conversational analysts, our job is to study conversation that we have access to in an attempt to reconstruct the elements that constituted the interaction. Even small chunks of conversation encode quite a bit of information.

The process of conversation analysis is very much contrary to our sense of language as regular language users. This makes the process of explaining our research to people outside our field difficult. It is difficult to justify the research, and it is difficult to explain why such small pieces of data can be so useful, when most other fields of research rely on greater volumes of data.

In fact, a greater volume of data can be more harmful than helpful in conversation analysis. Conversation is heavily dependent on its context; on the people conversing, their relationship, their expectations, their experiences that day, the things on their mind, what they expect from each other and the situation, their understanding of language and expectations, and more. The same sentence can have greatly different meanings once those factors are taken into account.

At a time when there is so much talk of the glory of big data, it is especially important to keep in mind the contributions of small data. These contributions challenge the utility and promise of big data, and if they can be captured in creative ways, they will be the true promise of the field.

Not what language users expect to see, but rather what we use every day, more or less consciously.

Data Journalism, like photography, “involves selection, filtering, framing, composition and emphasis”

Beautiful:

“Creating a good piece of data journalism or a good data-driven app is often more like an art than a science. Like photography, it involves selection, filtering, framing, composition and emphasis. It involves making sources sing and pursuing truth – and truth often doesn’t come easily.” -Jonathan Gray

Whole article:

http://www.guardian.co.uk/news/datablog/2012/may/31/data-journalism-focused-critical

Truly, at a time when the buzz about big data is at such a peak, it is nice to hear a voice of reason and temperance! Folks: big data will not do all that it is talked up to do. It will, in fact, do something surprising and different. And that something will come from the interdisciplinary thought leaders in fields like natural language processing and linguistics. That *something,* not the data itself, will be the new oil.

Patterning in Language, revisited

Language can be pretty mindblowing.
In my paper on the potential of Natural Language Processing (NLP) for social science research, I called NLP a kind of oil rig for the vast reserves of data that we are increasingly desperate to tap.
Sometimes the rigging runs smoothly. This week I read a chapter about compliments in Linguistics at Work. In the chapter, Nessa Wolfson describes her investigations into the patterning of compliments in English. Although some of her commentary in this chapter seems far off base to me (I’ll address this in another post), her quantitative findings are strong. She discovered that 54% of the compliments in her corpus fell into a single syntactic pattern, 85% of the compliments fell into three syntactic patterns, and 97% fell into a total of nine syntactic patterns. She also found that 86% of the compliments with a syntactically positive verb used just two common verbs, ‘like’ and ‘love.’ And she discovered some strong patterning in the adjectival compliments as well.


Linguistic patterns such as these are generally not something that native speakers of a language are aware of, yet they offer great potential to English Language Learners and NLP programmers. It is precisely patterns such as these that NLP programmers use in order to mine information from large bodies of textual data. When language is patterned as strongly as this, it is significantly easier to mine, and it makes a strong case for the effectiveness of NLP as a rig and syntax as the bones of the rig.
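As a toy illustration of how this kind of patterning lends itself to mining (my own sketch, not Wolfson's method), here is a minimal regex matcher for her most frequent compliment frame, roughly “NP is/looks (really) ADJ.” The adjective list is invented for the example; a real system would rely on part-of-speech tagging rather than a hand-built lexicon.

```python
import re

# Invented mini-lexicon of positively oriented adjectives for illustration.
POSITIVE_ADJ = {"nice", "great", "beautiful", "lovely", "good"}

# Frame: a one- or two-word noun phrase, "is" or "looks",
# an optional "really", then an adjective.
PATTERN = re.compile(
    r"\b(?P<np>\w+(?:\s\w+)?)\s(?:is|looks)\s(?:really\s)?(?P<adj>\w+)\b",
    re.IGNORECASE,
)

def find_compliments(text):
    """Return (noun phrase, adjective) pairs that fit the compliment frame."""
    hits = []
    for m in PATTERN.finditer(text):
        if m.group("adj").lower() in POSITIVE_ADJ:
            hits.append((m.group("np"), m.group("adj")))
    return hits

print(find_compliments("Your haircut is really nice. The weather is awful."))
# → [('Your haircut', 'nice')]
```

Even this crude sketch shows the leverage a strong syntactic pattern gives a miner: one template plus a lexicon already separates the compliment from the complaint.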


But as strongly as language patterns in some areas, it is also profoundly conflicted in others.


This week I attended a CLIP Colloquium at the University of Maryland. The speaker was Jan Wiebe, and the title of her talk was ‘Subjectivity and Sentiment Analysis: From Words to Discourse.’ In an information-packed, hour-long talk, Wiebe essentially covered her long history with sentiment analysis and discussed her current research (I took 11 pages of notes! Totally mindblowing). Wiebe approached one of the essential struggles of linguistics, the spectrum between language out of context and language in context (from words to discourse), from a computer science perspective. She spoke about the programming tools and transformations that she had developed and worked with in order to take data out of context in an automated way and build their meaning back in a patterned way. For each stage or transformation, she spoke of the complications and potential errors she had encountered.


She spoke of her team’s efforts to tag word senses in WordNet by their subjective or objective orientation and positive and negative meanings. Her team has created a downloadable subjectivity lexicon, and they hope to make a subjectivity phrase classifier available this spring. For the sense labeling, they decided to use coarser groupings than WordNet in order to improve accuracy, so instead of associating words with their senses, they associate them only along usage domains, or s/o (subjective/objective) and p/n/n (positive/negative/neutral). This increases the accuracy of the tags, but doesn’t account for context effects such as polarity shifting, e.g. from wonderfully (+) horrid (-) to wonderfully horrid (+). The subjectivity phrase classifier will be a next step in the transition between prior polarity (out of context, word-level orientation, as in the subjectivity lexicon) and contextual polarity (the ultimate polarity of the sentence, taking into account phrase dependency, etc.), or longer distance negation such as “not only good, but amazing”.
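The prior-versus-contextual distinction can be sketched in a few lines (my own toy illustration, not Wiebe's classifier; the lexicon entries and the single shift rule are invented for the example):

```python
# Toy prior-polarity lexicon: word-level orientation, out of context.
PRIOR = {"wonderfully": "+", "horrid": "-", "amazing": "+", "good": "+"}

def contextual_polarity(phrase):
    """Very rough contextual polarity for a short phrase."""
    words = phrase.lower().split()
    priors = [PRIOR.get(w, "0") for w in words]
    # Polarity-shifting rule: a positive intensifier modifying a negative
    # head reads as an enthusiastic (positive) evaluation overall,
    # as in "wonderfully horrid".
    if priors == ["+", "-"]:
        return "+"
    # Otherwise fall back to the head word's prior polarity.
    return priors[-1]

print(contextual_polarity("wonderfully horrid"))  # → +
print(contextual_polarity("horrid"))              # → -
```

Even this caricature makes the gap visible: the lexicon alone gets “horrid” right and “wonderfully horrid” wrong, and every step toward context means stacking more such rules (negation, dependency, discourse), each with its own error rate.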


She also spoke of her team’s research into debate sites. They annotate individual postings by their target relationships (same/alternative/part/anaphora, etc.), p/n/n, and reinforcing vs. non-reinforcing. So, for example, in a debate between BlackBerrys and iPhones, where the sides are predetermined by the setup of the site, she can connect relationships to stances: “fast keyboard” is a positive stance toward BlackBerry, “slower keyboard” reflects a negative orientation toward an iPhone, and a pro-iPhone post that mentions the “fast keyboard” is making a concession rather than building an argument in favor of BlackBerry.


In sum, she discussed the transformations between words out of context and words in context, a transformation which is far from complete. She discussed the subjectivity/objectivity of individual words, but then showed how these could be transformed through context. She showed the way phrases with the same syntactic structure could have completely different meanings. She spoke of the difficulty of isolating targets or the subject of the speech. She spoke of the interdependent structures in discourse, and the way that each compounding phrase in a sentence can change the overall directionality. She spoke of her efforts to account for these more complex structures with a phrase level classifier, and she spoke of her research into more indirect references in language. Each of these steps is a separate area of research, each compounding error on the path between words and discourse.


Patterning such as Wolfson found shows the great potential of NLP, but research such as Wiebe’s shows the complicated nature of putting these patterns into use. In fact, this was exactly my experience working with NLP. NLP is a constant struggle between linguistic patterning and the complicated nature of discourse. It is an important and growing field, but the problems it poses will not be quickly resolved. The rigs are being built, but the quality of the oil is still dubious.