Can they just get along? Situated Cognition and Survey Response

Finally, I’m going to take a moment to talk about Norbert Schwarz’s JPSM Distinguished Lecture on March 30! I’ve attended a few events and had a few experiences lately that I’m eager to blog about, but sometimes life has plans for us that don’t involve blogging. Today, I would say, is no different, except that I woke up thinking about this lecture!

Ok, enough about me, more about Schwarz.

I should start by saying that I am a longtime fan of Schwarz. In Fall 2009, I had just discovered the MLC program, finished what was a whirlwind application process, and was first trying to wrap my head around the field of sociolinguistics and its intersection with my career in survey methodology. I had attended a presentation of an ethnography of communication pilot study to the McDonough School of Business, and, to my great shock, I came across a survey methodology paper that spoke of the Logic of Conversation and the role of Gricean maxims in survey responses. This fantastic piece is the work of Norbert Schwarz, and I’ve kept it nearby ever since. In it, Schwarz addresses the conversational expectations of survey respondents and shows how they respond not only to the question at hand, but also to those expectations.

In every survey, it’s common to look at some of the responses and wonder how in the world they could have come about. I addressed this in an earlier blog post, where one researcher had gone so far as to call respondents stupid. Oftentimes we think of respondents “getting it right” or “getting it wrong.” But there is a larger phenomenon underlying these apparently strange responses, and it’s something we experience ourselves whenever we attempt to respond to surveys.

We write survey questions with a mechanistic expectation: that if we ask a question, we will hear back the answer to that question. But communication is not mechanistic, and we are not necessarily aware of this. We’re aware of misunderstandings, but we’re rarely aware of the narrow sphere of focus and the interpretive frames that we apply to every utterance we hear and utter. This is no fault of our own; it’s a survival tool. We simply cannot process all of the information we’re constantly inundated with.

In survey research, we know that small differences in question format can influence responses. We know that changing a scale changes the numeric range of the responses, and that changing the labels on a scalar question changes the results. We know that some answers appear to be absolute contradictions and seem to us to be impossible. These are especially large challenges for us, and they are the purview of linguistics.

Schwarz, however, is not a linguist. He is a cognitive scientist. And his lecture was not about the linguistic basis behind apparently wonky response phenomena. Instead, he spoke about situated cognition.

Situated cognition makes a lot of intuitive sense. It is a well-documented psychological phenomenon: we don’t hold attitudes, beliefs, and responses at a fixed location in our minds; instead, we create or recreate them each time. This process leaves much more room for the influence of “what’s on our mind,” making situational or contextual factors much more important and decreasing the reliability, or repeatability, of survey responses. This is not a hard pill for someone (me) with a background in cognitive science and sociolinguistics to swallow, but the effect on the audience was remarkable. How does someone from a field that thrives on the mechanistic nature of responses take the suggestion that what they’re measuring is not a distinctly measurable entity so much as a complicated, potentially unreliable act of nature?

One of the discussants offered, as an example of a stable opinion, a couple that he was not very fond of. I believe this example lends itself well to further exploration. If he had just met the couple and had had a negative experience with them, his reported opinion of the couple would depend on the degree of negativity of that experience, his predisposition to give or not give them the benefit of the doubt, and his degree of concern about expressing a negative opinion to the interviewer or survey researchers. From that point on, these factors would be increasingly influenced by his further experiences with the couple, the degree of negativity, positivity, or neutrality of those experiences, and their recency and salience. Essentially, his response would reflect a complicated underlying equation and be the output of situated cognition.
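To make that idea concrete, here is a minimal toy sketch (in Python) of what such an “underlying equation” might look like. The factor names, weights, and adjustments are purely illustrative assumptions of mine, not a model Schwarz presented; the point is only that the reported opinion is recomputed from whatever experiences are accessible, and weighted, at the moment of asking.

```python
# Toy sketch of a "situated" evaluation: the reported opinion is not a stored
# value but is recomputed from whatever is accessible at the moment of asking.
# All factor names and weights are illustrative assumptions, not Schwarz's model.
from dataclasses import dataclass

@dataclass
class Experience:
    valence: float   # -1.0 (very negative) to +1.0 (very positive)
    recency: float   # 0.0 (long ago) to 1.0 (just happened)
    salience: float  # 0.0 (barely noticed) to 1.0 (vivid, memorable)

def situated_evaluation(experiences, benefit_of_doubt=0.1,
                        reluctance_to_report_negative=0.2):
    """Recreate an opinion 'on the spot' from currently accessible experiences."""
    if not experiences:
        return 0.0
    # More recent and more salient experiences carry more weight right now.
    weights = [e.recency * e.salience for e in experiences]
    raw = sum(w * e.valence for w, e in zip(weights, experiences)) / sum(weights)
    # A predisposition to give the benefit of the doubt nudges the evaluation up.
    adjusted = raw + benefit_of_doubt
    # Reluctance to voice a negative opinion to an interviewer dampens negative reports.
    if adjusted < 0:
        adjusted *= (1 - reluctance_to_report_negative)
    return max(-1.0, min(1.0, adjusted))

# The same respondent gives a different answer once a milder, more recent
# experience is "on their mind."
first_meeting = Experience(valence=-0.8, recency=1.0, salience=0.9)
print(situated_evaluation([first_meeting]))                       # strongly negative
later_dinner = Experience(valence=0.3, recency=1.0, salience=0.6)
print(situated_evaluation([Experience(-0.8, 0.4, 0.9), later_dinner]))  # near neutral
```

Nothing about the respondent’s “true” attitude changed between the two calls; only what was accessible and salient at the moment of the question did.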

But what is a survey researcher supposed to do with this information?

It would be easy at this point to throw the baby out with the bathwater and cast doubt on the whole survey and response process. But that’s not necessary, and that’s not the point.

The point is that each method of analysis has its own unique set of strengths and weaknesses. It is important to know the strengths and weaknesses of your methods in order to better understand what exactly you are finding and what your findings mean. It also behooves us to supplement across methodologies. A reliable survey response is a strong finding, but it can mask underlying factors that can be accessed through other methodologies. As Pew demonstrated in its Kony 2012 report, mixing methodologies can lead to a clearer, more nuanced narrative than any single method could yield.

It would be easy to dismiss Schwarz’s findings, or to dismiss survey methodology. But dismissing either would be foolish, rash, and unnecessary. Instead, let’s build on both. A wider foundation can support a better house, but the best house will need to take down some old walls and rethink its floorplan.

When Code Is Hot

Excellent article on TechCrunch by Jon Evans, “When Code is Hot”

http://techcrunch.com/2012/04/07/when-code-is-hot/

Excerpt:

“That first cited piece above begins with “Parlez-vous Python?”, a cutesy bit that’s also a pet peeve. Non-coders tend to think of different programming languages as, well, different languages. I’ve long maintained that while programming itself — “computational thinking”, as the professor put it — is indeed very like a language, “programming languages” are mere dialects; some crude and terse, some expressive and eloquent, but all broadly used to convey the same concepts in much the same way.

 
Like other languages, though, or like music, it’s best learned by the young. I am skeptical of the notion that many people who start learning to code in their 30s or even 20s will ever really grok the fundamental abstract notions of software architecture and design.

 
Stross quotes Michael Littman of Rutgers: “Computational thinking should have been covered in middle school, and it isn’t, so we in the C.S. department must offer the equivalent of a remedial course.” Similarly, the Guardian recently ran an excellent series of articles on why all children should be taught how to code. (One interesting if depressing side note there: the older the students, the more likely it is that girls will be peer-pressured out of the technical arena.)”

News

First, some news on the Crimson Hexagon front:

Repucom International Selects Crimson Hexagon Social Media Analysis Platform to Augment its Sponsorship Intelligence Services

 

Second, a great example of what I love about the hard sciences:

Earth Has Just One Moon, Right? Think Again

As social scientists, it is easy to think that what makes the harder sciences so reliable is the math and the equipment, but the truth is that the harder sciences are bolstered by constant, constructive skepticism.

JPSM Distinguished Lecture

Tomorrow the Joint Program in Survey Methodology is having a special lecture at the University of Maryland.

Do survey respondents lie?

Situated cognition and socially desirable responding

Prof. Norbert Schwarz, University of Michigan

Survey researchers commonly assume that people know what they do, know what they believe, and can report on it with candor and accuracy, as Angus Campbell put it. From this perspective, many findings suggest that survey respondents are less than candid. The best known example is the observation that answers to racial attitude questions vary as a function of the interviewer’s race. Challenging this interpretation, a large body of social psychological research shows similar context effects under conditions that do not lend themselves to this interpretation, including conditions that use implicit attitude measures, which are not subject to deliberate “faking”.

From a situated cognition perspective, such findings reflect that attitude questions assess context-sensitive evaluations that respondents form on the spot, drawing on information that is accessible at that point in time. The underlying processes operate in daily life as well as in survey interviews and reflect the situated nature of human judgment rather than a deliberate attempt to report a socially desirable answer.

I review relevant findings and discuss their implications for survey measurement.

Friday, March 30, 2012, 3:00 PM – 5:00 PM

2205 LeFrak Hall, University of Maryland, College Park MD USA

Metro stop: College Park on the Green line.

See http://www.jpsm.umd.edu/jpsm/?geninfo/directions.htm for directions and parking information.

 

Discussants: Paul Beatty, NCHS and David Cantor, Westat

 

A reception follows the lecture.

Don’t fear Big Data

I really enjoyed this RTI blog post about embracing big data:

https://blogs.rti.org/surveypost/2012/03/22/why-you-should-not-fear-but-embrace-the-age-of-big-data/

I suspect that oftentimes fear of big data is motivated by a concern that new, less tested, still evolving methods will replace the time-tested methods that we have grown to have so much faith in. I sincerely believe that the foundation we have is a strong one, and the knowledge we have developed through those processes should be embraced, especially the quality controls. But SUPPLEMENTING an analysis through a measured combination of data sources can lead to a more complete picture.

This week I spent some time analyzing Pew’s report on the Kony 2012 video. I believe that this report is an excellent example of what researchers are capable of when they look outside the artificial divisions of research group (this was a collaborative effort) and research methodology. Seven days after the release of the video, Pew was able to reconstruct a comprehensive narrative of the video’s dissemination, using traditional survey methods, sentiment analytic snapshots over time, and a careful breakdown of the media coverage of influential parties.

 

danah boyd also has an interesting analysis of the Kony phenomenon on her Apophenia blog:

http://www.zephoria.org/thoughts/

Fostering Creativity at Work

This book looks fantastic. Whenever I need to do a lot of thinking at work, I’ll go for a walk or hit the gym. Or start reading about a related topic. Or stare out the window. We don’t have Ping Pong tables, but we do have floor-to-ceiling windows overlooking a wooded patch. I can’t tell you how many cumulative hours I’ve spent watching the trees wave in the wind while working my way through a stumbling block.

 

http://www.npr.org/2012/03/21/148607182/fostering-creativity-and-imagination-in-the-workplace?ft=3&f=122101520&sc=nl&cc=sh-20120324

Zen as a Research Ethic

I have a Zen calendar on my desk for 2012. It has such gems as: “Although the world is full of suffering, it is also full of the overcoming of it” (Helen Keller)

The more I look at the calendar, the more it relates to everything I think about.

I read “To see is to forget the name of the thing one sees” (Paul Valéry), and I think of the Charles Goodwin paper on Professional Vision that I cited in a recent post. He talks about ways of seeing as kinds of coding structures, as inculturation, or as ways of foregrounding certain parts of what we see. Truly, being able to see deeper than that requires shedding that inculturation and observing more closely. As researchers, we often become so deeply incultured in our way of thinking that we lose sight of our research goals. As survey researchers, we can easily fall into the pattern of first asking “who should we survey?” and “what should we ask?” before taking the time to consider whether a survey is even an appropriate methodology for the specific topic of focus. Of course, this praxis-driven habit is not limited to survey researchers. Far from it! Every person, every field, every community of practice, every language has a way of thinking. And often, instead of seeing or observing, we quickly begin to navigate our networks of inculturation.

These two are similarly meaningful in my interpretation:

“Zen is not to confuse spirituality with thinking about God while one is peeling potatoes. Zen is just to peel the potatoes.” (Alan Watts)

“If all beings are Buddha, why all this striving?” (Dogen)

These are a reminder to boil things down to what they simply are and not to describe them as what you want them to be. In survey research, this comes up often in the process of reporting results. If I know that I intended to measure something about Project Based Learning or STEM education, it is easy for me to begin to frame my findings by my intentions. But that is not true to my findings or my methodology, and it doesn’t make for good research. I can’t say that 10% of my respondents were using project-based learning methods in the classroom if I asked about the number of group activities they conducted. I must simply say that 10% were using group activities (daily/monthly/occasionally, whatever the answer choices were).

In this way, my Zen calendar not only provides something to think about in a larger sense, but it keeps my research anchored.

Why Social Media couldn’t predict Super Tuesday

This piece is a nice reminder not only, as the authors conclude, that sentiment analysis has not fully matured, but also that sentiment analysis and social media analysis probably don’t accomplish what their practitioners think they are accomplishing:

 

http://www.retargeter.com/political-advertising/why-social-media-couldnt-predict-super-tuesday