The surprising unpredictability of language in use

This morning I received an e-mail from an international professional association that I belong to. The e-mail was in English, but it was not written by an American. As a linguist, I recognized the differences in formality and word use as signs that the writer was drawing on a set of experiences with English that differ from my own. Nothing in the e-mail was grammatically incorrect (although as a linguist I am hesitant to judge any linguistic difference as correct or incorrect, especially out of context).

Later this afternoon I saw a tweet from Twitter on the correct use of Twitter abbreviations (RT, MT, etc.). If the growth of new Twitter users has indeed leveled off, then Twitter is lucky, because the more Twitter grows, the less it will be able to influence the language use of its base.

Language is a living entity that grows, evolves and takes shape based on individual experiences and individual perceptions of language use. If you think carefully about your experiences with language learning, you will quickly see that single exposures and dictionary definitions teach you little, but repeated viewings across contexts teach you much more about language.

Language use is patterned. Every word combination has a likelihood of appearing together, and that likelihood varies based on a host of contextual factors. Language use is complex. We use words in a variety of ways across a variety of contexts. These facts make language interesting, but they also obscure language use from casual understanding. The complicated nature of language in use interferes with analysts who build assumptions about language into their research strategies without realizing that their assumptions would not stand up to careful observation or study.
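That patterning is what corpus linguists call collocation, and even a toy example makes it visible. The mini-corpus below is invented purely for illustration; real collocation work uses large, genre-balanced corpora and association measures beyond raw counts:

```python
from collections import Counter

# A tiny invented corpus; real analyses would use millions of words.
corpus = [
    "strong tea is better than powerful tea",
    "a powerful engine needs strong fuel lines",
    "strong tea and a strong argument",
]

# Count adjacent word pairs (bigrams) across all sentences.
bigrams = Counter()
for sentence in corpus:
    words = sentence.lower().split()
    bigrams.update(zip(words, words[1:]))

# Near-synonyms pattern differently: "strong tea" outnumbers
# "powerful tea" even in this tiny sample.
print(bigrams[("strong", "tea")])    # 2
print(bigrams[("powerful", "tea")])  # 1
```

Single exposures and dictionary definitions can't teach you these likelihoods; repeated exposure across contexts can.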

I would advise anyone involved in the study of language use (either as a primary or secondary aspect of their analysis) to take language use seriously. Fortunately, linguistics is fun and language is everywhere. So hop to it!


Great description of a census at Kakuma refugee camp

It’s always fun for a professional survey researcher to stumble upon a great pop-cultural reference to a survey. Yesterday I heard a great description of a census taken at Kakuma refugee camp in Kenya. The description was in the book I’m currently reading: What Is the What by Dave Eggers (great book, I highly recommend it!). The book itself is fiction, loosely based on a true story, so this account likely stems from a combination of observation and imagination. The account reminds me of some of the field reports and ethnographic findings in other intercultural survey efforts, both national (the US census) and inter- or multinational.

To set the stage: Achak is the main character and narrator of the story. He is one of the “lost boys” of Sudan, and he found his way to Kakuma after a long and storied escape from his war-ravaged hometown. At Kakuma he was taken in by another Dinka man, named Gop, who acts as a kind of father to Achak.

What Is the What by Dave Eggers

“The announcement of the census was made while Gop was waiting for the coming of his wife and daughters, and this complicated his peace of mind. To serve us, to feed us, the UNHCR and Kakuma’s many aid groups needed to know how many refugees were at the camp. Thus, in 1994 they announced they would count us. It would only take a few days, they said. To the organizers I am sure it seemed a very simple, necessary, and uncontroversial directive. But for the Sudanese elders, it was anything but.

—What do you think they have planned? Gop Chol wondered aloud.

I didn’t know what he meant by this, but soon I understood what had him, and the majority of Sudanese elders, greatly concerned. Some learned elders were reminded of the colonial era, when Africans were made to bear badges of identification on their necks.

—Could this counting be a pretext of a new colonial period? Gop mused.—It’s very possible. Probable even!

I said nothing.

At the same time, there were practical, less symbolic, reasons to oppose the census, including the fact that many elders imagined that it would decrease, not increase, our rations. If they discovered there were fewer of us than had been assumed, the food donations from the rest of the world would drop. The more pressing and widespread fear among young and old at Kakuma was that the census would be a way for the UN to kill us all. These fears were only exacerbated when the fences were erected.

The UN workers had begun to assemble barriers, six feet tall and arranged like hallways. The fences would ensure that we would walk single file on our way to be counted, and thus counted only once. Even those among us, the younger Sudanese primarily, who were not so worried until then, became gravely concerned when the fences went up. It was a malevolent-looking thing, that maze of fencing, orange and opaque. Soon even the best educated among us bought into the suspicion that this was a plan to eliminate the Dinka. Most of the Sudanese my age had learned of the Holocaust, and were convinced that this was a plan much like that used to eliminate the Jews in Germany and Poland. I was dubious of the growing paranoia, but Gop was a believer. As rational a man as he was, he had a long memory for injustices visited upon the people of Sudan.

—What isn’t possible, boy? he demanded.—See where we are? You tell me what isn’t possible at this time in Africa!

But I had no reason to distrust the UN. They had been feeding us at Kakuma for years. There was not enough food, but they were the ones providing for everyone, and thus it seemed nonsensical that they would kill us after all this time.

—Yes, he reasoned,—but see, perhaps now the food has run out. The food is gone, there’s no more money, and Khartoum has paid the UN to kill us. So the UN gets two things: they get to save food, and they are paid to get rid of us.

—But how will they get away with it?

—That’s easy, Achak. They say that we caught a disease only the Dinka can get. There are always illnesses unique to certain people, and this is what will happen. They’ll say there was a Dinka plague, and that all the Sudanese are dead. This is how they’ll justify killing every last one of us.

—That’s impossible, I said.

—Is it? he asked.—Was Rwanda impossible?

I still thought that Gop’s theory was unreliable, but I also knew that I should not forget that there were a great number of people who would be happy if the Dinka were dead. So for a few days, I did not make up my mind about the head count. Meanwhile, public sentiment was solidifying against our participation, especially when it was revealed that the fingers of all those counted, after being counted, would be dipped in ink.

—Why the ink? Gop asked. I didn’t know.

—The ink is a fail-safe measure to ensure the Sudanese will be exterminated.

I said nothing, and he elaborated. Surely if the UN did not kill us Dinka while in the lines, he theorized, they would kill us with this ink on the fingers. How could the ink be removed? It would, he thought, enter our bodies when we ate.

—This seems very much like what they did to the Jews, Gop said.

People spoke a lot about the Jews in those days, which was odd, considering that a short time before, most of the boys I knew thought the Jews were an extinct race. Before we learned about the Holocaust in school, in church we had been taught rather crudely that the Jews had aided in the killing of Jesus Christ. In those teachings, it was never intimated that the Jews were a people still inhabiting the earth. We thought of them as mythological creatures who did not exist outside the stories of the Bible. The night before the census, the entire series of fences, almost a mile long, was torn down. No one took responsibility, but many were quietly satisfied.

In the end, after countless meetings with the Kenyan leadership at the camp, the Sudanese elders were convinced that the head count was legitimate and was needed to provide better services to the refugees. The fences were rebuilt, and the census was conducted a few weeks later. But in a way, those who feared the census were correct, in that nothing very good came from it. After the count, there was less food, fewer services, even the departure of a few smaller programs. When they were done counting, the population of Kakuma had decreased by eight thousand people in one day.

How had the UNHCR miscounted our numbers before the census? The answer is called recycling.

Recycling was popular at Kakuma and is favored at most refugee camps, and any refugee anywhere in the world is familiar with the concept, even if they have a different name for it. The essence of the idea is that one can leave the camp and re-enter as a different person, thus keeping his first ration card and getting another when he enters again under a new name. This means that the recycler can eat twice as much as he did before, or, if he chooses to trade the extra rations, he can buy or otherwise obtain anything else he needs and is not being given by the UN—sugar, meat, vegetables. The trading resulting from extra ration cards provided the basis for a vast secondary economy at Kakuma, and kept thousands of refugees from anemia and related illnesses. At any given time, the administrators of Kakuma thought they were feeding eight thousand more people than they actually were. No one felt guilty about this small numerical deception.

The ration-card economy made commerce possible, and the ability of different groups to manipulate and thrive within the system led soon enough to a sort of social hierarchy at Kakuma.”

An Analytical person at the Nutcracker (or Research Methodology, Nutcracker Style)

Last night we attended a Russian Ballet performance of the Nutcracker. It was a great performance, and fun was had by all.


Early in the performance I realized that although I have developed some understanding of the ballet, I hadn’t shared any of that knowledge with my kids. At that point, I started whispering to them quietly to explain what they were seeing. I whispered quick, helpful comments, such as “those are toys dancing” and “the kids have gone to sleep now, so this is just the adults dancing.” It wasn’t long into the performance that this dynamic began to change. I realized that their insights were much funnier than mine (“Wow, that guy should go on ‘So You Think You Can Dance!’ or ‘The Voice’ or something!”) and that my comments were starting to be pretty off-base. My comments evolved into a mash-up of “The kids have gone to sleep now,” “No, I guess the kids haven’t gone to sleep yet,” “I really can’t tell if the kids are still up or not!” and “Those are the sugarplum fairies,” “Wait, no, maybe these are the sugarplum fairies?” and “I don’t know, sweetie, just watch them dance!” By the end of the show I had no idea what was going on or why the Chuck E. Cheese king was dancing around on stage (although one of the girls suspected this particular king was actually a bear). The mom next to me told me she didn’t know what was going on either. “And,” she added, “I go to the Nutcracker every year! Maybe that was what made it a Russian Nutcracker?” …And here I thought the Russian influences were the Matryoshka dolls and the Chinese dancers clothed in yellow (despite the awkward English conversation that the costumes prompted).

At the beginning of the show I was nervous to whisper with my kids, but I soon realized that there was a low hum all around me and throughout the concert hall of people whispering with their kids. This, I think, is what remix research methods should be all about: recording and interviewing many audience members to build a picture of the many perspectives in their interpretations of the show. Here is a challenge question for my readers who are hipper to qualitative research methods: what research strategy could best capture many different interpretations of the same event?

Earlier this week I spoke with a qualitative researcher about the value of an outsider perspective when approaching a qualitative research project. Here is a good example of this dynamic at play: people clapped at various parts of the performance. I recognized that people were clapping at the end of solo or duo performances (like jazz). If I were to describe these dances, I would use the claps as a natural demarcation, but I probably would not think to make any note of the clapping itself. However, the kids in my crew hadn’t encountered clapping during a show before and assumed that clapping marked “something awesome or special.” Being preteens, the kids wanted to prove that they could clap before everyone else, and then revel in the wave of clapping that they seemingly started. At one point this went awry, and the preteens were the only audience members clapping. This awkward moment may have annoyed some of the people around us, but it really made the little sister’s day! From a research perspective, these kids would be more likely to thoroughly document and describe the clapping than I would, which would make for a much more thorough report. Similarly, from a kids-going-to-a-show perspective this was the first story they told to their Dad when they got home- and one that kicked off the rest of our report with uncontrollable laughter and tears.

As the show went on and appeared not to follow any of the recognizable plot points that I had expected (I expected a progressive journey through worlds experienced from the vantage of a sleigh, but instead saw all of the worlds dancing together, with some unrecognizable kids variously appearing on a sleigh and the main characters sometimes dancing in the mix or on their own), I began to search for other ways to make sense of the spectacle. I thought of a gymnast friend of mine and our dramatically different interpretations of gymnastics events (me: “Wow! Look what she did!” her: “Eh, she scratched the landing. There will be points off for that.”). Which parts of the dancing should I be focusing on? I told my little one, “Pay attention, so we can try these moves at home.” Barring any understanding of the technical competencies involved (though sure that holding your body at some of these amazing angles, or somehow spinning on one foot, or lifting another person into the air requires tons of training, skill and knowledge) or any understanding of the plot as it was unfolding in front of me, I was left simply to marvel at it all. This is why research is an iterative process. In research, we may begin by marveling, but then we observe, note, and observe again. And who knows what amazing insights we will have developed once the process has run its course enough times for events to start making sense!

To be a researcher is not to understand, but rather to have the potential to understand- if you do the research.

Great readings that might shake you to your academic core? I’m compiling a list

In the spirit of research readings that might shake you to your academic core, I’m compiling a list. Please reply to this thread with any suggestions you have to add. They can be anything from short blog posts (microblog?) to research articles to books. What’s on your ‘must read’ list?

Here are a couple of mine to kick us off:


Charles Goodwin’s Professional Vision paper

I don’t think I’ve referred to any paper as much as this one. It’s about the way our professional training shapes the way we see the things around us. Shortly after reading this paper, I was in the gym thinking about commonalities between weight stacks and survey scales: I expect myself to be a certain relative strength, and when that doesn’t correspond to where I need to place my pin, I’m a little thrown off.

It also has a deep analysis of the Rodney King verdict.


Revitalizing Chinatown Into a Heterotopia by Jia Lou

This article is based on a geosemiotic analysis of DC’s Chinatown. It is one of the articles that helped me to see that data really can come in all forms.


After method: Mess in Social Science Research by John Law

This is the book that inspired this list. It also inspired this blog post.


On Postapocalyptic Research Methods and Failures, Honesty and Progress in Research

I’m reading a book that I like to call “post-apocalyptic research methodology.” It’s ‘After Method: Mess in Social Science Research’ by John Law. At this point the book reads like a novel. I can’t quite imagine where he’ll take his premise, but I’m searching for clues and turning pages. In the meantime, I’ve been thinking quite a bit about failure, honesty, uncertainty and humility in research.

How is the current research environment like a utopian society?

The research process is often idealized in public spaces. Whether the goal of the researcher is to publish a paper based on their research, present to an audience of colleagues or stakeholders, or market the product of their research, all researchers have a vested interest in the smoothness of the research process. We expect to approach a topic, perform a series of time-tested methods or develop innovative new methods with strong historical traditions, apply these methods as neatly as possible, and end up with a series of strong themes that describe the majority of our data. However, in Law’s words, “Parts of the world are caught in our ethnographies, our histories and our statistics. But other parts are not, and if they are then this is because they have been distorted into clarity” (p. 2). We think of methods as a neutral middle step and not a political process, and this way of thinking allows us to treat reliability and validity as surface measures rather than inherent questions. “Method, as we usually imagine it, is a system for offering more or less bankable guarantees” (p. 9).

Law points out that research methods are, in practice, very limited in the social sciences: “talk of method still tends to summon up a relatively limited repertoire of responses” (p. 3). Law also points out that every research method is inherently political. Every research method involves a way of seeing or a way of looking at the data, and that perspective maps onto the findings it yields. Different perspectives yield different findings, whether subtly or dramatically different. Law’s central assertion is that methods don’t just describe social realities but also help to create them. Recognizing the footprint of our own methods is a step toward better understanding our data and results.

In practice, the results that we focus on are largely true. They describe a large portion of the data, ascribing the rest of the data to noise or natural variation. When more of our data is described in our results, we feel more confident about our data and our analysis.

Law argues that this smoothed version of reality is far enough from the natural world that it should make us prick up our ears. Research works to create a world that is simple, falls into place neatly, and resembles nothing we know: “‘research methods’ passed down to us after a century of social science tend to work on the assumption that the world is properly to be understood as a set of fairly specific, determinate, and more or less identifiable processes” (p. 5). He suggests instead that we should recognize the parts that don’t fit, the areas of uncertainty or chaos, and the areas where our methods fail. “While standard methods are often extremely good at what they do, they are badly adapted to the study of the ephemeral, the indefinite and the irregular” (p. 4). “Regularities and standardizations are incredibly powerful tools, but they set limits” (p. 6)

Is the Utopia starting to fall apart?

The current research environment is a bit different from that of the past. More people are able to publish research at any stage, without peer review, using media like blogs. Researchers are able to discuss their research while it is in progress using social media like Twitter. There is more room to fail publicly than there ever has been before, and this allows for public acknowledgment of some of the difficulties and challenges that researchers face.

Building from ashes

Law briefly introduces his vision on p. 11: “My hope is that we can learn to live in a way that is less dependent on the automatic. To live more in and through slow method, or vulnerable method, or quiet method. Multiple method. Modest method. Uncertain method. Diverse method.”

Many modern discussions about management treat failure as an innovative tool. Some of the newer quality-control measures in aviation and medicine hinge on recognizing failure and retooling to prevent or limit the recurrence of specific types of events. The theory behind these measures is that failure is normal and natural, and we could never predict the many ways in which failure could happen. So, instead of exclusively trying to predict or prohibit failure, failures should be embraced as opportunities to learn.

Here we can ask: what can researchers learn from the failures of our methods?

The first lesson to accompany any failure is humility. Recognizing our mistakes entails recognizing areas where we fell short, where our efforts were not enough. Acknowledging that our research training cannot be universal, that applying research methods isn’t always straightforward and simple, and that we cannot be everything to everyone could be an important stage of professional development.

How could research methodology develop differently if it were to embrace the uncertain, the chaotic and the places where we fall short?

Another question: what opportunities do researchers have to be publicly humble? How can those spaces become places to learn and to innovate?

Note: This blog post is dedicated to Dr Jeffrey Keefer @ NYU, who introduced me to this very cool book and has done some great work to bring researchers together.

Methodology will only get you so far

I’ve been working on a post about humility as an organizational strategy. This is not that post, but it is also about humility.

I like to think of myself as a research methodologist, because I’m more interested in research methods than any specific area of study. The versatility of methodology as a concentration is actually one of the biggest draws for me. I love that I’ve been able to study everything from fMRI subjects and brain surgery patients to physics majors and teachers, taxi drivers and internet activists. I’ve written a paper on Persepolis as an object of intercultural communication and a paper on natural language processing of survey responses, and I’m currently studying migration patterns and communication strategies.

But a little dose of humility is always a good thing.

Yesterday I hosted the second in a series of online research, offline lunches that I’ve been coordinating. The lunches are intended as a way to get people from different sectors and fields who are conducting research on the internet together to talk about their work across the artificial boundaries of field and sector. These lunches change character as the field and attendees change.

I’ve been following the field of online research for many years now, and it has changed dramatically and continually before my eyes. Just a year ago Seth Grimes’ Sentiment Analysis Symposia were at the forefront of the field, and now I wonder if he is thinking of changing the title and focus of his events. Two years ago, tagging text corpora with grammatical units was a standard midstep in text analysis; now machine algorithms are far more common and often much more effective, demonstrating that grammar in use is far enough afield from grammar in theory to generate a good deal of error. Ten years ago, qualitative research was often more focused on the description of platforms than on the behaviors specific to them; now the specific inner workings of a platform are much more of an aside to a behavioral focus.

The Association of Internet Researchers is currently holding its conference in Denver (#ir14), generating more than 1,000 posts per day under the conference hashtag and probably moving the field far ahead of where it was earlier this week.

My interest and focus has been on the methodology of internet research. I’ve been learning everything from qualitative methods to natural language processing, social network analysis, and Bayesian methods. I’ve been advocating for a world where different kinds of methodologists work together: where qualitative research informs algorithms, linguists learn from the differences between theoretical grammar and machine-learned grammar, and computer scientists work iteratively with qualitative researchers. But all of these methods fall short, because there is an elephant in the methodological room. This elephant, ladies and gentlemen, is made of content. Is it enough to be a methodological specialist, swinging from project to project, grazing on the top layer of content knowledge without ever taking anything down to its root?

As a methodologist, I am free to travel from topic area to topic area, but I can’t reach the root of anything without digging deeper.

At yesterday’s lunch we spoke a lot about data. We spoke about how the notion of data means such different things to different researchers. We spoke about the form and type of data that different researchers expect to work with, how they groom data into the forms they are most comfortable with, how the analyses are shaped by the data type, how data science is an amazing term because just about anything could be data. And I was struck by the wide-openness of what I was trying to do. It is one thing to talk about methodology within the context of survey research or any other specific strategy, but what happens when you go wider? What happens when you bring a bunch of methodologists of all stripes together to discuss methodology? You lack the depth that content brings. You introduce a vast tundra of topical space to cover. But can you achieve anything that way? What holds together this wide realm of “research?”

We speak a lot about the lack of generalizable theories in internet research. Part of the hope for qualitative research is that it will create generalizable findings that can drive better theories and improve algorithmic efforts. But that partnership has been slow, and the theories have been sparse and lightweight. Is it possible that the internet is a space where theory alone just doesn’t cut it? Could it be that methodologists need to embrace content knowledge to a greater degree in order to make any of the headway we so desperately want to make?

Maybe the missing piece of the puzzle is actually the picture painted on the pieces?


More Takeaways from the DC-AAPOR/WSS Summer Conference

Last week I shared my notes from the first two sessions of the DC-AAPOR/WSS Summer conference preview/review. Here are the rest of the notes, covering the rest of the conference:

Session 3: Accessing and Using Records

  • Side note: Some of us may benefit from a support group format re: matching administrative records
  • AIR experiment with incentives & consent to record linkage: a $2 incentive was sometimes worse than $0. A $20 incentive yielded the highest response rate and consent rate earliest in the process, and was cheaper than phone follow-up
    • If relevant data is available, the $20 incentive can be tailored to likely nonrespondents
  • Evaluating race & Hispanic origin questions: this was a big theme over the course of this conference. The social constructedness of racial/ethnic identity doesn’t map well to survey questions. This Census study found changes in survey answers based on context, location, social position, education, ambiguousness of phenotype, self-perception, question format, census tract, and proxy reports. Also a high number of missing answers.

Session 4: Adaptive Design in Government Surveys

  • A potpourri of quotes from this session that caught my eye:
    • Re: Frauke Kreuter “the mother of all paradata”
      Peter Miller: “Response rates is not the goal”
      Robert Groves: “The way we do things is unsustainable”
    • Response rates are declining, costs are rising
    • Create a dashboard that works for your study. Include the relevant cards you need in order to have a decision-making tool that is tailored/dynamic and data-based
      • Include paradata, response data
      • Include info re: mode switching, interventions
      • IMPORTANT: prioritize cases, prioritize modes, shift priorities with experience
      • Subsample open cases (those who have not yet responded)
      • STOP data collection at a sensible point, before your response bias starts to grow exponentially and before you waste money on expensive interventions that can actually work to make your data less representative
    • Interviewer paradata
      • Choose facts over inference
      • Presence or absence of key features (e.g. ease of access, condition of property)
        • (for a phone survey, these would probably include presence or absence of answer or answering mechanism, etc.)
        • For a household survey, household factors more helpful than neighborhood factors
    • Three kinds of adaptive design
      • Fixed design (ok, this is NOT adaptive)- treat all respondents the same
      • Preplanned adaptive- tailor mailing efforts in advance based on response propensity models
      • Real-time adaptive- adjust mailing efforts in response to real-time response data and evolving response propensities
    • Important aspect of adaptive design: document decisions and evaluate success, re-evaluate future strategy
    • What groups are under-responding and over-responding?
      • Develop propensity models
      • Design modes accordingly
      • Save $ by focusing resources
    • NSCG used adaptive design
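The preplanned-adaptive idea above can be sketched in a few lines: estimate each case's response propensity from prior data, then route likely nonrespondents to costlier, higher-touch modes. This is a hypothetical sketch only; the case IDs, propensity values, and cutoffs are invented, and real designs rest on formal propensity models fed by paradata.

```python
# Hypothetical response propensities, e.g. estimated from prior waves.
propensity = {
    "case_001": 0.82,
    "case_002": 0.35,
    "case_003": 0.15,
    "case_004": 0.60,
}

def assign_mode(p, low=0.3, high=0.7):
    """Route likely nonrespondents to costlier, higher-touch modes.

    Cutoffs are illustrative; a fielded design would calibrate them
    against budget and expected bias reduction.
    """
    if p < low:
        return "in-person follow-up"  # most expensive, for least likely
    if p < high:
        return "phone follow-up"
    return "mail only"                # cheapest, for likely responders

# Preplanned adaptive: assign each case a mode before fielding.
plan = {case: assign_mode(p) for case, p in propensity.items()}
print(plan["case_003"])  # in-person follow-up
print(plan["case_001"])  # mail only
```

A real-time adaptive design would re-run the propensity estimates as response data arrives and shift priorities accordingly, per the bullets above.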

Session 5: Public Opinion, Policy & Communication

  • Marital status checklist: categories not mutually exclusive- checkboxes
    • Cain conducted a meta-analysis of federal survey practices
    • Same sex marriage
      • Because of DOMA, federal agencies were not able to use same sex data. Now that it’s been struck down, the question is more important, has funding and policy issues resting on it
      • Exploring measurement:
        • Review of research
        • Focus groups
        • Cognitive interviews
        • Quantitative testing ← current phase
  • Estimates of same sex marriage dramatically inflated by straight people who select gender incorrectly (size/scope/scale)
  • ACS has revised marriage question
  • Instead of mother, father, parent 1, parent 2, …
    • Yields more same sex couples
    • Less nonresponse overall
    • Allow step, adopted, bio, foster, …
    • Plain language
      • Plain Language Act of 2010
      • See handout on plain language for more info
      • Pretty much just good writing practice in general
      • Data visualization makeovers using Tufte guidance
        • Maybe not ideal makeovers, but the data makeover idea is a fun one. I’d like to see a data makeover event of some kind…

Session 7: Questionnaire Design and Evaluation

  • Getting your money’s worth! Targeting Resources to Make Cognitive Interviews Most Effective
    • When choosing a sample for cognitive interviews, focus on the people who tend to have the problems you’re investigating. Otherwise, the likelihood of choosing someone with the right problems is quite low
    • AIR experiment: cognitive interviews by phone
      • Need to use more skilled interviewers by phone, because more probing is necessary
      • Awkward silences more awkward without clues to what respondent is doing
      • Hard to evaluate graphics and layout by phone
      • When sharing a screen, interviewer should control mouse (they learned this the hard way)
      • On the plus side: more convenient for interviewee and interviewer, interviewers have access to more interviewees, data quality similar, or good enough
      • Try Skype or something?
      • Translation issues (much of the cognitive testing centered around translation issues- I’m not going into detail with them here, because these don’t transfer well from one survey to the next)
        • Education/international/translation: they tried to assign equivalent education groups and reflect those equivalences in the question, but when respondents didn’t agree with the equivalences suggested to them, they didn’t follow the questions as written

Poster session

  • One poster was laid out like candy land. Very cool, but people stopped by more to make jokes than substantive comments
  • One poster had signals from interviews that the respondent would not cooperate, or 101 signs that your interview will not go smoothly. I could see posting that in an interviewer break room…

Session 8: Identifying and Repairing Measurement and Coverage Errors

  • Health care reform survey: people believe what they believe in spite of the terms and definitions you supply
  • Paraphrased Groves (1989:449) “Although survey language can be standardized, there is no guarantee that interpretation will be the same”
  • Politeness can be a big barrier in interviewer/respondent communication
  • Reduce interviewer rewording
  • Be sure to bring interviewers on board with project goals (this was heavily emphasized on AAPORnet while we were at this conference- the importance of interviewer training, valuing the work of the interviewers, making sure the interviewers feel valued, collecting interviewer feedback and restrategizing during the fielding period and debriefing the interviewers after the fielding period is done)
  • Response format effects when measuring employment: slides requested