Methodology will only get you so far

I’ve been working on a post about humility as an organizational strategy. This is not that post, but it is also about humility.

I like to think of myself as a research methodologist, because I’m more interested in research methods than any specific area of study. The versatility of methodology as a concentration is actually one of the biggest draws for me. I love that I’ve been able to study everything from fMRI subjects and brain surgery patients to physics majors and teachers, taxi drivers and internet activists. I’ve written a paper on Persepolis as an object of intercultural communication and a paper on natural language processing of survey responses, and I’m currently studying migration patterns and communication strategies.

But a little dose of humility is always a good thing.

Yesterday I hosted the second in a series of Online Research, Offline Lunches that I’ve been coordinating. The lunches are intended as a way to bring together people from different sectors and fields who are conducting research on the internet, so they can talk about their work across the artificial boundaries that usually separate them. These lunches change character as the field and attendees change.

I’ve been following the field of online research for many years now, and it has changed dramatically and continually before my eyes. Just a year ago Seth Grimes’ Sentiment Analysis Symposia were at the forefront of the field, and now I wonder if he is thinking of changing the title and focus of his events. Two years ago tagging text corpora with grammatical units was a standard mid-step in text analysis, and now machine-learned algorithms are far more common and often much more effective, demonstrating that grammar in use is far enough afield from grammar in theory to generate a good deal of error. Ten years ago qualitative research was often more focused on describing platforms than on the behaviors specific to them, and now the inner workings of a platform are much more of an aside to a behavioral focus.
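
To make that shift concrete, here is a minimal sketch of the kind of grammatical tagging that used to be a standard mid-step, using NLTK’s off-the-shelf part-of-speech tagger. The example sentence and the choice of library are my own illustration, not anything tied to the specific tools or events mentioned above.

```python
# A minimal sketch of part-of-speech tagging as a mid-step in text analysis.
# Assumes NLTK is installed; the downloads below fetch its tokenizer and tagger models.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = "The survey respondents wrote long, unexpectedly candid comments."
tokens = nltk.word_tokenize(text)   # split the text into word-level tokens
tagged = nltk.pos_tag(tokens)       # label each token with a grammatical category

print(tagged)
# e.g. [('The', 'DT'), ('survey', 'NN'), ('respondents', 'NNS'), ...]
# Downstream steps consume these tags, so any gap between grammar in theory
# and grammar in use shows up here as error that propagates through the analysis.
```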

The Association of Internet Researchers is currently having their conference in Denver (#ir14), generating more than 1000 posts per day under the conference hashtag and probably moving the field far ahead of where it was earlier this week.

My interest and focus have been on the methodology of internet research. I’ve been learning everything from qualitative methods to natural language processing and social network analysis to Bayesian methods. I’ve been advocating for a world where different kinds of methodologists work together, where qualitative research informs algorithms and linguists learn from the differences between theoretical grammar and machine-learned grammar, a world where computer scientists work iteratively with qualitative researchers. But all of these methods fall short, because there is an elephant in the methodological room. This elephant, ladies and gentlemen, is made of content. Is it enough to be a methodological specialist, swinging from project to project, grazing on the top layer of content knowledge without ever taking anything down to its root?

As a methodologist, I am free to travel from topic area to topic area, but I can’t reach the root of anything without digging deeper.

At yesterday’s lunch we spoke a lot about data. We spoke about how the notion of data means such different things to different researchers. We spoke about the form and type of data that different researchers expect to work with, how they groom data into the forms they are most comfortable with, how the analyses are shaped by the data type, and how data science is an amazing term because just about anything could be data. And I was struck by the wide-openness of what I was trying to do. It is one thing to talk about methodology within the context of survey research or any other specific strategy, but what happens when you go wider? What happens when you bring a bunch of methodologists of all stripes together to discuss methodology? You lack the depth that content brings. You introduce a vast tundra of topical space to cover. But can you achieve anything that way? What holds together this wide realm of “research”?

We speak a lot about the lack of generalizable theories in internet research. Part of the hope for qualitative research is that it will create generalizable findings that can drive better theories and improve algorithmic efforts. But that partnership has been slow, and the theories have been sparse and lightweight. Is it possible that the internet is a space where theory alone just doesn’t cut it? Could it be that methodologists need to embrace content knowledge to a greater degree in order to make any of the headway we so desperately want to make?

Maybe the missing piece of the puzzle is actually the picture painted on the pieces?


The data Rorschach test, or what does your research say about you?

Sure, there is an abundance of personality tests out there: inkblot tests, standardized cognitive tests, magazine quizzes, and so on. But we researchers take a kind of Rorschach test of our own every day. There is a series of questions we ask as part of the research process, like:

What data do we want to collect or use? (What information is valuable to us? What do we call data?)

What format are we most comfortable with it in? (How clean does it have to be? How much error are we comfortable with? Does it have to resemble a spreadsheet? How will we reflect sources and transformations? What can we equate?)

What kind of analyses do we want to conduct? (This is usually a great time for our preexisting assumptions about our data to rear their heads. How often do we start by wondering if we can confirm our biases with data?!)

What results do we choose to report? To whom? How will we frame them?

If nothing else, our choices regarding our data reflect many of our values as well as our professional and academic experiences. If you’ve ever sat in on a research meeting, you know that “you want to do WHAT with which data?!” feeling that comes when someone suggests something that you had never considered.

Our choices also speak to the research methods that we are most comfortable with. Last night I attended a meetup event about Natural Language Processing, and it quickly became clear that the mathematician felt most comfortable when the data was transformed into numbers, the linguist felt most comfortable when the data was transformed into words and lexical units, and the programmer was most comfortable focusing on the program used to analyze the data. These three researchers confronted similar tasks, but their three different methods will yield very different results.
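
To make the contrast concrete, here is a toy illustration (my own, not anything shown at the meetup) of how the same two survey comments look once each researcher has transformed them into the form they are most comfortable with.

```python
# The same data, three ways: numbers, lexical units, and the program itself.
from sklearn.feature_extraction.text import CountVectorizer

comments = [
    "The training was helpful and well organized",
    "Helpful staff, but the training ran long",
]

# The mathematician's view: a document-term matrix of word counts.
vectorizer = CountVectorizer()
matrix = vectorizer.fit_transform(comments)
print(vectorizer.get_feature_names_out())   # vocabulary (the columns)
print(matrix.toarray())                     # counts per comment (the rows)

# The linguist's view: the comments kept as words and lexical units.
tokens = [comment.lower().split() for comment in comments]
print(tokens)

# The programmer's view is the script itself: a pipeline of transformations,
# each of which quietly embeds assumptions about what counts as data.
```

Each transformation discards something the others preserve, which is exactly why the three analyses end up looking so different.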

As humans, we have a tendency to make assumptions about the people around us, either by assuming that they are very different or very much the same. Those of you who have seen or experienced a marriage or serious long-term partnership up close are probably familiar with the surprised feeling we get when we realize that one partner thinks differently about something that we had always assumed they would not differ on. I remember, for example, that small feeling that my world was upside down just a little bit when I opened a drawer in the kitchen and saw spoons and forks together in the utensil organizer. It had simply never occurred to me that anyone would mix the two, especially not my own husband!

My main point here is not about my husband’s organizational philosophy. It’s about the different perspectives inherently tied up in the research process. It can be hard to step outside our own perspective enough to see what pieces of ourselves we’ve imposed on our research. But that awareness is an important element in the quality control process. Once we can see what we’ve done, we can think much more carefully about the strengths and weaknesses of our process. If you believe there is only one way, it may be time to take a step back and gain a wider perspective.

Planning a second “Online Research, Offline Lunch”

In August we hosted the first Online Research, Offline Lunch for researchers involved in online research in any field, discipline or sector in the DC area. Although Washington DC is a great meeting place for specific areas of online research, there are few opportunities for interdisciplinary gatherings of professionals and academics. These lunches provide an informal opportunity for a diverse set of online researchers to listen and talk respectfully about our interests and our work and to see our endeavors from new, valuable perspectives. We kept the first gathering small. But the enthusiasm for this small event was quite large, and it was a great success! We had interesting conversations, learned a lot, made some valuable connections, and promised to meet again.

Many expressed interest in the lunches but weren’t able to attend. If you have any specific scheduling requests, please let me know now. Although I certainly can’t accommodate everyone’s preferences, I will do my best to take them into account.

Here is a form that can be used to add new people to the list. If you’re already on the list you do not need to sign up again. Please feel free to share the form with anyone else who may be interested:


More Takeaways from the DC-AAPOR/WSS Summer Conference

Last week I shared my notes from the first two sessions of the DC-AAPOR/WSS Summer conference preview/review. Here are the rest of my notes, covering the remaining sessions:

Session 3: Accessing and Using Records

  • Side note: Some of us may benefit from a support group format re: matching administrative records
  • AIR experiment with incentives & consent to record linkage: $2 incentive sometimes worse than $0. $20 incentive yielded the highest response rate and consent rate earliest in the process, and was cheaper than phone follow-up
    • If relevant data is available, $20 incentive can be tailored to likely nonrespondents
  • Evaluating race & Hispanic origin questions- this was a big theme over the course of this conference. The social constructedness of racial/ethnic identity doesn’t map well to survey questions. This Census study found changes in survey answers based on context, location, social position, education, ambiguity of phenotype, self-perception, question format, census tract, and proxy reports. Also a high number of missing answers.

Session 4: Adaptive Design in Government Surveys

  • A potpourri of quotes from this session that caught my eye:
    • Re: Frauke Kreuter, “the mother of all paradata”
    • Peter Miller: “Response rates is not the goal”
    • Robert Groves: “The way we do things is unsustainable”
    • Response rates are declining, costs are rising
    • Create a dashboard that works for your study. Include the relevant indicators you need in order to have a decision-making tool that is tailored/dynamic and data-based
      • Include paradata, response data
      • Include info re: mode switching, interventions
      • IMPORTANT: prioritize cases, prioritize modes, shift priorities with experience
      • Subsample open cases (not yet responded)
      • STOP data collection at a sensible point, before your response bias starts to grow exponentially and before you waste money on expensive interventions that can actually make your data less representative
    • Interviewer paradata
      • Choose facts over inference
      • Presence or absence of key features (e.g. ease of access, condition of property)
        • (for a phone survey, these would probably include presence or absence of answer or answering mechanism, etc.)
        • For a household survey, household factors more helpful than neighborhood factors
    • Three kinds of adaptive design
      • Fixed design (ok, this is NOT adaptive)- treat all respondents the same
      • Preplanned adaptive- tailor mailing efforts in advance based on response propensity models (see the sketch after this list)
      • Real-time adaptive- adjust mailing efforts in response to real-time response data and evolving response propensities
    • Important aspect of adaptive design: document decisions and evaluate success, re-evaluate future strategy
    • What groups are under-responding and over-responding?
      • Develop propensity models
      • Design modes accordingly
      • Save $ by focusing resources
    • NSCG (National Survey of College Graduates) used adaptive design
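
Since “preplanned adaptive” is easy to lose in note form, here is a minimal sketch of what tailoring contact effort to a response propensity model could look like. The variable names, cutoffs, and contact strategies are my own illustrative assumptions, not anything presented in the session.

```python
# Illustrative sketch of preplanned adaptive design:
# fit a response propensity model on frame data from a prior round,
# then assign each newly sampled case a contact strategy before fielding begins.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Prior-round cases: frame variables plus whether each case responded (all made up).
prior = pd.DataFrame({
    "age_group": [1, 2, 3, 1, 3, 2, 3, 1],
    "urban":     [1, 1, 0, 0, 1, 0, 1, 0],
    "prior_web": [1, 0, 0, 1, 1, 0, 1, 0],
    "responded": [1, 0, 1, 1, 0, 0, 1, 0],
})

features = ["age_group", "urban", "prior_web"]
model = LogisticRegression().fit(prior[features], prior["responded"])

# Current sample: predict a response propensity for each case before mailing anything.
sample = pd.DataFrame({
    "age_group": [2, 3, 1],
    "urban":     [0, 1, 1],
    "prior_web": [0, 1, 1],
})
sample["propensity"] = model.predict_proba(sample[features])[:, 1]

# Preplanned tailoring: likely nonrespondents get the costlier protocol up front.
sample["strategy"] = sample["propensity"].apply(
    lambda p: "web invite only" if p > 0.6 else "web invite + incentive + phone follow-up"
)
print(sample)
```

A real-time adaptive version would re-score these propensities during fielding and shift priorities as response data comes in, rather than fixing the strategy in advance.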

Session 5: Public Opinion, Policy & Communication

  • Marital status checklist: categories not mutually exclusive- checkboxes
    • Cain conducted a meta-analysis of federal survey practices
    • Same sex marriage
      • Because of DOMA, federal agencies were not able to use same-sex data. Now that it’s been struck down, the question is more important and has funding and policy issues resting on it
      • Exploring measurement:
        • Review of research
        • Focus groups
        • Cognitive interviews
        • Quantitative testing ← current phase
  • Estimates of same sex marriage dramatically inflated by straight people who select gender incorrectly (size/scope/scale)
  • ACS has revised marriage question
  • Using parent 1, parent 2, … instead of mother, father
    • Yields more same sex couples
    • Less nonresponse overall
    • Allow step, adopted, bio, foster, …
    • Plain language
      • Plain Writing Act of 2010
      • See handout on plain language for more info
      • Pretty much just good writing practice in general
      • Data visualization makeovers using Tufte guidance
        • Maybe not ideal makeovers, but the data makeover idea is a fun one. I’d like to see a data makeover event of some kind…

Session 7: Questionnaire Design and Evaluation

  • Getting your money’s worth! Targeting Resources to Make Cognitive Interviews Most Effective
    • When choosing a sample for cognitive interviews, focus on the people who tend to have the problems you’re investigating. Otherwise, the likelihood of choosing someone with the right problems is quite low
    • AIR experiment: cognitive interviews by phone
      • Need to use more skilled interviewers by phone, because more probing is necessary
      • Awkward silences more awkward without clues to what respondent is doing
      • Hard to evaluate graphics and layout by phone
      • When sharing a screen, interviewer should control mouse (they learned this the hard way)
      • On the plus side: more convenient for interviewee and interviewer, interviewers have access to more interviewees, data quality similar, or good enough
      • Try Skype or something?
      • Translation issues (much of the cognitive testing centered around translation issues- I’m not going into detail with them here, because these don’t transfer well from one survey to the next)
        • Education/international/translation: They tried to assign equivalent education groups and reflect those equivalences in the question, but when respondents didn’t agree with the equivalences suggested to them they didn’t follow the questions as written

Poster session

  • One poster was laid out like candy land. Very cool, but people stopped by more to make jokes than substantive comments
  • One poster had signals from interviews that the respondent would not cooperate, or 101 signs that your interview will not go smoothly. I could see posting that in an interviewer break room…

Session 8: Identifying and Repairing Measurement and Coverage Errors

  • Health care reform survey: people believe what they believe in spite of the terms and definitions you supply
  • Paraphrased Groves (1989:449) “Although survey language can be standardized, there is no guarantee that interpretation will be the same”
  • Politeness can be a big barrier in interviewer/respondent communication
  • Reduce interviewer rewording
  • Be sure to bring interviewers on board with project goals (this was heavily emphasized on AAPORnet while we were at this conference- the importance of interviewer training, valuing the work of the interviewers, making sure the interviewers feel valued, collecting interviewer feedback and restrategizing during the fielding period and debriefing the interviewers after the fielding period is done)
  • Response format effects when measuring employment: slides requested

Takeaways from the DC AAPOR & WSS Summer Conference Preview/Review 2013

“The way we do things is unsustainable” – Robert Groves, Census

This week I attended a great conference sponsored by DC-AAPOR. I’m typing up my notes from the sessions to share, but there are a lot of notes. This covers the morning sessions on day 1.

We are coming to a new point of understanding with some of the more recent developments in survey research. For the first time in recent memory, the specter of limited budgets loomed large. Researchers weren’t just asking “How can I do my work better?” but “How can I target my improvements so that my work can be better, faster, and less expensive?”

Session 1: Understanding and Dealing with Nonresponse

  • Researchers have been exploring the potential of nonresponse propensity modeling for a while. In the past, nonresponse propensities were used as a way to cut down on bias and draw samples that should yield a more representative response group.
  • In this session, nonresponse propensity modeling was seen as a way of helping to determine a cutoff point in survey data collection.
  • Any data on mode propensity for individual respondents (in longitudinal surveys) or groups of respondents can be used to target people in their likely best mode from the beginning, instead of applying the same mailing strategy to all respondents. This can drastically reduce field time and costs.
  • Prepaid incentives have become accepted best practice in the world of incentives
  • Our usual methods of contact are continually less successful. It’s good to think outside the box. (Or inside the box: one group used certified UPS mail to deliver prepaid incentives)
  • Dramatic increases in incentives dramatically increased response rates and lowered field times significantly
  • Larger lag times in longitudinal surveys led to a larger dropoff in response rate
  • Remember Leverage Salience Theory- people with a vested interest in a survey are more likely to respond (something to keep in mind when writing invitations, reminders, and other respondent materials, etc.)
  • Nonresponse propensity is important to keep in mind in the imputation phase as well as the mailing or fielding phase of a survey
  • Re-engaging respondents in longitudinal surveys is possible. Recontacting can be difficult, esp. finding updated contact information. It would be helpful to share strategies re: maiden names, Spanish names, etc.


Session 2: Established Modes & New Technologies

  • ACASI > CAPI in terms of eliciting sensitive info
  • Desktop & mobile respondents follow similar profiles, vary significantly from distribution of traditional respondent profiles
  • Mobile respondents log frequent re-entries into surveys, so surveys must allow for saved progress and re-entry
  • Mobile surveys that weren’t mobile optimized had the same completion rates as mobile surveys that were optimized. (There was some speculation that this will change over time, as web optimization becomes more standard)
  • iPhones do some mobile optimization of their own (this didn’t yield higher completion rates, though, just a prettier screenshot)
  • The authors of the Gallup paper (McGeeney & Marlar) developed a best practices matrix- I requested a copy
  • Smartphone users are more likely to take a break while completing a survey (according to paradata based on OS)
  • This session boasted a particularly fun presentation by Paul Schroeder (Abt SRBI) about distracted driving (a mobile survey! Hah!) in which he “saw the null hypothesis across a golden field, and they ran toward each other and embraced.” He used substantive responses, demographics, etc. to calculate the ideal number of call attempts for different survey subgroups. (This takes me back to a nonrespondent from a recent survey we fielded with a particularly large number of contact attempts, who replied to an e-mail invitation to ask if we had any self-respect left at that point.)

Upcoming DC Event: Online Research Offline Lunch

ETA: Registration for this event is now CLOSED. If you have already signed up, you will receive a confirmation e-mail shortly. Any sign-ups after this date will be stored as a contact list for any future events. Thank you for your interest! We’re excited to gather with such a diverse and interesting group.

—–

Are you in or near the DC area? Come join us!

Although DC is a great meeting place for specific areas of online research, there are few opportunities for interdisciplinary gatherings of professionals and academics. This lunch will provide an informal opportunity for a diverse set of online researchers to listen and talk respectfully about our interests and our work and to see our endeavors from new, valuable perspectives.

Date & Time: August 6, 2013, 12:30 p.m.

Location: Near Gallery Place or Metro Center. Once we have a rough headcount, we’ll choose an appropriate location. (Feel free to suggest a place!)

Please RSVP using this form:

Spam, Personal histories and Language competencies

Over the recent holiday, I spent some time sorting through many boxes of family memorabilia. Some of you have probably done this with your families. It is fascinating, sentimental and mind-boggling. Highlights include both the things that strike a chord and things that can be thrown away. It’s a balance of efficiency and sap.


I’m always amazed by the way family memorabilia tells both private, personal histories and larger public ones. The boxes I dealt with last week were my mom’s, and her passion was politics. Even the Christmas cards she saved give pieces of political histories. Old thank you cards provide unknown nuggets of political strategy. She had even saved stirrers and plastic cups from an inauguration!


Campaign button found in the family files


My mom continued to work in politics throughout her life, but the work that she did more recently is understandably fresher and more tangible for me. I remember looking through printed Christmas cards from politicians and wondering why she held on to them. In her later years I worried about her tendency to hold on to mail merged political letters. I wondered if her tendency to personalize impersonal documents made her vulnerable to fraud. To me, her belief in these documents made no sense.


Flash forward one year to me sorting through boxes of handwritten letters from politicians that mirror the spam she held on to. For many years she received handwritten letters from elected politicians in Washington. At some point, the handwritten letters evolved into typed letters that were hand-corrected and included handwritten sections. These evolved into typed letters on which the only handwriting was the signature. Eventually, even the signatures became printed. But the intention and function of these letters remained the same, even as their typography evolved. She believed in these letters because she had been receiving them for many decades. She believed they were personal because she had seen more of them that were personal than not. The phrases that I believe to be formulaic and spammy were once handwritten, intentional, personal and probably even heartfelt.


There are a few directions I could go from here:


– I better understand why older people complain about the impersonalization of modern society and wax poetic about the old letter writing tradition. I could include a few anecdotes about older family members.


– I’m amazed that people would take the time to write long letters using handwriting that may never have been deciphered


– I could wax poetic about some of the cool things I found in the storage facility


But I won’t. Not in this blog. Instead, I’ll talk about competencies.


Spam is a manifestation of language competencies, although we often dismiss it as a total lack of language competence. In my linguistics studies, we were quickly taught the mantra “difference, not deficiency.” In fact it takes quite a bit of skill to develop spam letters. In survey research, the survey invitation letters that people so often dismiss have been heavily researched and optimized to yield a maximum response rate. In his book The Sociolinguistics of Globalization, Jan Blommaert details the many competencies necessary to create the Nigerian bank scam letters that were so heavily circulated a few years ago. And now I’ve learned that the political letters that I’m so quick to dismiss as thoughtless mail merges are actually part of a deep tradition of political action. Will that be enough for me to hold on to them? No. But I am saving the handwritten stuff. Boxes and boxes of it!


One day last week, as I drove to the storage facility I heard an interview with Michael Pollan about Food Literacy. Pollan’s point was that the food deserts in some urban areas are not just a function of access (food deserts are areas where fresh food is difficult to obtain and grocery stores are few and far between, if they’re available at all). Pollan believes that even if there were grocery stores available, the people in these neighborhoods would lack the basic cooking skills to prepare the food. He cited a few basic cooking skills which are not basic to me (partly because I’m a vegetarian, and partly because of the cooking traditions I learned from) as part of his argument.


As a linguist, it is very interesting to hear the baggage that people attach to language metaphorically carried over to food (“food illiteracy”). I wonder what value the “difference, not deficiency” mantra holds here. I’m not ready to believe that people living in food deserts are indeed kitchen illiterate. But I wouldn’t hesitate to agree that their food cultures probably differ significantly from Pollan’s. The basic staples and cooking methods probably differ significantly. Pollan could probably make a lot more headway with his cause if, instead of assuming that the people he is trying to help lack any basic cooking skills, he advocated for a culture change that includes access, attainability, and the potential to learn different practical cooking skills. It’s a subtle shift, but an important one.


As a proud uncook, I’m a huge fan of any kind of food preparation that is two steps or less, cheap, easy and fresh. Fast food for me involves putting a sweet potato in the microwave and pressing “potato,” grabbing for an apple or carrots and peanut butter, or tossing chickpeas into a dressing. Slow food involves the basic sautéing, roasting, etc. that Pollan advocates. I imagine that the skills he advocates are more practical and enjoyable for him than they are for people like me, whose mealtimes are usually limited and chaotic. What he calls basic is impractical for many of us. And the differences in time and money involved in uncooking and “basics” add up quickly.


So I’ve taken this post in quite a few directions, but it all comes together under one important point. Different language skills are not a lack of language skills altogether. Similarly, different survival skills are not a total lack of survival skills. We all carry unique skillsets that reflect our personal histories with those skills as well as the larger public histories that our personal histories help to compose. We, as people, are part of a larger public. The political spam I see doesn’t meet my expectations of valuable, personal communication, but it is in fact part of a rich political history. The people who Michael Pollan encounters have ways of feeding themselves that differ from Pollan’s expectations, but they are not without important survival skills. Cultural differences are not an indication of an underlying lack of culture.


Fitness for Purpose, Representativeness and the perils of online reviews

Have you ever planned a trip online? In January, when I traveled to Amsterdam, I did all of the legwork online and ended up in a surprising place.

Amsterdam City Center is extremely easy to navigate. From the train station (a quick ride from the airport and a quick ride around The Netherlands), the canals extend outward like spokes. Each canal is flanked by streets. Then the city has a number of concentric rings emanating from the train station. Not only is the underlying map easy to navigate, there is a traveler station at the center and maps available periodically. English speaking tourists will see that not only do many people speak English, but Dutch has enough overlap with English to be comprehensible after even a short exposure.

But the city center experience was not as smooth for me. I studied map after map in the city center without finding my hotel. I asked for directions, and no one had heard of the hotel or the street it was on. The traveler center seemed flummoxed as well. Eventually I found someone who could help and found myself on a long commuter tram ride well outside the city center and tourist areas. The hotel had received great reviews and recommendations from many travelers. But clearly, the travelers who boasted about it were not quite the typical travelers, who likely would have ended up in one of the many hotels I saw from the tram window.

Have you ever discovered a restaurant online? I recently went to a nice, local restaurant that I’d been reading about for years. I ordered the truffle fries (fries with truffle salt and some kind of fondue sauce), because people had really raved about them, only to discover once they arrived that they were fundamentally french fries (totally not my bag- I hate fried food).

These review sites are not representative of anything. And yet we/I repeatedly use them as if they were reliable sources of information. One could easily argue that they may not be representative, but they are good enough for their intended use (fitness for purpose <– big, controversial notion from a recent AAPOR task force report on Nonprobability Sampling). I would argue that they are clearly not excellent for their intended use. But does that invalidate them altogether? Often they provide the only window that we have into whatever it is that we intend them for.

Truffle fries aside, the restaurant was great. And location aside, the hotel was definitely an interesting experience.

Toilet capsule in hotel room (with frosted glass rotating pane for some degree of privacy)

Representativeness, qual & quant, and Big Data. Lost in translation?

My biggest challenge in coming from a quantitative background to a qualitative research program was representativeness. I came to class firmly rooted in the principle of Representativeness, and my classmates seemed not to have any idea why it mattered so much to me. Time after time I would get caught up in my data selection. I would pose the wider challenge of representativeness to a colleague, and they would ask “representative of what? why?”


In the survey research world, the researcher begins with a population of interest and finds a way to collect a representative sample of that population for study. In the qualitative world that accompanies survey research, units of analysis are generally people, and people are chosen for their representativeness. Representativeness is often constructed from demographic characteristics. If you’ve read this blog before, you know of my issues with demographics. Too often, demographic variables are used as knee-jerk variables instead of better-considered variables that are more relevant to the analysis at hand. (Maybe the census collects gender and not program availability, for example, but just because a variable is available and somewhat correlated doesn’t mean that it is in fact a relevant variable, especially when the focus of study is a population for whom gender is such an integral societal difference.)


And yet I spent a whole semester studying 5 minutes of conversation between 4 people. What was that representative of? Nothing but itself. It couldn’t have been exchanged for any other 5 minutes of conversation. It was simply a conversation that this group had and forgot. But over the course of the semester, this piece of conversation taught me countless aspects of conversation research. Every time I delved back into the data, it became richer. It was my first step into the world of microanalysis, where I discovered that just about anything can be a rich dataset if you use it carefully. A snapshot of people at a lecture? Well, how are their bodies oriented? A snapshot of video? A treasure trove of gestures and facial expressions. A piece of graffiti? Semiotic analysis! It goes on. The world of microanalysis is built on the practice of layered noticing. It goes deeper than wide.


But what is it representative of? How could a conversation be representative? Would I need to collect more conversations, but restrict the participants? Collect conversations with more participants, but in similar contexts? How much or how many would be enough?


In the world of microanalysis, people and objects constantly create and recreate themselves. You consistently create and recreate yourself, but your recreations generally fall into a similar range that makes you different from your neighbors. There are big themes in small moments. But what are the small moments representative of? Themselves. Simply, plainly, nothing more and nothing else. Does that mean that they don’t matter? I would argue that there is no better way to understand the world around us in deep detail than through microanalysis. I would also argue that macroanalysis is an important part of discovering the wider patterns in the world around us.


Recently a NY Times blog post by Quentin Hardy has garnered quite a bit of attention.

Why Big Data is Not Truth: http://bits.blogs.nytimes.com/2013/06/01/why-big-data-is-not-truth/

This post has really struck a chord with me, because I have had a hard time understanding Hardy’s complaint. Is big data truth? Is any data truth? All data is what it is: a collection of some sort, collected under a specific set of circumstances. Even data that we hope to be more representative has sampling and contextual limitations. Responsible analysts should always be upfront about what their data represents. Is big data less truthful than other kinds of data? It may be less representative than, say, a systematically collected political poll. But it is what it is: different data, collected under different circumstances in a different way. It shouldn’t be equated with other data that was collected differently. One true weakness of many large-scale analyses is blindness to the nature of the data, but that is a byproduct of the training algorithms that are used for much of the analysis. The algorithms need large training datasets, from anywhere, and these sets are often developed through massive web crawlers. Here, context gets dicey. How does a researcher represent the data properly when they have no idea what it is? Hopefully researchers in this context will be wholly aware that, although their data has certain uses, it also has certain [huge] limitations.


I suspect that Hardy’s complaint is with the representation of massive datasets collected from webcrawlers as a complete truth from which any analyses could be run and all of the greater truths of the world could be revealed. On this note, Hardy is exactly right. Data simply is what it is, nothing more and nothing less. And any analysis that focuses on an unknown dataset is just that: an analysis without context. Which is not to say that all analyses need to be representative, but rather that all responsible analyses of good quality need to be self-aware. If you do not know what the data represents and when and how it was collected, then you cannot begin to discuss the usefulness of any analysis of it.

What is the role of Ethnography and Microanalysis in Online Research?

There is a large disconnect in online research.

The largest, highest-profile, highest-value, and most widely practiced side of online research was created out of a high demand to analyze the large amount of consumer data that is constantly being created and is largely publicly available. This tremendous demand led to research methods that were created in relative haste. Math and programming skills thrived in a realm where social science barely made a whisper. The notion of atheoretical research grew. The level of programming and mathematical competence required to do this work continues to grow higher every day, as the fields of data science and machine learning become continually more nuanced.

The lower-profile, lower-value, but increasingly widely practiced side of online research is academic research. Turning academia toward online research has been like turning a massive ocean liner. For a while online research was not well respected. At this point it is increasingly well respected, thriving in a variety of fields and in a much needed interdisciplinary way, and driven by a search for a better understanding of online behavior and better theories to drive analyses.

I see great value in the intersection between these areas. I imagine that the best programmers have a big appetite for any theory they can use to drive their work in useful and productive ways. But I don’t see this value coming to bear on the market. Hiring is almost universally focused on programmers and data scientists, and the microanalytic work that is done seems largely invisible to the larger entities out there.

It is common to consider quantitative and qualitative research methods as two separate languages with few bilinguals. At the AAPOR conference in Boston last week, Paul Lavrakas mentioned a book he is working on with Margaret Roller which expands the Total Survey Error model to both quantitative and qualitative research methodology. I spoke with Margaret Roller about the book, and she emphasized the importance of qualitative researchers being able to talk more fluently and openly about methodology and quality controls. I believe that this is a very important step for qualitative research, albeit a huge challenge in wording and framing, in part because quality frameworks lend credibility to qualitative research in the eyes of a wider research community. I wish this book a great deal of success, and I hope that it is able to find an audience and a frame outside the realm of survey research. (Although survey research has a great deal of foundational research, it is not well known outside of the field, and this book merits a wider audience.)

But outside of this book, I’m not quite sure where or how the work of bringing these two distinct areas of research together can or will be done.

Also at the AAPOR conference last week, I participated in a panel on The Role of Blogs in Public Opinion Research (intro here and summary here). Blogs serve a special purpose in the field of research. Academic research is foundational and important, but the publication rate for papers is low, and the burden of proof is high. Articles that are published are crafted as an argument. But what of the bumps along the road? The meditations on methodology that arise? Blogs provide a way for researchers to work through challenges and to publish their failures. They provide an experimental space where fields and ideas can come together that previously hadn’t mixed. They provide a space for finding, testing, and crossing boundaries.

Beyond this, they are a vehicle for dissemination. They are accessible and informally advertised. The time frame to publish is short, the burden lower (although I’d like to believe that you have to earn your audience with your words). They are a public face to research.

I hope that we will continue to test these boundaries, and to cross over barriers like quantitative and qualitative that are unhelpful and obtrusive. I hope that we will be able to see that we all need each other as researchers, and that the quality research we all want will only be achieved through that mutual recognition.