The curse of the elevator speech

Yesterday I was involved in an innocent watercooler chat in which I was asked what Sociolinguistics is. This should be an easy enough question, because I just got a master’s degree in it. But it’s not. Sociolinguistics is a large field that means different things to different people. For every way of studying language, there are social and behavioral correlates that can also be studied. So a sociolinguist could focus on any number of linguistic areas, including phonology, syntax, semantics, or, in my case, discourse. My studies focus on the ways in which people use language, and the units of analysis in my studies are above the sentence level. Because Linguistics is such a large and siloed field, explaining Sociolinguistics through the lens of discourse analysis feels a bit like explaining vegetarianism through a pescatarian lens. The real vegetarians and the real linguists would balk.

There was a follow-up question at the water cooler about y’all. “Is it a Southern thing?” My answer to this was so admittedly lame that I’ve been trying to think of a better one (sometimes even the most casual conversations linger, don’t they?).

My favorite quote of this past semester was from Jan Blommaert: “Language reflects a life, and not just a birth, and it is a life that is lived in a real sociocultural, historical and political space.” Y’all has long been considered a southernism, but when I think back on my own experience with it, it was never about southern language or southern identity. One big clue to this is that I do sometimes use y’all, but I don’t use other southern language features along with it.

If I wanted to further investigate y’all from a sociolinguistic perspective, I would take language samples, either from one or from a variety of speakers (and this sampling choice would have clear, meaningful consequences), and track the uses of y’all to see when it was invoked and what function it served. My best, uninformed guess is that it does relational work and invokes registers that are more casual and nonthreatening. But without data, that is nothing but an uninformed guess.

This work has likely been done before. It would be interesting to see.
(ETA: Here is an example of this kind of work in action, by Barbara Johnstone)

What is the role of Ethnography and Microanalysis in Online Research?

There is a large disconnect in online research.

The higher-profile, higher-value, and most widely practiced side of online research was created out of a high demand to analyze the large amount of consumer data that is constantly being created and largely publicly available. This tremendous demand led to research methods that were created in relative haste. Math and programming skills thrived in a realm where social science barely made a whisper. The notion of atheoretical research grew. The level of programming and mathematical competence required to do this work grows higher every day, as the fields of data science and machine learning become continually more nuanced.

The lower-profile, lower-value, and increasingly practiced side of online research is academic research. Turning academia toward online research has been like turning a massive ocean liner. For a while online research was not well respected. At this point it is increasingly well respected, thriving in a variety of fields and in a much-needed interdisciplinary way, and driven by a search for a better understanding of online behavior and better theories to drive analyses.

I see great value in the intersection between these areas. I imagine that the best programmers have a big appetite for any theory they can use to drive their work in useful and productive ways. But I don’t see this value coming to bear on the market. Hiring is almost universally focused on programmers and data scientists, and the microanalytic work that is done seems largely invisible to the larger entities out there.

It is common to consider quantitative and qualitative research methods as two separate languages with few bilinguals. At the AAPOR conference in Boston last week, Paul Lavrakas mentioned a book he is working on with Margaret Roller which expands the Total Survey Error model to both quantitative and qualitative research methodology. I spoke with Margaret Roller about the book, and she emphasized the importance of qualitative researchers being able to talk more fluently and openly about methodology and quality controls. I believe that this is, albeit a huge challenge in wording and framing, a very important step for qualitative research, in part because quality frameworks lend credibility to qualitative research in the eyes of the wider research community. I wish this book a great deal of success, and I hope that it finds an audience and a frame outside the realm of survey research (although survey research has a great deal of foundational research, it is not well known outside the field, and this book will merit a wider audience).

But outside of this book, I’m not quite sure where or how the work of bringing these two distinct areas of research can or will be done.

Also at the AAPOR conference last week, I participated in a panel on The Role of Blogs in Public Opinion Research (intro here and summary here). Blogs serve a special purpose in the field of research. Academic research is foundational and important, but the publication rate on papers is low, and the burden of proof is high. Articles that are published are crafted as arguments. But what of the bumps along the road? The meditations on methodology that arise? Blogs provide a way for researchers to work through challenges and to publish their failures. They provide an experimental space where fields and ideas that previously hadn’t mixed can come together. They provide a space for finding, testing, and crossing boundaries.

Beyond this, they are a vehicle for dissemination. They are accessible and informally advertised. The time frame to publish is short, the burden lower (although I’d like to believe that you have to earn your audience with your words). They are a public face to research.

I hope that we will continue to test these boundaries, to cross over barriers like quantitative and qualitative that are unhelpful and obtrusive. I hope that we will be able to see that we all need each other as researchers, and the quality research that we all want to work for will only be achieved through the mutual recognition that we need.

Revisiting Latino/a identity using Census data

On April 10, I attended a talk by Jennifer Leeman (Research Sociolinguist @Census and Assistant Professor @George Mason) entitled “Spanish and Latino/a identity in the US Census.” This was a great talk. I’ll include the abstract below, but here are some of her main points:

  • Census categories promote and legitimize certain understandings, particularly because the Census, as a tool of the government, has an appearance of neutrality
  • The Census must use categories set by the OMB
  • The distinction between race and ethnicity is fuzzy and full of history
    – In the past, this category was measured by surname, mother tongue, or birthplace
    – It was treated as hereditary (“perpetual foreigner” status)
    – Self-identification is new; previously, the interviewer would judge and record
  • In the interview context, macro & micro meet
    – Macro: demographic categories
    – Micro: interactional participant roles, indexed through labels & structure, ascribed vs. claimed identities
  • The study: 117 telephone interviews in Spanish
    – 2 questions: ethnicity & race
    – The ethnicity question includes Hispano, Latino, and Español, intended as synonyms but treated as a choice by respondents
    – Different categories than in English (adaptive design at work!)
  • The interviewers played a big role in the elicitation
    – Some interviewers emphasized standardization, a method that functions differently in different conversational contexts
    – Some interviewers provided “teaching moments” or on-the-fly definitions: official discourses mediated through interviewer ideologies, and definitions vary
  • The race question is also problematic
    – Different conceptions of Indioamericana: Central, South, or North American?
  • Role of language
    – The assumption of monolinguality is problematic; bilingualism and multilingualism are quite common, as are partial and mixed language resources
    – “White” spoken in English is different from “white” spoken in Spanish
    – Length of time in country and generation in country belie fluid borders
  • Coding process
    – Coding responses such as “American, born here”
    – ~40% of Latino respondents say “other”
    – The “other” category is ~90% Hispanic (after recoding)
  • So:
    – The likely result is one “check all that apply” question (people don’t read help texts)
    – There is an inherent belief that there is an ideal question out there with “all the right categories”; Leeman is not yet ready to believe this
    – The takeaway for survey researchers: carefully consider what you’re asking, how you’re asking it, and what information you’re trying to collect
  • See also the Pew Hispanic Center report on Latino/a identity

ABSTRACT

Censuses play a crucial role in the institutionalization and circulation of specific constructions of national identity, national belonging, and social difference, and they are a key site for the production and institutionalization of racial discourse (Anderson 1991; Kertzer & Arel 2002; Nobles 2000; Urla 1994).  With the recent growth in the Latina/o population, there has been increased interest in the official construction of the “Hispanic/Latino/Spanish origin” category (e.g., Rodriguez 2000; Rumbaut 2006; Haney López 2005).  However, the role of language in ethnoracial classification has been largely overlooked (Leeman 2004). So too, little attention has been paid to the processes by which the official classifications become public understandings of ethnoracial difference, or to the ways in which immigrants are interpellated into new racial subjectivities.

This presentation addresses these gaps by examining the ideological role of Spanish in the history of US Census Bureau’s classifications of Latina/os as well as in the official construction of the current “Hispanic/Latino/Spanish origin” category. Further, in order to gain a better understanding of the role of the census-taking in the production of new subjectivities, I analyze Spanish-language telephone interviews conducted as part of Census 2010.  Insights from recent sociocultural research on the language and identity (Bucholtz and Hall 2005) inform my analysis of how racial identities are instantiated and negotiated, and how respondents alternatively resist and take up the identities ascribed to them.

* Dr. Leeman is a Department of Spanish & Portuguese Graduate (GSAS 2000).

Digital Democracy Remixed

I recently transitioned from my study of the many reasons why the voice of DC taxi drivers is largely absent from online discussions into a study of the powerful voice of the Kenyan people in shaping their political narrative using social media. I discovered a few interesting things about digital democracy and social media research along the way, and the contrast between the groups was particularly useful.

Here are some key points:

  • The methods of sensemaking that journalists use with social media are similar to other methods of social media research, with a few key differences, the most important of which is that the bar for verification is higher
  • The search for identifiable news sources is important to journalists and stands in contrast with research methods that are built on anonymity. This means that the input journalists ultimately use will be on a smaller scale than the automated analyses of large datasets widely used in social media research.
  • The ultimate information sources for journalists will be small, but the phenomena that capture their attention will likely be big. Although journalists need to dig deep into information, something in the large expanse of social media conversation must first capture or flag their attention
  • It takes some social media savvy to catch the attention of journalists. This social media savvy outweighs linguistic correctness in the process of getting noticed. Journalists act as intermediaries between social media participants and a larger public audience, and part of that intermediary process is language correcting.
  • Social media savvy is not just about being online. It is about participating in social media platforms in a publicly accessible way, on publicly relevant topics, using the patterned dialogic conventions of the platform on a scale that can ultimately draw attention. Many people and publics go online but do not do this.

The analysis of social media data for this project was particularly interesting. My data source was the comments following this posting on the Al Jazeera English Facebook feed.

[Image: the Al Jazeera English Facebook post]

It evolved quite organically. After a number of rounds of coding I noticed that I kept drawing diagrams in the margins of some of the comments. I combined the diagrams into this framework:

[Figure: the framework of scales]

Once this framework was built, I looked closely at the ways in which participants used it. Sometimes participants made distinct discursive moves between these levels. But when I tried to map the participants’ movements on their individual diagrams, I noticed that my depictions of their movements rarely matched when I returned to a diagram. Although my coding of the framework was very reliable, my coding of the movements was not at all. This led me to notice that oftentimes the frames were being used more indexically. Participants were indexing levels of the frame, and this indexical process created powerful frame shifts. So, on the level of Kenyan politics exclusively, Uhuru’s crimes had one meaning. But juxtaposed against the crimes of other national leaders, Uhuru’s crimes had a dramatically different meaning. Similarly, when the legitimacy of the ICC was questioned, the charges took on a dramatically different meaning. When Uhuru’s crimes were embedded in the postcolonial East vs. West dynamic, they shrunk to the degree that the indictments seemed petty and hypocritical. And, ultimately, when religion was invoked, the persecution of one man seemed wholly irrelevant and sacrilegious.
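The reliability contrast described above can be checked numerically. As a toy illustration (not the actual study data; the frame codes below are invented), percent agreement and Cohen’s kappa between two coding passes can be computed in a few lines of Python:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two rounds of coding."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Raw percent agreement
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Agreement expected by chance, from each round's marginal code frequencies
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical frame codes for ten comments, coded in two separate rounds
round1 = ["kenya", "icc", "eastwest", "religion", "kenya",
          "icc", "kenya", "eastwest", "religion", "icc"]
round2 = ["kenya", "icc", "eastwest", "religion", "kenya",
          "icc", "eastwest", "eastwest", "religion", "kenya"]
print(round(cohens_kappa(round1, round2), 2))  # → 0.73
```

A kappa well below the value for the framework codes would flag exactly the situation described above: the frame coding holds up across passes while the movement coding does not.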

These powerful frame shifts enable the Kenyan public to have a powerful, narrative-changing voice in social media. And their social media savvy enables them to gain the attention of media sources that amplify their voices and thus redefine their public narrative.

[Image: readyforcnn]

Instagram is changing the way I see

I recently joined Instagram (I’m late, I know).

I joined because my daughter wanted to, because her friends had, to see what it was all about. She is artistic, and we like to talk about things like color combinations and camera angles, so Instagram is a good fit for us. But it’s quickly changing the way I understand photography. I’ve always been able to set up a good shot, and I’ve always had an eye for color. But I’ve never seriously followed up on any of it. It didn’t take long on Instagram to learn that an eye for framing and color is not enough to make for anything more than accidental great shots. The great shots that I see are the ones that pick deeper patterns or unexpected contrasts out of seemingly ordinary surroundings. They don’t simply capture beauty; they capture an unexpected natural order or a surprising contrast, or they tell a story. They make you gasp or they make you wonder. They share a vision, a moment, an insight. They’re like the beginning paragraph of a novel or the sketch outline of a poem.

Realizing that, I have learned that capturing the obvious beauty around me is not enough. To find the good shots, I’ll need to leave my comfort zone, to feel or notice differently, to wonder what or who belongs in a space and what or who doesn’t, and why any of it would capture anyone’s interest. It’s not enough to see a door. I have to wonder what’s behind it. To my surprise, Instagram has taught me how to think like a writer again, how to find hidden narratives, how to feel contrast again.


Sure this makes for a pretty picture. But what is unexpected about it? Who belongs in this space? Who doesn’t? What would catch your eye?

This kind of change has a great value, of course, for a social media researcher. The kinds of connections that people forge on social media, the different ways in which people use platforms and the ways in which platforms shape the way we interact with the world around us, both virtual and real, are vitally important elements in the research process. In order to create valid, useful research in social media, the methods and thinking of the researcher have to follow closely with the methods and thinking of the users. If your sensemaking process imitates the sensemaking process of the users, you know that you’re working in the right direction, but if you ignore the behaviors and goals of the users, you have likely missed the point altogether. (For example, if you think of Twitter hashtags simply as an organizational scheme, you’ve missed the strategic, ironic, insightful and often humorous ways in which people use hashtags. Or if you think that hashtags naturally fall into specific patterns, you’re missing their dialogic nature.)
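The hashtag point has a practical side for researchers: extracting the tags is the easy, mechanical part; understanding their strategic and dialogic use is the hard part. A minimal sketch with Python’s standard library (the posts below are invented for illustration):

```python
import re
from collections import Counter

def extract_hashtags(text):
    """Return the hashtags in a post, lowercased, in order of appearance."""
    return [tag.lower() for tag in re.findall(r"#(\w+)", text)]

# Hypothetical posts: note the tags are doing stance work, not filing
posts = [
    "Someone tell CNN we are #ReadyForCNN",
    "Peaceful queues all morning. #KenyaDecides #ReadyForCNN",
    "Still waiting for that 'breaking news' crew... #SomeoneTellCNN",
]
counts = Counter(tag for post in posts for tag in extract_hashtags(post))
print(counts.most_common(1))  # → [('readyforcnn', 2)]
```

The frequency table is where this kind of script stops; the irony and dialogue carried by tags like these is exactly what a purely organizational reading of hashtags misses.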

My current research involves the cycle between social media and journalism, and it runs across platforms. I am asking questions like ‘what gets picked up by reporters and why?’ and ‘what is designed for reporters to pick up?’ Some of these questions lead me to examine the differences between funny memes that circulate like wildfire through Twitter, leading to trends and a wider stage, and the more in-depth conversation on public Facebook pages, which cannot trend as easily and is far less punchy and digestible. What role does each play in the political process and in constituting news?

Of course, my current research asks more questions than these, but it’s currently under construction. I’d rather not invite you into the work zone until some of the pulp and debris have been swept aside…

Total Survey Error: as Iconic as the Statue of Liberty herself?

In Jan Blommaert’s book, The Sociolinguistics of Globalization, I learned about the iconicity of language. Languages, dialects, phrases, and words have the potential to be as iconic as the Statue of Liberty herself. As I read Blommaert’s book, I am also reading about Total Survey Error, which I believe to be an iconic concept in the field of survey research.

Total Survey Error (TSE) is a relatively new, albeit very comprehensive framework for evaluating a host of potential error sources in survey research. It is often mentioned by AAPOR members (national and local), at JPSM classes and events, and across many other events, publications and classes for survey researchers. But here’s the catch: TSE came about after many of us entered the field. In fact, by the time TSE debuted and caught on as a conceptual framework, many people had already been working in the field for long enough that a framework didn’t seem necessary or applicable.

In the past, survey research was a field that people grew into. There were no degree or certificate programs in survey research. People entered the field from a variety of educational and professional backgrounds and worked their way up through the ranks from data entry, coder or interviewing positions to research assistant and analyst positions, and eventually up to management. Survey research was a field that valued experience, and much of the essential job knowledge came about through experience. This structure strongly characterizes my own office, where the average tenure is fast approaching two decades. The technical and procedural history of the department is alive and well in our collections of artifacts and shared stories. We do our work with ease, because we know the work well, and the team works together smoothly because of our extensive history together. Challenges or questions are an opportunity for remembering past experiences.

Programs such as the Joint Program in Survey Methodology (JPSM, a joint venture between the University of Michigan and the University of Maryland) are relatively new, arising, for the most part, once many survey researchers were well established in their routines. Scholarly writings and journals multiplied with the rise of the academic programs. New terms and new methods sprang up. The field gained an alternate mode of entry.

In sociolinguistics, we study evidentiality, because people value different forms of evidence. Toward this end, I did a small study of survey researchers’ language use and their choice of evidentials and discovered a very stark split between those who used experience to back up claims and those who relied on research to back up claims. This stark difference matched up well with my own experiences. In fact, when I coach jobseekers who are looking for survey research positions, I draw on this distinction and recommend that they listen carefully to the types of evidentials they hear from the people interviewing them and try to provide evidence in the same format. The divide may not be visible from outside the field, but it is a strong underlying theme within it.

The divide is not immediately visible from the outside because the face of the field is formed by academic and professional institutions that readily embrace the academic terminology. The people who participate in these institutions and organizations tend to be long term participants who have been exposed to the new concepts through past events and efforts.

But I wonder sometimes whether the overwhelming public orientation to these methods doesn’t act to exclude some longtime survey researchers in some ways. I wonder whether some excellent knowledge and history get swept away with the new. I wonder whether institutions that represent survey research represent the field as a whole. I wonder what portion of the field is silent, unrepresented or less connected to collective resources and changes.

Particularly as the field encounters a new set of challenges, I wonder how well prepared it will be: not just those who have been following these developments closely, but also those who have continued steadfast and strong, with limited errors due not to TSE adherence but to the strength of their experience. To me, the Total Survey Error method is a powerful symbol of the changes afoot in the field.

For further reference, I’m including a past AAPOR presidential address by Robert Groves:

[Slides: Groves AAPOR presidential address]

Proceedings of the Fifty-First Annual Conference of the American Association for Public Opinion Research
Source: The Public Opinion Quarterly, Vol. 60, No. 3 (Autumn, 1996), pp. 471-513
ETA: other references:

Bob Groves: The Past, Present and Future of Total Survey Error

Slideshow summary of above article

Is there Interdisciplinary hope for Social Media Research?

I’ve been trying to wrap my head around social media research for a couple of years now. I don’t think it would be as hard to understand from any one academic or professional perspective, but, from an interdisciplinary standpoint, the variety of perspectives and the disconnects between them are stunning.

In the academic realm:

There is the computer science approach to social media research. From this standpoint, we see the fleshing out of machine learning algorithms in a stunning horserace of code development across a few programming languages. This is the most likely to be opaque, proprietary knowledge.

There is the NLP or linguistic approach, which overlaps to some degree with the CS approach, although it is often more closely tied to grammatical rules. Here we see grammatical parsers, dictionary development, and APIs or shared programming modules, such as NLTK or GATE. Linguistics is divided as a discipline, and many of these divisions have filtered into NLP.

Both the NLP and CS approaches can be fleshed out, trained, or used on just about any data set.
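To make the contrast concrete: toolkits like NLTK and GATE package exactly these kinds of components (tokenizers, taggers, curated dictionaries). A deliberately tiny, dependency-free sketch of the dictionary-lookup style of tagging follows; the lexicon here is invented and a few words long, where real systems ship with large curated resources:

```python
import re

# A toy lexicon standing in for the dictionaries NLP pipelines ship with
LEXICON = {
    "the": "DET", "a": "DET",
    "cat": "NOUN", "mat": "NOUN",
    "sat": "VERB",
    "on": "PREP",
}

def tokenize(text):
    """Split text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def tag(tokens, lexicon=LEXICON):
    """Look each token up in the lexicon; unknown words get 'UNK'."""
    return [(tok, lexicon.get(tok, "UNK")) for tok in tokens]

print(tag(tokenize("The cat sat on the mat")))
# → [('the', 'DET'), ('cat', 'NOUN'), ('sat', 'VERB'),
#    ('on', 'PREP'), ('the', 'DET'), ('mat', 'NOUN')]
```

Everything past this lookup step (disambiguating words with multiple tags, handling unknown words) is where the real parsers and trained models earn their keep.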

There are the discourse approaches. Discourse is an area of linguistics concerned with meaning above the level of the sentence. This type of research can follow more of a strict Conversation Analysis approach or a kind of Netnography approach. This school of thought is more concerned with context as a determiner or shaper of meaning than the two approaches above.

For these approaches, the dataset cannot just come from anywhere. The analyst should understand where the data came from.

One could divide these traditions by programming skills, but enough of us do work on both sides that the distinction is superficial. Although, generally speaking, the deeper one’s programming or qualitative skills, the less likely one is to cross over to the other side.

There is also a growing tradition of data science, which is primarily quantitative. Although I have some statistical background and work with quantitative data sets every day, I don’t have a good understanding of data science as a discipline. I assume that the growing field of data visualization would fall into this camp.

In the professional realm:

There are many companies in horseraces to develop the best systems first. These companies use catchphrases like “big data” and “social media firehose” and often focus on sentiment analysis or topic analysis (usually topics are gleaned through keywords). These companies primarily market to the advertising industry and market researchers, often with inflated claims of accuracy, which are possible because of the opacity of their methods.

There is the realm of market research, which is quickly becoming dependent on fast, widely available knowledge. This knowledge is usually gleaned through companies involved in the horserace, without much awareness of the methodology. There is an increasing need for companies to be aware of their brand’s mentions and interactions online, in real time, and as they collect this information it is easy, convenient and cost effective to collect more information in the process, such as sentiment analyses and topic analyses. This field has created an astronomically high demand for big data analysis.
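Much of the sentiment analysis sold in this space boils down, at its core, to keyword counting. A crude sketch of that style of analysis (the word lists are invented) shows both how cheap it is to run at scale and why the accuracy claims deserve scrutiny:

```python
import re

# Toy word lists; commercial systems use much larger, weighted lexicons
POSITIVE = {"love", "great", "excellent", "happy"}
NEGATIVE = {"hate", "terrible", "awful", "angry"}

def keyword_sentiment(text):
    """Score a post by counting matches against fixed word lists."""
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(keyword_sentiment("I love this brand, great service"))   # → positive
print(keyword_sentiment("Oh great, another terrible update"))  # → neutral
```

The second example is the catch: the sarcastic “great” cancels the genuinely negative “terrible,” and no amount of scale fixes a method that cannot see context.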

There is the traditional field of survey research. This field is methodical and error focused. Knowledge is created empirically and evaluated critically. Every aspect of the survey process is highly researched and understood in great depth, so new methods are greeted with a natural skepticism. Although survey researchers have traditionally been the anchors of good professional research methods and the leaders in the research field, they are largely outside of the big data rush. Survey researchers tend to value accuracy over timeliness, so the big, fast world of big data, with its dubious ability to create representative samples, holds little allure or relevance.

The wider picture

In the wider picture, we have discussions of access and use. We see a growing proportion of the population coming online on an ever greater variety of devices. On the surface, the digital divide is fast shrinking (albeit still significant). Some of the digital access debate has been expanded into an understanding of differential use: essentially, that different people do different activities while online. I want to take this debate further by focusing on discursive access, or the digital representation of language ideologies.

The problem

The problem with such a wide spread of methods, needs, focuses and analytic traditions is that there isn’t enough crossover. It is very difficult to find work that spreads across these domains. The audiences are different, the needs are different, the abilities are different, and the professional visions are dramatically different across traditions. Although many people are speaking, it seems like people are largely speaking within silos or echo chambers, and knowledge simply isn’t trickling across borders.

This problem has rapidly grown because the underlying professional industries have quickly calcified. Sentiment analysis is not the revolutionary answer to the text analysis problem, but it is good enough for now, and its use is skyrocketing. Academia is moving too slowly for the demands of industry and not addressing industry’s needs, so other analytic techniques are not being adopted.

Social media analysis would best be accomplished by a team of people, each with different training. But it is not developing that way. And that, I believe, is a big (and fast growing) problem.

Dispatch from the quantitative | qualitative border

On Tuesday evening I attended my first WAPA meeting (Washington Association of Professional Anthropologists). This group meets monthly, first with a happy hour and then with a speaker. Because I have more of a quantitative background, the work of professional anthropologists really blows my mind. The topics are wide ranging and the work interesting and innovative. I’ve been sorry to miss so many of their gatherings.

This week’s topic was near and dear to my heart in two ways.

1. The work was done in a survey context as a qualitative investigation preceding the development of survey questions. As a professional survey methodologist, I have worked through the surprisingly complicated question writing process many hundreds of times, so this approach really fascinates me!

2. The work surrounded the topic of childbirth. As a mother of two and a [partially] trained birth assistant, I love to talk about childbirth.

The purpose of the study at hand was to explore infant mortality in greater depth by investigating certain aspects of the delivery process. The topics of interest included:

– whether the birth was attended by a professional or not
– whether the birth was at home or in a medical facility
– delivery of the placenta
– how soon after the birth the baby was wiped
– cord cutting and tying
– whether the baby was swaddled and whether the baby’s head was covered
– how soon the baby was bathed

The study was based on 80 respondents from each of two countries (half facility births, half home births; half moms of newborns, half moms of 1-2 year olds). The researchers collected two kinds of data: extensive unstructured interviews and survey questions. The interviews were coded in Atlas.ti into specific, identifiable, repeated events relevant to infant mortality and then placed onto a timeline. The timeline guided the recommended order of the survey questions.

One audience member shared that she would have collected stories of “what is a normal childbirth?” from participants in addition to the women’s personal stories. Her goal with this tactic was to collect the language with which people usually discuss these events in childbirth. She mentioned that her field was linguistic anthropology. The language she was talking about is referred to by survey researchers as “native terms”: essentially, the terms that people normally use when discussing a given topic. One of the goals of question writing is to write a question using the terms that a respondent would naturally use to classify their response, making the response process easier for the respondent and collecting higher quality data. The presenters mentioned that, although they did not collect normative stories, collecting native terms was a part of their research process and recommendations.

The topics of focus are problematic ones to investigate. Most women can tell whether or not they gave birth in a facility and whether or not the birth was attended by a professional. Women can usually remember their labor and delivery in detail (usually for the rest of their lives), as well as the first time they held and fed their babies. Often women can also remember the delivery of the placenta or whether or not they hemorrhaged or tore significantly during the birth process.

But other aspects of the birth, such as the cord cutting and tying and the first wiping and swaddling of the baby, are usually done by someone other than the mother (if there is someone else present). They often don’t command the attention of the mother, who is full of emotion and adrenaline and catching her breath from an all-encompassing, life-changingly powerful experience. These moments are often not as memorable as others, and the mothers are often not as fully aware of them or able to report on them.

I wondered whether the moms were able to use the same level of detail in retelling these parts of their stories. Was there any indication that these sections were their own personal stories and not a general recounting of events as they are supposed to happen? In survey research, we talk about satisficing: providing an answer because an answer is expected, not because it is correct. In societies where babies are frequently born at home, people often grow up around childbirth and know the general, expected order of events. How would the results of the study have been different with a slightly different approach? Instead of assuming that the mothers would be able to recount all of these details of their own experiences, the researchers could have taken a deeper look at who performed the target activities, how detailed an account of the activities the mothers were able to provide, and the nature of the moms’ involvement or role in those activities.

I wondered whether working with this alternative approach would have led to questions more like “The next few questions refer to the moments after your baby was born and the first time you held and nursed your baby. Was the baby already wiped when you first held and nursed them? Was the baby’s cord already cut and tied? Was the baby already swaddled? Was the baby’s head already covered?” Although questions like these wouldn’t separate out the first 5 minutes from the first 10, they would likely be easier for the mom to answer and yield more complete and accurate responses.

All in all, this event was a fantastic one. I learned about an area of research that I hadn’t known existed. The speaker was great, and the audience was engaged. If you have an opportunity to attend a WAPA event, I highly recommend it.

Turns out Ethnography happens one slice at a time

Some of you may have noticed that I promised to report some research and then didn’t.

Last semester, for my Ethnography of Communication class, I did an Ethnography of DC taxi drivers. The theme of the Ethnography was “the voice of the drivers.” It was multilayered, and it involved data from a great variety of sources. I had hoped to share my final paper for the class here, but that won’t work for three reasons.

1.) The nature of Ethnography. Ethnography involves collecting a great deal of data and then choosing what to report, in what way, and in what context. The goal of the final paper was to reflect on the methodology. This was an important exercise, but I really wanted to share more of my findings and less of my methodology here.

2.) The particular aspect of my findings that I most want to share here has to do with online discourse. Specifically, I want to examine the lack of representation of the drivers’ perspective online. There are quite a few ways to accomplish this. I have tried a number of them, using different slices of data and different analytic strategies, but I haven’t yet decided which set of data or method of analysis is best. Still, I am a very lucky researcher. Next week I’m headed to a workshop at Radboud University in Nijmegen, Netherlands. The workshop is on the Microanalysis of Online Discourse. I am eager to bring my data and methodological questions and to receive insight from such an amazing array of researchers. I am also very eager to see what they bring!

Much of the discussion in the analysis of online discourse either excludes the issue of representation altogether or focuses on it entirely. Social media is often hailed as the great democratizer of communication. Internet access was long seen as the biggest obstacle to this new democracy. From this starting point, much of the research has evolved to consider more of the nuances of differential use, including the complicated nature of internet access as well as the behavior and goals of internet users. This part of my findings is an example of differential use and of different styles of participation. Working with this data has changed the way I see social media and the way I understand the democratization of news.

3.) Scope. The other major reason why I haven’t shared my findings is the sheer scope of this project. I was fortunate enough to have taken only one class last semester, which left me the freedom to work much harder on the project. Also, as a working/student mom, I chose a project that involved my whole family in an auto-ethnographic way, so much of my work brought me closer to my family rather than taking me away from them (spending time away from family to study is one of the hardest parts of working student motherhood!).

I have amassed quite a bit of data at this point, and I plan to write a few different papers using it.

Stay tuned, because I will release slices of it. But have some patience, because each slice will only be released in its own good time.


At this point, I feel the need to reference the Hutzler Banana Slicer

Turns out, Ethnography is more like this:

 

than like this:

Data Storytelling

In the beginning of our Ethnography of Communication class, one of the students asked about the kinds of papers one writes about an ethnography. It seemed like a simple question at the time. In order to report on ethnographic data, the researcher chooses a theme and then pulls out the parts of their data that fit the theme. Now that I’m at the point in my ethnography where I’m choosing what to report, I can safely say that this question is not one with an easy answer.

At this point, I’ve gathered a tremendous amount of data about DC taxi drivers. I’ve already given my final presentation for my class and written most of my final paper. But the data gathering phase hasn’t ended yet. I’ve been wondering whether I’ve gathered enough data to write a book, and I probably could, but that still doesn’t make the project feel complete. I don’t feel like the window I’ve carved is large enough to do this topic any justice.

The story that I set out to tell about the drivers is one of their absence in the online public sphere. As the wife of a DC driver, I was sick and tired of seeing blog posts and newspaper articles with seemingly unending streams of offensive, ignorant, or simply one-sided comments. This story turns out to have many layers: it goes far beyond issues of internet access, delves deeply into matters of differential use of technology, and strikes fractures into the soil of the grand potential of participatory democracy. It is also a story grounded in countless daily interactions, involving a large number of participants and situations. The question is large, the data abundant, and the paths to the story many. Each narrower path demands a depth that is hungry for more data and more analysis. Each answer is defined by more questions. More specifically: do I start with the rides? With a specific ride? With the drivers? With a specific driver? With a specific piece of legislation? With one online discussion or theme? How can I make sure that my analysis is grounded and objective? How far do I trace the story, and which parts of the story does it leave out? What happens with the rest of the story? What is my responsibility, and to whom?

This paper will clearly not be the capstone to the ethnography, just one story told through the data I’ve gathered over the past few months. More stories can be told, and will be told, with the data. Specifically, I’m hoping to delve more deeply into the drivers’ social networks and their role in information exchange. And into the fallout from stylistic differences in online discussions. And, more prescriptively, into ways that drivers’ voices can be better represented in the public sphere. And maybe more?

It feels strange to write a paper that isn’t descriptive of the data as a whole. Every other project that I’ve worked on has led to a single publication that summarized the whole set. It seems strange, coming from a quantitative perspective where the data strongly defines the limits of what can and cannot be said in the report and what is more or less important to include, to have a choice of data and, more importantly, a choice of story to tell. Instead of pages of numbers to look through, compare, and describe, I’m entering the final week of this project with the same cloud of ambiguity that has lingered throughout. And I’m looking for ways that my data can determine what can and cannot be reported on and what stories should be told. Where, in this sea of data, is my life raft of objectivity? (Hear that note of drama? That comes from the lack of sleep and heightened anxiety that finals bring about: one part of formal education that I will not miss!)

I have promised to share my paper here once it has been written. I might end up making some changes before sharing it, but I will definitely share it. My biggest hope is that it will inspire some fresh, better informed conversation on the taxi situation in DC and on what it means to be represented in a participatory democracy.