The data Rorschach test, or what does your research say about you?

Sure, there is a certain abundance of personality tests we could take: inkblot tests, standardized cognitive tests, magazine quizzes, etc. But we researchers take Rorschach tests of our own every day. There is a series of questions we ask as part of the research process, like:

What data do we want to collect or use? (What information is valuable to us? What do we call data?)

What format are we most comfortable with it in? (How clean does it have to be? How much error are we comfortable with? Does it have to resemble a spreadsheet? How will we reflect sources and transformations? What can we equate?)

What kind of analyses do we want to conduct? (This is usually a great time for our preexisting assumptions about our data to rear their heads. How often do we start by wondering if we can confirm our biases with data?!)

What results do we choose to report? To whom? How will we frame them?

If nothing else, our choices regarding our data reflect many of our values as well as our professional and academic experiences. If you’ve ever sat in on a research meeting, you know that “you want to do WHAT with which data?!” feeling that comes when someone suggests something that you had never considered.

Our choices also speak to the research methods that we are most comfortable with. Last night I attended a meetup event about Natural Language Processing, and it quickly became clear that the mathematician felt most comfortable when the data was transformed into numbers, the linguist felt most comfortable when the data was transformed into words and lexical units, and the programmer was most comfortable focusing on the program used to analyze the data. These three researchers confronted similar tasks, but their three different methods will yield very different results.

As humans, we have a tendency to make assumptions about the people around us, either by assuming that they are very different or very much the same. Those of you who have seen or experienced a marriage or serious long-term partnership up close are probably familiar with the surprised feeling we get when we realize that one partner thinks differently about something that we had always assumed they would not differ on. I remember, for example, that small feeling that my world was upside down just a little bit when I opened a drawer in the kitchen and saw spoons and forks together in the utensil organizer. It had simply never occurred to me that anyone would mix the two, especially not my own husband!

My main point here is not about my husband’s organizational philosophy. It’s about the different perspectives inherently tied up in the research process. It can be hard to step outside our own perspective enough to see what pieces of ourselves we’ve imposed on our research. But that awareness is an important element in the quality control process. Once we can see what we’ve done, we can think much more carefully about the strengths and weaknesses of our process. If you believe there is only one way, it may be time to take a step back and gain a wider perspective.


Statistical Text Analysis for Social Science: Learning to Extract International Relations from the News

I attended another great CLIP event today, Statistical Text Analysis for Social Science: Learning to Extract International Relations from the News, by Brendan O’Connor, CMU. I’d love to write it up, but I decided instead to share my notes. I hope they’re easy to follow. Please feel free to ask any follow-up questions!

 

Computational Social Science

– Then: the 1890 census tabulator, a hand-cranked punch-card machine

– Now: automated text analysis

 

Goal: develop methods for predicting conflicts, etc.

– events = data

– extracting events from news stories

– information extraction from large scale news data

– goal: time series of country-country interactions

– who did what to whom? in what order?

Long history of manual coding of this kind of data for this kind of purpose

– more recently: rule-based pattern extraction, TABARI

– → developing event types (diplomatic events, aggressions, …) from verb patterns

– TABARI: 15,000 hand-engineered coding patterns developed over two decades → very difficult, validity issues, changes over time

– all developed by political scientists (Schrodt 1994, in the MUC days) – still a common poli sci methodology

– GDELT project: software, etc., with pre- & postprocessing

http://gdelt.utdallas.edu

– Sources: mainstream media news, English language, select sources

 

THIS research

– automatic learning of event types

– extract events/ political dynamics

→ use Bayesian probabilistic methods

– using social context to drive unsupervised learning about language

– data: Gigaword corpus (news articles) – a few extra sources (end result mostly AP articles)

– named entities- dictionary of country names

– news biases difficult to take into account (inherent complication of the dataset)(future research?)

– main-verb-based dependency path (so data is POS-tagged & then subject/object-tagged)

– 3 components: source (acting country) / recipient (recipient country) / predicate (dependency path) – see the sketch after these notes

– loosely Dowty 1990

– International Relations (IR) is heavily concerned with reciprocity- that affects/shapes coding, goals, project dynamics (e.g. timing less important than order, frequency, symmetry)

– parsing- core NLP

– filters (e.g. Georgia country vs. Georgia state) (manual coding statements)

– analysis more focused on verb than object (e.g. text following “said that” excluded)

– 50% accuracy finding main verb (did I hear that right? ahhh pos taggers and their many joys…)

– verb: “reported that” – complicated: who is a valid source? reported events not necessarily verified events

– verb: “know that” another difficult verb
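
To make the source/recipient/predicate idea concrete, here is a minimal sketch of that extraction step as I understood it. This is my own illustration using spaCy's dependency parses and a toy country dictionary, not the speaker's actual pipeline (which, per the notes above, uses the full dependency path as the predicate):

```python
# My own sketch of the extraction step described above -- not the
# speaker's actual pipeline. Requires: pip install spacy &&
# python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Toy stand-in for the country-name dictionary mentioned in the notes.
COUNTRIES = {"Israel", "Russia", "Georgia", "France"}

def extract_events(text):
    """Yield (source, recipient, predicate) triples from parsed sentences."""
    for sent in nlp(text).sents:
        verb = sent.root  # main verb of the sentence
        sources = [t for t in verb.children
                   if t.dep_ == "nsubj" and t.text in COUNTRIES]
        recipients = [t for t in verb.subtree
                      if t.dep_ in ("dobj", "pobj") and t.text in COUNTRIES]
        for s in sources:
            for r in recipients:
                # The real model keys on the full dependency path;
                # the bare verb lemma here is a simplification.
                yield (s.text, r.text, verb.lemma_)

for event in extract_events("Russia criticized Georgia."):
    print(event)  # ('Russia', 'Georgia', 'criticize')
```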

 The models:

– dyads = country pairs

– each w/ timesteps

– for each country pair, a time series (sketched below)

– deduping necessary because of duplicate news coverage (normalizing)

– more than one article may cover a single event

– effect of this mitigated because measurement in the model focuses on the timing of events more than the number of events
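
As a concrete picture of that data structure (my own sketch, not from the talk): each directed country pair gets a per-timestep vector of event-type counts, something like:

```python
# My own sketch (not from the talk) of the dyad time-series structure:
# one vector of event-type counts per (source, recipient) pair per timestep.
from collections import defaultdict

NUM_EVENT_TYPES = 10  # assumed: the number of learned event classes
NUM_TIMESTEPS = 52    # assumed: e.g. weekly time slices over a year

# dyads[(source, recipient)][t][k] = count of event type k at timestep t
dyads = defaultdict(
    lambda: [[0] * NUM_EVENT_TYPES for _ in range(NUM_TIMESTEPS)])

def record_event(source, recipient, event_type, timestep):
    """Tally one extracted event into the dyad's time series."""
    dyads[(source, recipient)][timestep][event_type] += 1

record_event("ISR", "PSE", event_type=3, timestep=17)
```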

1st model

– independent contexts

– time slices

– figure for expected frequency of events (talking most common, e.g.)

2nd model

– temporal smoothing: assumes a smoothness in event transitions

– possible to add coefficients that reflect common dynamics: what normally leads to what? (opportunity for more research)

– blocked Gibbs sampling

– learned event types

– positive valence

– negative valence

– “say” ← some noise

– clusters: verbal conflict, material conflict, war terms, …

How to evaluate?

– need more checks of reasonableness, more input from poli sci & international relations experts

– project end goal: do political sci

– one evaluative method: qualitative case study (face validity)

– used the most common dyad: Israeli–Palestinian

– event class over time

– e.g. diplomatic actions over time

– where are the spikes, what do they correspond with? (essentially precision & recall)

– another event class: police action & crime response

– Great point from the audience on face validity (“my model says x, then I go to the data”): you can’t develop labels from the data; labels should come from training data, not testing data

– Now let’s look at a small subset of words to go deeper

– semantic coherence?

– does it correlate with conflict?

– quantitative

– lexical scale evaluation

– compare against TABARI (lucky to have that as a comparison!!)

– another element in TABARI: expert assigned scale scores – very high or very low

– validity debatable, but it’s a comparison of sorts

– granularity invariance

– lexical scale impurity

Comparison sets

– wordnet – has synsets – some verb clusters

– wordnet is low performing, generic

– wordnet is a better bar than beating random clusters

– this model should perform better because of topic specificity

 

“Gold standard” method: rarely a real gold standard; often the gold standards themselves are problematic

– in this case: militarized interstate dispute dataset (wow, lucky to have that, too!)

Looking into semi-supervision, to create a better model

Speaker website:

http://brenocon.com

 

Q & A:

developing a user model

– user testing

– evaluation from users & not participants or collaborators

– terror & protest more difficult linguistic problems

 

more complications to this project:

– Taiwan, Palestine, Hezbollah- diplomatic actors, but not countries per se

Planning a second “Online Research, Offline Lunch”

In August we hosted the first Online Research, Offline Lunch for researchers involved in online research in any field, discipline or sector in the DC area. Although Washington DC is a great meeting place for specific areas of online research, there are few opportunities for interdisciplinary gatherings of professionals and academics. These lunches provide an informal opportunity for a diverse set of online researchers to listen and talk respectfully about our interests and our work and to see our endeavors from new, valuable perspectives. We kept the first gathering small. But the enthusiasm for this small event was quite large, and it was a great success! We had interesting conversations, learned a lot, made some valuable connections, and promised to meet again.

Many expressed interest in the lunches but weren’t able to attend. If you have any specific scheduling requests, please let me know now. Although I certainly can’t accommodate everyone’s preferences, I will do my best to take them into account.

Here is a form that can be used to add new people to the list. If you’re already on the list you do not need to sign up again. Please feel free to share the form with anyone else who may be interested:

 

Data science can be pretty badass, but…

Every so often I’m reminded of the power of data science. Today I attended a talk entitled “Spatiotemporal Crime Prediction Using GPS & Time-tagged Tweets” by Matt Gerber of the UVA PTL. The talk was a UMD CLIP event (great events! Go if you can!).

Gerber began by introducing a few of the PTL projects, which include:

  • Developing automatic detection methods for extremist recruitment in the Dark Net
  • Turning medical knowledge from large bodies of unstructured texts into medical decision support models
  • Many other cool initiatives

He then introduced the research at hand: developing predictive models for criminal activity. The control model in this case uses police report data from a given period of time to map incidents onto a map of Chicago using latitude and longitude. He then superimposed a grid on the map and collapsed incidents down into a binary presence vs absence model: each square in the grid either had one or more crimes (1) or no crimes (-1). This was his training data. He built a binary classifier, used logistic regression to compute probabilities, and layered a kernel density estimator on top. He used this control model to compare with a model built from unstructured text.

The unstructured text consisted of GPS-tagged Twitter data (roughly 3% of tweets) from the Chicago area. He drew the same grid using longitude and latitude coordinates and tossed all of the tweets from each “neighborhood” (during the same one-month training window) into the boxes. Then, treating each box as one document for a document-based classifier, he subjected each document to topic modeling (using LDA & MALLET). He focused on crime-related words and topics to build models to compare against the control models. He found that the predictive value of both models was similar when compared against actual crime reports from days within the subsequent month.
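
To make the pipeline concrete, here is a rough sketch of how such a model might be wired together. This is my own reconstruction with scikit-learn and made-up inputs, not Gerber's code (the talk used MALLET for the topic modeling; sklearn's LDA stands in for it here):

```python
# Rough sketch (my reconstruction, not Gerber's code) of the grid-based
# crime-prediction pipeline described above, with hypothetical inputs.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Assumed inputs: tweets_by_cell maps a grid-cell id to the concatenated
# text of all geotagged tweets in that cell during the training month;
# crime_by_cell holds the 1 / -1 presence-vs-absence labels above.
tweets_by_cell = {0: "turnup tonight on the block ...",
                  1: "quiet afternoon at the park ..."}
crime_by_cell = {0: 1, 1: -1}

cells = sorted(tweets_by_cell)
docs = [tweets_by_cell[c] for c in cells]
y = np.array([crime_by_cell[c] for c in cells])

# One box = one document: bag-of-words counts, then LDA topic proportions.
counts = CountVectorizer(max_features=5000).fit_transform(docs)
topics = LatentDirichletAllocation(n_components=20).fit_transform(counts)

# Logistic regression over topic features gives a crime probability per
# cell; a kernel density estimator can then be layered on top.
clf = LogisticRegression().fit(topics, y)
print(clf.predict_proba(topics)[:, 1])  # P(crime) for each grid cell
```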

This is a basic model. The layering can be further refined and better understood (there was some discussion about the word “turnup,” for example). Many more interesting layers can be built into it in order to improve its predictive power, including more geographic features, population densities, some temporal modeling to accommodate the periodic nature of some crimes (e.g. most robberies happen during the work week, while people are away from their homes), a better accommodation for different types of crime, and a host of potential demographic and other variables.

I would love to dig deeper into this data to gain a deeper understanding of the conversation underlying the topic models. I imagine there is quite a wealth of deeper information to be gained as well as a deeper understanding of what kind of work the models are doing. It strikes me that each assumption and calculation has a heavy social load attached to it. Each variable and each layer that is built into the model and roots out correlations may be working to reinforce certain stereotypes and anoint them with the power of massive data. Some questions need to be asked. Who has access to the internet? What type of access? How are they using the internet? Are there substantive differences between tweets with and without geotagging? What varieties of language are the tweeters using? Do classifiers take into account language variation? Are the researchers simply building a big data model around the old “bad neighborhood” notions?

Data is powerful, and the predictive power of data is fascinating. Calculations like these raise questions in new ways, remixing old assumptions into new correlations. Let’s not forget to question new methods, put them into their wider sociocultural contexts and delve qualitatively into the data behind the analyses. Data science can be incredibly powerful and interesting, but it needs a qualitative and theoretical perspective to keep it rooted. I hope to see more, deeper interdisciplinary partnerships soon, working together to build powerful, grounded, and really interesting research!

 

Rethinking Digital Democracy- More reflections from #SMSociety13

What does digital democracy mean to you?

I presented this poster: Rethinking Digital Democracy v4 at the Social Media and Society conference last weekend, and it demonstrated only one of many images of digital democracy.

Digital democracy was portrayed at this conference as:

having a voice in the local public square (Habermas)

making local leadership directly accountable to constituents

having a voice in an external public sphere via international media sources

coordinating or facilitating a large scale protest movement

the ability to generate observable political changes

political engagement and/or mobilization

a working partnership between citizenry, government and emergency responders in crisis situations

a systematic archive of government activity brought to the public eye. “Archives can shed light on the darker places of the national soul” (Wilson 2012)

One presenter had the most systematic representation of digital democracy. Regarding the recent elections in Nigeria, he summarized digital democracy this way: “social media brought socialization, mobilization, participation and legitimization to the Nigerian electoral process.”

Not surprisingly, different working definitions brought different measures. How do you know that you have achieved digital democracy? What constitutes effective or successful digital democracy? And what phenomena are worthy of study and emulation? The scope of these questions and answers varied greatly across the examples raised during the conference, which included:

citizens in the recent Nigerian election

citizens who tweet during a natural disaster or active crisis situation

citizens who changed the international media narrative regarding the recent Kenyan elections and ICC indictment

Arab Spring actions, activities and discussions

“The power of the people is greater than the people in power”: a perfect quote related to the Arab revolutions, from a slide by Mona Kasra

the recent Occupy movement in the US

tweets to, from and about the US congress

and many more that I wasn’t able to catch or follow…

In the end, I don’t have a suggestion for a working definition or measures, and my coverage here really only scratches the surface of the topic. But I do think that it’s helpful for people working in the area to be aware of the variety of events, people, working definitions and measures at play in wider discussions of digital democracy. Here are a few questions for researchers like us to ask ourselves:

What phenomenon are we studying?

How are people acting to affect their representation or governance?

Why do we think of it as an instance of digital democracy?

Who are “the people” in this case, and who is in a position of power?

What is our working definition of digital democracy?

Under that definition, what would constitute effective or successful participation? Is this measurable, codeable or a good fit for our data?

Is this a case of internal or external influence?

And, for fun, a few interesting areas of research:

There is a clear tension between the ground-up perception of the democratic process and the degree of cohesion necessary to effect change (e.g. Occupy & the anarchist framework)

Erving Goffman’s participation framework is also fertile ground for research in digital democracy (author/animator/principal <– think online petition and e-mail drives, for example, and the relationship between reworded messages, perceived efficacy and the reception that the e-mails receive).

It is clear that social media helps people have a voice and connect in ways that they haven’t always been able to. But this influence has yet to take any firm shape either among researchers or among those who are practicing or interested in digital democracy.

I found this tweet particularly apt, so I’d like to end on this note:

“Direct democracy is not going to replace representative government, but supplement and extend representation” #YES #SMSociety13

— Ray MacLeod (@RayMacLeod) September 14, 2013

 

 

Language use & gaps in STEM education

Today our microanalytic research group focused on videos of STEM education.

 

Watching STEM classes reminds me of a field trip a fellow researcher and I took to observe a physics class that used project-based learning, a more hands-on and collaborative teaching approach that is gaining popularity among physics educators as an alternative to traditional lecture. We observed an optics lab at a local university, and after the class we spoke about what we had observed. Whereas the other researcher had focused on the optics and math, I had been captivated by the awkwardness of the class. I had never envisioned the PJBL process to be such an awkward one!

 

The first video that we watched today involved a student interchangeably using the terms chart and graph and softening their use with the term “thing.” There was some debate among the researchers as to whether the student had known the answer but chosen to distance himself from the response or whether the student was hedging because he was uncertain. The teacher responded by telling the student not to talk about things, but rather to talk to her in math terms.

 

What does it mean to understand something in math? The math educators in the room made it clear that a lack of the correct terminology signaled that the student didn’t necessarily understand the subject matter. There was no way for the teacher to know whether the student knew the difference between a chart and a graph from their use of the terms. The conversation on our end was not about the conceptual competence that the student showed. He was at the board, working through the problem, and he had begun his interaction with a winding description of the process necessary (as he imagined it) to solve the problem. It was clear that he did understand the problem and the necessary steps to solve it on some level (whether correct or not), but that level of understanding was not one that mattered.

 

I was surprised at the degree to which the use of mathematical language was framed as a choice on the part of the student. The teacher asked the student to use mathematical language with her. One math educator in our group spoke about students “getting away with fudging answers.” One researcher said that the correct terms “must be used,” and another commented that the lack of correct terms indicated that the student did “not have a proper understanding” of the material. All of this talk seems to belie the underlying truth: the student chose to use inexact language for a reason (whether social or epistemic).

 

The next video we watched showed a math teacher working through a problem. I was really struck by her lack of enthusiasm. I noticed her sighs, her lack of engagement with the students even when directly addressing them, and her tone when reading the problem from the textbook. Despite her apparent lack of enthusiasm, her mood appeared considerably brighter when she finished working through the problem. I found this interesting, because physics teachers usually report that their favorite part of their job is watching the students’ “a-ha!” moments. Maybe the rewards of technical problem solving are a motivator for both students and teachers alike? But the process of technical problem solving itself is rarely as motivating.

 

All of this leads me to one particularly interesting question. How do people in STEM learning settings distance themselves from the material? What discursive tools do they use? Who uses these discursive tools? And does the use of these tools change over time? I wonder in particular whether discursive distancing, which often parallels female discursive patterns, is more common among females than males as they progress through their education? Is there any kind of quantitative correlate to the use of discursive distancing? Is it more common among people who believe that they aren’t good at STEM? Is discursive distancing less common among people who pursue STEM careers? Is there a correlation between distancing and test scores?

 

Awkwardness in STEM education is fertile ground for qualitative researchers. To what extent is the learning or solving process emphasized and to what extent is the answer valued above all else? How is mathematical language socialized? The process of solving technical problems is a messy and uncomfortable one. It rarely goes smoothly, and in fact challenges often lead to more challenges. The process of trying and failing or trying and learning is not a sexy or attractive one, and there is rampant concern that focusing on the process of learning robs students of the ability to demonstrate their knowledge in a way that matters to people who speak the traditional languages of math and science.

 

We spoke a little about the phenomenon of connected math. It sounds very closely parallel to project-based learning initiatives in physics. I was left wondering why such a similar teaching process could be valued as a teaching tool for all students in one field and relegated to a teaching tool for struggling students in a neighboring field. I wonder about the similarities and differences between the outcomes of these methods. Much of this may rest on politics, and I suspect that the politics are rooted in deeply held and less questioned beliefs about STEM education.

 

STEM education initiatives have grown quite a bit in recent years, and it’s clear that there is quite a bit of interesting research left to be done.

Upcoming DC Event: Online Research Offline Lunch

ETA: Registration for this event is now CLOSED. If you have already signed up, you will receive a confirmation e-mail shortly. Any sign-ups after this date will be stored as a contact list for any future events. Thank you for your interest! We’re excited to gather with such a diverse and interesting group.

—–

Are you in or near the DC area? Come join us!

Although DC is a great meeting place for specific areas of online research, there are few opportunities for interdisciplinary gatherings of professionals and academics. This lunch will provide an informal opportunity for a diverse set of online researchers to listen and talk respectfully about our interests and our work and to see our endeavors from new, valuable perspectives.

Date & Time: August 6, 2013, 12:30 p.m.

Location: Near Gallery Place or Metro Center. Once we have a rough headcount, we’ll choose an appropriate location. (Feel free to suggest a place!)

Please RSVP using this form:

Spam, Personal histories and Language competencies

Over the recent holiday, I spent some time sorting through many boxes of family memorabilia. Some of you have probably done this with your families. It is fascinating, sentimental and mind-boggling. Highlights include both the things that strike a chord and things that can be thrown away. It’s a balance of efficiency and sap.

 

I’m always amazed by the way family memorabilia tells both private, personal histories and larger public ones. The boxes I dealt with last week were my mom’s, and her passion was politics. Even the Christmas cards she saved give pieces of political histories. Old thank you cards provide unknown nuggets of political strategy. She had even saved stirrers and plastic cups from an inauguration!

 

Campaign button found in the family files

 

 

My mom continued to work in politics throughout her life, but the work that she did more recently is understandably fresher and more tangible for me. I remember looking through printed Christmas cards from politicians and wondering why she held on to them. In her later years I worried about her tendency to hold on to mail merged political letters. I wondered if her tendency to personalize impersonal documents made her vulnerable to fraud. To me, her belief in these documents made no sense.

 

Flash forward one year to me sorting through boxes of handwritten letters from politicians that mirror the spam she held on to. For many years she received handwritten letters from elected politicians in Washington. At some point, the handwritten letters evolved into typed letters that were hand-corrected and included handwritten sections. These evolved into typed letters on which the only handwriting was the signature. Eventually, even the signatures became printed. But the intention and function of these letters remained the same, even as their typography evolved. She believed in these letters because she had been receiving them for many decades. She believed they were personal because she had seen more of them that were personal than not. The phrases that I believe to be formulaic and spammy were once handwritten, intentional, personal and probably even heartfelt.

 

 

There are a few directions I could go from here:

 

– I better understand why older people complain about the impersonalization of modern society and wax poetic about the old letter writing tradition. I could include a few anecdotes about older family members.

 

– I’m amazed that people would take the time to write long letters using handwriting that may never have been deciphered

 

– I could wax poetic about some of the cool things I found in the storage facility

 

 

But I won’t. Not in this blog. Instead, I’ll talk about competencies.

 

Spam is a manifestation of language competencies, although we often dismiss it as a total lack of language competence. In my linguistics studies, we were quickly taught the mantra “difference, not deficiency.” In fact, it takes quite a bit of skill to develop spam letters. In survey research, the survey invitation letters that people so often dismiss have been heavily researched and optimized to yield a maximum response rate. In his book The Sociolinguistics of Globalization, Jan Blommaert details the many competencies necessary to create the Nigerian bank scam letters that were so heavily circulated a few years ago. And now I’ve learned that the political letters that I’m so quick to dismiss as thoughtless mail merges are actually part of a deep tradition of political action. Will that be enough for me to hold on to them? No. But I am saving the handwritten stuff. Boxes and boxes of it!

 

 

One day last week, as I drove to the storage facility, I heard an interview with Michael Pollan about food literacy. Pollan’s point was that the food deserts in some urban areas are not just a function of access (food deserts are areas where fresh food is difficult to obtain and grocery stores are few and far between, if they’re available at all). Pollan believes that even if there were grocery stores available, the people in these neighborhoods would lack the basic cooking skills to prepare the food. He cited a few basic cooking skills which are not basic to me (partly because I’m a vegetarian, and partly because of the cooking traditions I learned from) as part of his argument.

 

As a linguist, it is very interesting to hear the baggage that people attach to language metaphorically carried over to food (“food illiteracy”). I wonder what value the “difference, not deficiency” mantra holds here. I’m not ready to believe that people living in food deserts are indeed kitchen illiterate. But I wouldn’t hesitate to agree that their food cultures probably differ significantly from Pollan’s. The basic staples and cooking methods probably differ significantly. Pollan could probably make a lot more headway with his cause if, instead of assuming that the people he is trying to help lack any basic cooking skills, he advocated toward a culture change that included access, attainability, and the potential to learn different practical cooking skills. It’s a subtle shift, but an important one.

 

As a proud uncook, I’m a huge fan of any kind of food preparation that is two steps or less, cheap, easy and fresh. Fast food for me involves putting a sweet potato in the microwave and pressing “potato,” grabbing for an apple or carrots and peanut butter, or tossing chickpeas into a dressing. Slow food involves the basic sautéing, roasting, etc. that Pollan advocates. I imagine that the skills he advocates are more practical and enjoyable for him than they are for people like me, whose mealtimes are usually limited and chaotic. What he calls basic is impractical for many of us. And the differences in time and money involved in uncooking and “basics” add up quickly.

 

 

 

So I’ve taken this post in quite a few directions, but it all comes together under one important point. Different language skills are not a lack of language skills altogether. Similarly, different survival skills are not a total lack of survival skills. We all carry unique skillsets that reflect our personal histories with those skills as well as the larger public histories that our personal histories help to compose. We, as people, are part of a larger public. The political spam I see doesn’t meet my expectations of valuable, personal communication, but it is in fact part of a rich political history. The people who Michael Pollan encounters have ways of feeding themselves that differ from Pollan’s expectations, but they are not without important survival skills. Cultural differences are not an indication of an underlying lack of culture.


 

The curse of the elevator speech

Yesterday I was involved in an innocent watercooler chat in which I was asked what Sociolinguistics is. This should be an easy enough question, because I just got a master’s degree in it. But it’s not. Sociolinguistics is a large field that means different things to different people. For every way of studying language, there are social and behavioral correlates that can also be studied. So a sociolinguist could focus on any number of linguistic areas, including phonology, syntax, semantics, or, in my case, discourse. My studies focus on the ways in which people use language, and the units of analysis in my studies are above the sentence level. Because Linguistics is such a large and siloed field, explaining Sociolinguistics through the lens of discourse analysis feels a bit like explaining vegetarianism through a pescatarian lens. The real vegetarians and the real linguists would balk.

There was a follow-up question at the water cooler about y’all. “Is it a Southern thing?” My answer to this was so admittedly lame that I’ve been trying to think of a better one (sometimes even the most casual conversations linger, don’t they?).

My favorite quote of this past semester was from Jan Blommaert: “Language reflects a life, and not just a birth, and it is a life that is lived in a real sociocultural, historical and political space.” Y’all has long been considered a southernism, but when I think back to my own experience with it, it was never about southern language or southern identity. One big clue to this is that I do sometimes use y’all, but I don’t use other southern language features along with it.

If I wanted to further investigate y’all from a sociolinguistic perspective, I would take language samples, either from one or a variety of speakers (and this sampling would have clear, meaningful consequences) and track the uses of y’all to see when it was invoked and what function it serves when invoked. My best, uninformed guess is that it does relational work and invokes registers that are more casual and nonthreatening. But without data, that is nothing but an uninformed guess.
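
If I did collect those samples, my first pass would probably be a simple keyword-in-context concordance to see each use of y’all in its surroundings. Here is a minimal sketch (my own illustration, assuming a plain-text sample):

```python
# Minimal keyword-in-context (KWIC) sketch for tracking "y'all" in a
# plain-text sample -- an illustration, not a full discourse analysis.
import re

def kwic(text, keyword=r"y'all", window=40):
    """Print each occurrence of the keyword with surrounding context."""
    for m in re.finditer(keyword, text, flags=re.IGNORECASE):
        left = text[max(0, m.start() - window):m.start()]
        right = text[m.end():m.end() + window]
        print(f"...{left}[{m.group()}]{right}...")

kwic("Hey y'all, are y'all coming over later?")
```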

This work has likely been done before. It would be interesting to see.
(ETA: Here is an example of this kind of work in action, by Barbara Johnstone)

Revisiting Latino/a identity using Census data

On April 10, I attended a talk by Jennifer Leeman (Research Sociolinguist @Census and Assistant Professor @George Mason) entitled “Spanish and Latino/a identity in the US Census.” This was a great talk. I’ll include the abstract below, but here are some of her main points:

  • Census categories promote and legitimize certain understandings, particularly because the Census, as a tool of the government, has an appearance of neutrality
  • Census must use categories from OMB
  • The distinction between race and ethnicity is fuzzy and full of history
      • In the past, this category has been measured by surname, mother tongue, birthplace
      • Treated as hereditary (“perpetual foreigner” status)
      • Self-identification is new; before, the interviewer would judge and record
  • In the interview context, macro & micro meet
      • Macro-level demographic categories
      • Micro:
          • Interactional participant roles
          • Indexed through labels & structure
          • Ascribed vs claimed identities
  • The study: 117 telephone interviews in Spanish
      • 2 questions: ethnicity & race
      • Ethnicity includes Hispano, Latino, Español
          • Intended as synonyms but treated as a choice by respondents
          • Different categories than in English (adaptive design at work!)
  • The interviewers played a big role in the elicitation
      • Some interviewers emphasized standardization
          • This method functions differently in different conversational contexts
      • Some interviewers provided “teaching moments” or on-the-fly definitions
          • Official discourses mediated through interviewer ideologies
          • Definitions vary
  • Race question also problematic
      • Different conceptions of Indioamericana
          • Central, South or North American?
  • Role of language
      • Assumption of monolinguality is problematic; bilingual and multilingual speakers are quite common, with partial and mixed language resources
      • “White” spoken in English is different from “white” spoken in Spanish
      • Length of time in country and generation in country belie fluid borders
  • Coding process
      • Coding responses such as “American, born here”
      • ~40% of Latino respondents say “other”
      • “Other” category is ~90% Hispanic (after recoding)
  • So:
      • Likely result: one “check all that apply” question
          • People don’t read help texts
      • Inherent belief that there is an ideal question out there with “all the right categories”
          • Leeman is not yet ready to believe this
      • The takeaway for survey researchers:
          • Carefully consider what you’re asking, how you’re asking it and what information you’re trying to collect
  • See also the Pew Hispanic Center report on Latino/a identity

 

 

 ABSTRACT

Censuses play a crucial role in the institutionalization and circulation of specific constructions of national identity, national belonging, and social difference, and they are a key site for the production and institutionalization of racial discourse (Anderson 1991; Kertzer & Arel 2002; Nobles 2000; Urla 1994).  With the recent growth in the Latina/o population, there has been increased interest in the official construction of the “Hispanic/Latino/Spanish origin” category (e.g., Rodriguez 2000; Rumbaut 2006; Haney López 2005).  However, the role of language in ethnoracial classification has been largely overlooked (Leeman 2004). So too, little attention has been paid to the processes by which the official classifications become public understandings of ethnoracial difference, or to the ways in which immigrants are interpellated into new racial subjectivities.

This presentation addresses these gaps by examining the ideological role of Spanish in the history of the US Census Bureau’s classifications of Latina/os as well as in the official construction of the current “Hispanic/Latino/Spanish origin” category. Further, in order to gain a better understanding of the role of census-taking in the production of new subjectivities, I analyze Spanish-language telephone interviews conducted as part of Census 2010. Insights from recent sociocultural research on language and identity (Bucholtz and Hall 2005) inform my analysis of how racial identities are instantiated and negotiated, and how respondents alternatively resist and take up the identities ascribed to them.

* Dr. Leeman is a Department of Spanish & Portuguese Graduate (GSAS 2000).