Reflections and Notes from the Sentiment Analysis Symposium #SAS14

The Sentiment Analysis Symposium took place in NY this week in the beautiful offices of the New York Academy of Sciences. The Symposium was framed as a transition into a new era of sentiment analysis, an era of human analytics or humetrics.

The view from the New York Academy of Sciences is really stunning!


Two main points struck me during the event. One is that context is extremely important for developing high-quality analytics, but the actual shape that “context” takes varies greatly. The second is a seeming disconnect between the product developers, who are eagerly developing new and better measures, and the customers, who want better usability, more customer support, more customized metrics that fit their preexisting analytic frameworks, and a better understanding of why social media analysis is worth their time, effort and money.

Below is a summary of some of the key points. My detailed notes from each of the speakers can be viewed here. I attended both the more technical Technology and Innovation Session and the Symposium itself.

Context is in. But what is context?

The big takeaway from the Technology and Innovation session, which was then carried into the second day of the Sentiment Analysis Symposium, was that context is important. But context was defined in a number of different ways.

 

New measures are coming, and old measures are improving.

The innovative new strategies presented at the Symposium made for really amazing presentations. New measures include voice intonation, facial expressions via remote video connections, measures of galvanic skin response, self-tagged sentiment data from social media sharing sites, a variety of measures from people who have embraced the “quantified self” movement, metadata from cellphone connections (including location, etc.), behavioral patterning on the individual and group level, and quite a bit of network analysis. Some speakers showcased systems that involved a variety of linked data or highly visual analytic components. Each of these measures increases the accuracy of preexisting measures and complicates their implementation, bringing new sets of challenges to the industry.


Here is a networked representation of the emotion transition dynamics of ‘Hopeful’

This software package is calculating emotional reactions to a YouTube video that is both funny and mean


Meanwhile, traditional text-based sentiment analyses are also improving. Both the quality of machine learning algorithms and the quality of rule-based systems are improving quickly. New strategies include looking at text data pragmatically (e.g., what are common linguistic patterns in specific goal-directed behavior strategies?), gaining domain-level specificity, adding steps for genre detection to increase accuracy, and looking across languages. New analytic strategies are integrated into algorithms, and complementary suites of algorithms are implemented as ensembles. Multilingual analysis is a particular challenge for ML techniques, but it can be achieved with a high degree of accuracy using rule-based techniques. The attendees appeared to agree that rule-based systems are much more accurate than machine learning algorithms, but the time and expertise they require has caused them to fall out of vogue.
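To make the ensemble idea a little more concrete, here is a minimal sketch, assuming a toy lexicon, toy training data and an arbitrary 50/50 blend weight, of how a hand-written rule score might be combined with a machine-learned classifier. It illustrates the general approach rather than any vendor's actual pipeline.

```python
# Minimal sketch of a rule-based + machine-learning ensemble for sentiment scoring.
# The lexicon, training examples and blend weight are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-written lexicon; a real rule-based system would be far larger and
# would handle negation, intensifiers, domain terms, etc.
POSITIVE = {"love", "great", "excellent", "happy"}
NEGATIVE = {"hate", "terrible", "awful", "angry"}

def rule_score(text: str) -> float:
    """Score in [-1, 1] from simple lexicon lookups."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

# Toy training data for the statistical component (1 = positive, 0 = negative).
train_texts = ["I love this product", "great service and support",
               "this is terrible", "awful experience, very angry"]
train_labels = [1, 1, 0, 0]

ml_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
ml_model.fit(train_texts, train_labels)

def ensemble_score(text: str, rule_weight: float = 0.5) -> float:
    """Blend the rule score with the classifier's probability of 'positive'."""
    ml_positive = ml_model.predict_proba([text])[0][1]
    ml_score = 2 * ml_positive - 1          # map probability to [-1, 1]
    return rule_weight * rule_score(text) + (1 - rule_weight) * ml_score

print(ensemble_score("great support but awful interface"))
```

In practice each component would be tuned to a domain and genre, and the blend would be validated against human-labeled data rather than fixed at 0.5.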

 

“The industry as a whole needs to grow up”

I suspect that Chris Boudreaux of Accenture shocked the room when he said “the industry as a whole really needs to grow up.” Speaking off the cuff, without his slides after a mishap and adventure, Boudreaux gave the customer point of view toward social media analytics. He said that social media analysis needs to be more reliable, accessible, actionable and dependable. Companies need to move past the startup phase to a new phase of accountability. Tools need to integrate into preexisting analytic structures and metrics, to be accessible to customers who are not experts, and to come better supported.

Boudreaux spoke of the need for social media companies to better understand their customers. Instead of being marketed to the wider base of potential customers, the tools seem to be developed and marketed solely to market researchers. This has led to more rapid adoption among the market research community and a general skepticism or ambivalence across other industries, which don’t see how using these tools would benefit them.

Companies that truly value and want to expand their customer base will focus on the usability of their dashboards, an area ripe for a growing legion of usability experts and usability testing. These dashboards cannot restrict API access and understanding to data science experts. Such companies will develop, market and support their dashboards through productive partnerships with their customers, generating measures that are specifically relevant to each customer and personalized dashboards that fit into preexisting metrics and are easy to understand and act on in a practical, personalized way.

Some companies have already started to work with their customers in more productive ways. Crimson Hexagon, for example, employs people who specialize in using their dashboard. These employees work with customers to better understand and support their use of the platform and run studies of their own using the platform, becoming an internal element in the quality feedback loop.

 

Less Traditional Fields for Social Media Analysis:

There was a wide spread of fields represented at the Symposium. I spoke with someone involved in text analysis for legal reasons, including jury analyses. I saw an NYPD name tag. Financial services were well represented. Publishing houses were present. Some health related organizations were present, including neuroscience specialists, medical practitioners interested in predicting early symptoms of diseases like Alzheimer’s, medical specialists interested in helping improve the lives of people with diseases like Autism (e.g. with facial emotion recognition devices), pharmaceutical companies interested in understanding medical literature on a massive scale as well as patient conversation about prescriptions and participation in medical trials. There were traditional market research firms, and many new startups with a wide variety of focuses and functions. There were also established technology companies (e.g. IBM and Dell) with innovation wings and many academic departments. I’m sure I’ve missed many of the entities present or following remotely.

The better research providers understand the potential breadth of applications of their research, the better they can address the specific areas of interest to these communities.

 

Rethinking the Public Image of Sentiment Analysis:

There was some concern that “social” is beginning to have too much baggage to be an attractive label, causing people to think immediately of top platforms such as Facebook and Twitter and belying the true breadth of the industry. This prompted a movement toward other terms at the symposium, including human analytics, humetrics, and measures of human engagement.

 

Accuracy

Accuracy tops out at about 80%, because that’s the limit of inter-rater reliability in sentiment analysis. Understanding the more difficult data is an important challenge for social media analysts. It is important to be honest with customers and with each other about the areas where automated tagging fails. This particular area was a kind of elephant in the room: always present, but rarely mentioned.
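Since that ceiling comes from humans disagreeing with one another, a quick way to see it is to compare two annotators’ labels directly: raw agreement is roughly the best score a model evaluated against either annotator can hope for, and Cohen’s kappa shows how much of that agreement is better than chance. The labels below are invented purely for illustration.

```python
# Toy illustration: inter-annotator agreement as a ceiling on automated accuracy.
# The two label sequences are invented; real annotation sets are much larger.
from collections import Counter

annotator_a = ["pos", "neg", "neu", "pos", "neg", "pos", "neu", "neg", "pos", "neu"]
annotator_b = ["pos", "neg", "pos", "pos", "neu", "pos", "neu", "neg", "neg", "neu"]
n = len(annotator_a)

# Observed agreement: the practical ceiling for a model scored against either annotator.
observed = sum(a == b for a, b in zip(annotator_a, annotator_b)) / n

# Cohen's kappa corrects observed agreement for agreement expected by chance.
counts_a, counts_b = Counter(annotator_a), Counter(annotator_b)
labels = set(annotator_a) | set(annotator_b)
expected = sum(counts_a[label] * counts_b[label] for label in labels) / (n * n)
kappa = (observed - expected) / (1 - expected)

print(f"observed agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```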

Although an 80% accuracy rate is really fantastic compared to no measure at all, and it is an amazing accomplishment given the financial constraints that analysts encounter, it is not an accuracy rate that works across industries and sectors. It is important to consider the “fitness for use” of an analysis. For some industries, an error is not a big deal: if a company is able to respond to 80% of the tweets directed at it in real time, it is doing quite well. But when real people or weightier consequences are involved, this kind of error rate is blatantly unacceptable. These are the areas where human involvement in the analysis is absolutely critical. Where, honestly speaking, are algorithms performing fantastically, and where are they falling short? In the areas where they fall short, human experts should be deployed, adding behavioral and linguistic insight to the analysis.

One excellent example of Fitness for Use was the presentation by Capital Market Exchange. This company operationalizes sentiment as expert opinion. They mine a variety of sources for expert opinions about investing, and then format the commonalities in an actionable way, leading to a substantial improvement over market performance for their investors. By valuing the preexisting knowledge structures in their industry, they have gained a degree of market traction that pure sentiment analysts have not.

 

Targeting the weaknesses

It is important that the field look carefully at the areas where algorithms do and do not work. The areas where they don’t work represent whole fields of study, many of which have legions of social media analysts at the ready. This includes less traditional areas of linguistics, such as Sociolinguistics, Conversation Analysis (e.g. looking at expected pair parts) and Discourse Analysis (e.g. understanding identity construction), as well as Ethnography (with fast growing subfields, such as Netnography), Psychology and Behavioral Economics. It is time to think strategically to better understand the data from new perspectives, and time to more seriously evaluate and invest in neutral responses.

 

Summing Up

Social media data analysis, large scale text analysis and sentiment analysis have enjoyed a kind of honeymoon period. With so many new and fast growing data sources, a plethora of growing needs and applications, and a competitive and fast growing set of analytic strategies, the field has been growing at an astronomical rate. But this excitement has to be balanced out with the practical needs of the marketplace. It is time for growing technologies to better listen to and accommodate the needs of the customer base. This shift will help ensure the viability of the field and free developers up to embrace the spirit of intellectual creativity.

This is an exciting time for a fast growing field!

Thank you to Seth Grimes for organizing such a great event.

 


Planning another Online Research, Offline lunch

I’m planning another Online Research, Offline lunch for researchers in the Washington DC area later this month. The specific date and location are TBA, but it will be toward the end of February near Metro Center.

These lunches are designed to welcome professionals and students involved in online research across a variety of disciplines, fields and sectors. Past attendees have had a wide array of interests and specialties, including usability and interface design, data science, natural language processing, social network analysis, social media monitoring, discourse analysis, netnography, digital humanities and library science.

The goal of this series is to provide an informal venue where a diverse set of researchers can talk with each other and gain a wider context for understanding their work. The lunches are a flexible way for researchers to meet each other, talk and learn. Although Washington DC is a great meeting place for specific areas of online research, there are few informal opportunities for interdisciplinary gatherings of professionals and academics.

Here is a form that can be used to add new people to the list. If you’re already on the list you do not need to sign up again. Please feel free to share the form with anyone else who may be interested:

Storytelling about the Past and Predicting the Future: On People, Computers and Research in 2014 and Beyond

My Grandma was a force to be reckoned with. My grandfather was a writer, and he described her driving down the street amidst symphonies. She was beautiful and stubborn, strong-willed and sharp. Once a young woman with the good looks of a model, she wore high heels and took daily trips to the gym well into her 90s. At the age of 94 she managed to run across her house, turn off the water and stand with her hand on her hip in front of the shower before I returned from the next room over with the shampoo I had forgotten (lest I waste water).

My Grandma, looking amazing


A few years ago I visited her in Florida. She collected work for all of her visitors to do, and we were busy from the moment I arrived. To my surprise, many of the tasks she had gathered involved dealing with customer service and discovering the truth in advertisements. At one point she led me into the local pharmacy with a stack of papers and asked to see the manager. Once she found the manager she began to go through the papers one by one and ask about them. The first paper on the stack was about the Magic Jack. He showed her the package, and she questioned him in depth about how it worked. I was shocked. I’d never thought of a store manager in this role before.

After that trip I began to pay closer attention to the ways in which the people around me dealt with customer service, and I became a kind of customer service liaison for my family. My older family members had an expectation that any customer service agent be both extensively knowledgeable and dependably respectful, but the problems of customer service seemed to have grown beyond this small, personable level to a point where a large network of people with structurally different areas of knowledge act together to form a question answering system. The amount and structure of knowledge necessary has become the focus of the customer service problem, and people everywhere complain about the lack of knowledge, ability and pleasant attitude of the customer service agents they encounter.

This is a problem with many layers and levels to it, and it is a problem that reflects the developing data science industry well. In order to deliver good customer service, a great deal of information has to be organized and structured in a meaningful way to allow for optimal extraction. But this layer cannot be everything. The customer service interaction itself needs to be set up in such a way as to allow customers to feel satisfied. People expect personalized, accurate interactions that are structured in a way that is intuitive to them. The customer service experience cannot be the domain of the data scientists. If it is automated, it requires usability experts to develop and test systems that are intuitive and easy to use. If it is done by people, the people need to have access to the expertise necessary for them to do their job and be trained in successful interpersonal interaction. I believe that this whole system could be integrated well under a single goal: to provide timely and direct answers to customer inquiries in three steps or fewer.

The past few years have brought a rapid increase in customization. We have learned to expect the information around us to be customized, curated and preprocessed. We expect customer service to know intuitively what our problems are and answer them with ease. We expect Facebook to know what we want to see and customize our streams appropriately. We expect news sites to be structured to reflect the way we use them. This increase in demand and expectations is the drive behind our hunger for data science, and it will fuel a boom in data and information science positions until we have a ubiquitous underlayer of organized information across all necessary domains.

But data and information science are new fields and not well understood. Our expectations as users exceed the abilities of this fast-evolving field. We attract pioneers who are willing to step into a field that is changing shape beneath their feet as they work. But we ask for and expect too much of a result, because these pioneers can’t be everything across all fields. They are an important structural layer of our newly unfolding economy, but in each case another layer of people is needed in order to achieve the end result.

Usability is an important step above the data and information science layer. Through usability studies, Facebook will eventually learn that people and goals are not constant across all visits. Sometimes I look at Facebook simply to see if I’ve missed any big developments in the lives of my friends and loved ones. Sometimes I want to catch news. Sometimes I’m bored and looking for ridiculous stuff to entertain me. Sometimes I have my daughter next to me and want to show her funny pet pictures that I normally wouldn’t look twice at. Through usability studies, Facebook will eventually learn that users need some control over the information presented to them when they visit.

Through usability studies, newspapers will better understand the important practice of headline scanning and develop pay models that work with people’s reading habits. Through qualitative research, newspapers will understand their importance as the originators of news about big events with few witnesses, like peace treaties and celebrity births and deaths, and the real value of social media for events with large numbers of witnesses and points of view. News media sources are deep in a period of transition in which they are learning to better understand dissemination, virality, clicks, page views, reader behavior and reader expectations, and the strengths and weaknesses of social media news sources.

There have been many blog posts (like this one) about Isaac Asimov’s predictions for the future, because he was so right about so many things. At this point we’re at a unique vantage point where his notions of machine programmers and machine tenders are taking deeper shape. This year we will continue to see these changes form and reform around us.

The data Rorschach test, or what does your research say about you?

Sure, there is an abundance of personality tests we could participate in: inkblot tests, standardized cognitive tests, magazine quizzes, etc. But we researchers participate in Rorschach tests of our own every day. There is a series of questions we ask as part of the research process, like:

What data do we want to collect or use? (What information is valuable to us? What do we call data?)

What format are we most comfortable with it in? (How clean does it have to be? How much error are we comfortable with? Does it have to resemble a spreadsheet? How will we reflect sources and transformations? What can we equate?)

What kind of analyses do we want to conduct? (This is usually a great time for our preexisting assumptions about our data to rear their heads. How often do we start by wondering if we can confirm our biases with data?!)

What results do we choose to report? To whom? How will we frame them?

If nothing else, our choices regarding our data reflect many of our values as well as our professional and academic experiences. If you’ve ever sat in on a research meeting, you know that “you want to do WHAT with which data?!” feeling that comes when someone suggests something that you had never considered.

Our choices also speak to the research methods that we are most comfortable with. Last night I attended a meetup event about Natural Language Processing, and it quickly became clear that the mathematician felt most comfortable when the data was transformed into numbers, the linguist felt most comfortable when the data was transformed into words and lexical units, and the programmer was most comfortable focusing on the program used to analyze the data. These three researchers confronted similar tasks, but their three different methods will yield very different results.

As humans, we have a tendency to make assumptions about the people around us, either by assuming that they are very different or very much the same. Those of you who have seen or experienced a marriage or serious long-term partnership up close are probably familiar with the surprised feeling we get when we realize that one partner thinks differently about something that we had always assumed they would not differ on. I remember, for example, that small feeling that my world was upside down just a little bit when I opened a drawer in the kitchen and saw spoons and forks together in the utensil organizer. It had simply never occurred to me that anyone would mix the two, especially not my own husband!

My main point here is not about my husband’s organizational philosophy. It’s about the different perspectives inherently tied up in the research process. It can be hard to step outside our own perspective enough to see what pieces of ourselves we’ve imposed on our research. But that awareness is an important element in the quality control process. Once we can see what we’ve done, we can think much more carefully about the strengths and weaknesses of our process. If you believe there is only one way, it may be time to take a step back and gain a wider perspective.

Statistical Text Analysis for Social Science: Learning to Extract International Relations from the News

I attended another great CLIP event today, Statistical Text Analysis for Social Science: Learning to Extract International Relations from the News, by Brendan O’Connor, CMU. I’d love to write it up, but I decided instead to share my notes. I hope they’re easy to follow. Please feel free to ask any follow-up questions!

 

Computational Social Science

– Then: the 1890 census tabulator, a hand-cranked punch card tabulator

– Now: automated text analysis

 

Goal: develop methods of predicting conflicts, etc.

– events = data

– extracting events from news stories

– information extraction from large scale news data

– goal: time series of country-country interactions

– who did what to whom? in what order?

Long history of manual coding of this kind of data for this kind of purpose

– more recently: rule based pattern extraction, TABARI

– → developing event types (diplomatic events, aggressions, …) from verb patterns

– TABARI: hand-engineered 15,000 coding patterns over the course of 2 decades → very difficult, validity issues, changes over time

– all developed by political scientists (Schrodt 1994, in MUCK (sp?) days)

– still a common poli sci methodology

– GDELT project: software, etc. w/ pre- & postprocessing

http://gdelt.utdallas.edu

– Sources: mainstream media news, English language, select sources

 

THIS research

– automatic learning of event types

– extract events/ political dynamics

→ use Bayesian probabilistic methods

– using social context to drive unsupervised learning about language

– data: Gigaword corpus (news articles) – a few extra sources (end result mostly AP articles)

– named entities- dictionary of country names

– news biases difficult to take into account (inherent complication of the dataset)(future research?)

– main verb based dependency path (so data is pos tagged & then sub/obj tagged)

– 3 components: source (acting country) / recipient (recipient country) / predicate (dependency path) (see the sketch below)

– loosely Dowty 1990

– International Relations (IR) is heavily concerned with reciprocity- that affects/shapes coding, goals, project dynamics (e.g. timing less important than order, frequency, symmetry)

– parsing- core NLP

– filters (e.g. Georgia country vs. Georgia state) (manual coding statements)

– analysis more focused on verb than object (e.g. text following “said that” excluded)

– 50% accuracy finding main verb (did I hear that right? ahhh pos taggers and their many joys…)

– verb: “reported that” – complicated: who is a valid source? reported events not necessarily verified events

– verb: “know that” another difficult verb
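As a rough sketch of the extraction components described above (country dictionary, main verb, source/recipient/predicate), here is what a simplified version might look like using spaCy as a stand-in for the speaker’s actual pipeline; the country list and the dependency-matching rules are simplified assumptions, and a real system would use the full dependency path rather than just the verb lemma.

```python
# Rough sketch of (source country, predicate, recipient country) extraction,
# using spaCy in place of the speaker's pipeline. The country dictionary and
# matching rules are illustrative; requires the en_core_web_sm model.
import spacy

nlp = spacy.load("en_core_web_sm")
COUNTRIES = {"Israel", "Palestine", "Russia", "Georgia", "China"}  # toy dictionary

def extract_events(text):
    """Yield (source, verb_lemma, recipient) triples for country pairs."""
    doc = nlp(text)
    for sent in doc.sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            sources = [c for c in token.children
                       if c.dep_ == "nsubj" and c.text in COUNTRIES]
            recipients = [t for t in token.subtree
                          if t.dep_ in ("dobj", "pobj") and t.text in COUNTRIES]
            for s in sources:
                for r in recipients:
                    yield (s.text, token.lemma_, r.text)

for event in extract_events("Russia criticized Georgia. Israel negotiated with Palestine."):
    print(event)
```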

 The models:

– dyads = country pairs

– each w/ timesteps

– for each country pair a time series

– deduping necessary for multiple news coverage (normalizing)

– more than one article cover a single event

– effect of this mitigated because measurement in the model focuses on the timing of events more than the number of events
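The dyad bookkeeping is straightforward to mock up. This toy sketch (not the speaker’s code) rolls extracted (source, recipient, verb, date) events into per-dyad monthly counts, with a naive dedup step standing in for the normalization of repeated news coverage.

```python
# Toy sketch: aggregate extracted events into per-dyad (country-pair) time series.
# The event tuples and the dedup rule (identical tuples collapse) are illustrative.
from collections import defaultdict
from datetime import date

events = [
    ("Israel", "Palestine", "negotiate", date(2009, 1, 5)),
    ("Israel", "Palestine", "negotiate", date(2009, 1, 5)),  # duplicate coverage
    ("Russia", "Georgia", "criticize", date(2009, 1, 6)),
]

unique_events = set(events)  # multiple articles covering one event collapse to one

# Count events per dyad per monthly timestep.
dyad_series = defaultdict(lambda: defaultdict(int))
for source, recipient, verb, day in unique_events:
    timestep = (day.year, day.month)
    dyad_series[(source, recipient)][timestep] += 1

for dyad, series in dyad_series.items():
    print(dyad, dict(series))
```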

1st model

– independent contexts

– time slices

– figure for expected frequency of events (talking most common, e.g.)

2nd model

– temporal smoothing: assumes a smoothness in event transitions

– possible to put coefficients that reflect common dynamics- what normally leads to what? (opportunity for more research)

– blocked Gibbs sampling

– learned event types

– positive valence

– negative valence

– “say” ← some noise

– clusters: verbal conflict, material conflict, war terms, …

How to evaluate?

– need more checks of reasonableness, more input from poli sci & international relations experts

– project end goal: do political sci

– one evaluative method: qualitative case study (face validity)

– used most common dyad Israeli: Palestinian

– event class over time

– e.g. diplomatic actions over time

– where are the spikes, what do they correspond with? (essentially precision & recall)

– another event class: police action & crime response

– Great point from audience on face validity: my model says x, then go to the data; you can’t develop labels from the data; labels should come from training data, not testing data

– Now let’s look at a small subset of words to go deeper

– semantic coherence?

– does it correlate with conflict?

– quantitative

– lexical scale evaluation

– compare against TABARI (lucky to have that as a comparison!!)

– another element in TABARI: expert assigned scale scores – very high or very low

– validity debatable, but it’s a comparison of sorts

– granularity invariance

– lexical scale impurity

Comparison sets

– wordnet – has synsets – some verb clusters

– wordnet is low performing, generic

– wordnet is a better bar than beating random clusters

– this model should perform better because of topic specificity

 

“Gold standard” method: rarely a real gold standard; often gold standards themselves are problematic

– in this case: militarized interstate dispute dataset (wow, lucky to have that, too!)

Looking into semi-supervision, to create a better model

 speaker website:

http://brenocon.com

 

Q&A:

developing a user model

– user testing

– evaluation from users & not participants or collaborators

– terror & protest more difficult linguistic problems

 

more complications to this project:

– Taiwan, Palestine, Hezbollah- diplomatic actors, but not countries per se

Upcoming DC Event: Online Research Offline Lunch

ETA: Registration for this event is now CLOSED. If you have already signed up, you will receive a confirmation e-mail shortly. Any sign-ups after this date will be stored as a contact list for any future events. Thank you for your interest! We’re excited to gather with such a diverse and interesting group.

—–

Are you in or near the DC area? Come join us!

Although DC is a great meeting place for specific areas of online research, there are few opportunities for interdisciplinary gatherings of professionals and academics. This lunch will provide an informal opportunity for a diverse set of online researchers to listen and talk respectfully about our interests and our work and to see our endeavors from new, valuable perspectives.

Date & Time: August 6, 2013, 12:30 p.m.

Location: Near Gallery Place or Metro Center. Once we have a rough headcount, we’ll choose an appropriate location. (Feel free to suggest a place!)

Please RSVP using this form:

Revisiting Latino/a identity using Census data

On April 10, I attended a talk by Jennifer Leeman (Research Sociolinguist @Census and Assistant Professor @George Mason) entitled “Spanish and Latino/a identity in the US Census.” This was a great talk. I’ll include the abstract below, but here are some of her main points:

  • Census categories promote and legitimize certain understandings, particularly because the Census, as a tool of the government, has an appearance of neutrality
  • Census must use categories from OMB
  • The distinction between race and ethnicity is fuzzy and full of history.
    o   In the past, this category has been measured by surname, mother tongue and birthplace
    o   Treated as hereditary (“perpetual foreigner” status)
    o   Self-identification is new; before, the interviewer would judge and record
  • In the interview context, macro & micro meet
    o   Macro-level demographic categories
    o   Micro:
      • Interactional participant roles
      • Indexed through labels & structure
      • Ascribed vs. claimed identities
  • The study: 117 telephone interviews in Spanish
    o   2 questions, ethnicity & race
    o   Ethnicity includes Hispano, Latino, Español
      • Intended as synonyms but treated as a choice by respondents
      • Different categories than English (adaptive design at work!)
  • The interviewers played a big role in the elicitation
    o   Some interviewers emphasized standardization
      • This method functions differently in different conversational contexts
    o   Some interviewers provided “teaching moments” or on-the-fly definitions
      • Official discourses mediated through interviewer ideologies
      • Definitions vary
  • The race question is also problematic
    o   Different conceptions of Indioamericana
      • Central, South or North American?
  • Role of language
    o   Assumption of monolinguality is problematic; bilingual and multilingual speakers are quite common, with partial and mixed language resources
    o   “White” spoken in English is different from “white” spoken in Spanish
    o   Length of time in country and generation in country belie fluid borders
  • Coding process
    o   Coding responses such as “American, born here”
    o   ~40% of Latinos say “other”
    o   The “other” category is ~90% Hispanic (after recoding)
  • So:
    o   Likely result: one “check all that apply” question
      • People don’t read help texts
    o   Inherent belief that there is an ideal question out there with “all the right categories”
      • Leeman is not yet ready to believe this
    o   The takeaway for survey researchers:
      • Carefully consider what you’re asking, how you’re asking it and what information you’re trying to collect
  • See also the Pew Hispanic Center report on Latino/a identity

 

 

 ABSTRACT

Censuses play a crucial role in the institutionalization and circulation of specific constructions of national identity, national belonging, and social difference, and they are a key site for the production and institutionalization of racial discourse (Anderson 1991; Kertzer & Arel 2002; Nobles 2000; Urla 1994).  With the recent growth in the Latina/o population, there has been increased interest in the official construction of the “Hispanic/Latino/Spanish origin” category (e.g., Rodriguez 2000; Rumbaut 2006; Haney López 2005).  However, the role of language in ethnoracial classification has been largely overlooked (Leeman 2004). So too, little attention has been paid to the processes by which the official classifications become public understandings of ethnoracial difference, or to the ways in which immigrants are interpellated into new racial subjectivities.

This presentation addresses these gaps by examining the ideological role of Spanish in the history of US Census Bureau’s classifications of Latina/os as well as in the official construction of the current “Hispanic/Latino/Spanish origin” category. Further, in order to gain a better understanding of the role of the census-taking in the production of new subjectivities, I analyze Spanish-language telephone interviews conducted as part of Census 2010.  Insights from recent sociocultural research on the language and identity (Bucholtz and Hall 2005) inform my analysis of how racial identities are instantiated and negotiated, and how respondents alternatively resist and take up the identities ascribed to them.

* Dr. Leeman is a Department of Spanish & Portuguese Graduate (GSAS 2000).

Instagram is changing the way I see

I recently joined Instagram (I’m late, I know).

I joined because my daughter wanted to, because her friends had, to see what it was all about. She is artistic, and we like to talk about things like color combinations and camera angles, so Instagram is a good fit for us. But it’s quickly changing the way I understand photography. I’ve always been able to set up a good shot, and I’ve always had an eye for color. But I’ve never seriously followed up on any of it. It didn’t take long on Instagram to learn that an eye for framing and color is not enough to make for anything more than accidental great shots. The great shots that I see are the ones that pick deeper patterns or unexpected contrasts out of seemingly ordinary surroundings. They don’t simply capture beauty, they capture an unexpected natural order or a surprising contrast, or they tell a story. They make you gasp or they make you wonder. They share a vision, a moment, an insight. They’re like the beginning paragraph of a novel or the sketch outline of a poem. Realizing that, I have learned that capturing the obvious beauty around me is not enough. To find the good shots, I’ll need to leave my comfort zone, to feel or notice differently, to wonder what or who belongs in a space and what or who doesn’t, and why any of it would capture anyone’s interest. It’s not enough to see a door. I have to wonder what’s behind it. To my surprise, Instagram has taught me how to think like a writer again, how to find hidden narratives, how to feel contrast again.


Sure this makes for a pretty picture. But what is unexpected about it? Who belongs in this space? Who doesn’t? What would catch your eye?

This kind of change has a great value, of course, for a social media researcher. The kinds of connections that people forge on social media, the different ways in which people use platforms and the ways in which platforms shape the way we interact with the world around us, both virtual and real, are vitally important elements in the research process. In order to create valid, useful research in social media, the methods and thinking of the researcher have to follow closely with the methods and thinking of the users. If your sensemaking process imitates the sensemaking process of the users, you know that you’re working in the right direction, but if you ignore the behaviors and goals of the users, you have likely missed the point altogether. (For example, if you think of Twitter hashtags simply as an organizational scheme, you’ve missed the strategic, ironic, insightful and often humorous ways in which people use hashtags. Or if you think that hashtags naturally fall into specific patterns, you’re missing their dialogic nature.)

My current research involves the cycle between social media and journalism, and it runs across platforms. I am asking questions like ‘what gets picked up by reporters and why?’ and ‘what is designed for reporters to pick up?’ Some of these questions lead me to examine the differences between funny memes that circulate like wildfire through Twitter, leading to trends and a wider stage, and the more in-depth conversation on public Facebook pages, which cannot trend as easily and is far less punchy and digestible. What role does each play in the political process and in constituting news?

Of course, my current research asks more questions than these, but it’s currently under construction. I’d rather not invite you into the workzone until some of the pulp and debris have been swept aside…

Still grappling with demographics

Last year I wrote about my changing perspective on demographic variables. My grappling has continued since then. I think of it as an academic puberty of sorts.

I remember the many crazy thought exercises I subjected myself to as a teenager, as I tried to forge my own set of beliefs and my own place in the world. I questioned everything. At times I was under so much construction that it was a wonder I functioned at all. Thankfully, I survived to enter my twenties intact. But lately I have been caught in a similar thought exercise of sorts, second guessing the use of sociological demographic variables in research.

Two sample projects mark two sides of the argument. One is a potential study of the climate for underrepresented faculty members in physics departments. In our exploration of this subject, the meaning of underrepresented was raised. Indeed, there are a number of ways in which a faculty member could be underrepresented or made uncomfortable: gender, race, ethnicity, accent, bodily differences or disabilities, sexual orientation, religion, … At some point, one could ask whether it matters which of these inspired prejudicial or different treatment, or whether the hostile climate is, in and of itself, important to note. Does it make sense to tick off which of a set of possible prejudices are stronger or weaker at a particular department? Or does it matter first that the uncomfortable climate exists, and that personal differences that should be professionally irrelevant are coming into professional play? One could argue that the climate should be the first phase of the study, and any demographics could be secondary. One might be particularly tempted to argue for this arrangement given the small sizes of the departments and the hesitation among many faculty members to supply information that could identify them personally.

If that were the only project on my mind, I might be tempted to take a more deconstructionist view of demographic variables altogether. But there is another project that I’m working on that argues against the deconstructionist view: the Global Survey of Physicists.

(Side note, or backstory: The global survey is kind of a pet project of mine, and it was the project that led me to grad school. Working on it involved coordinating survey design, translation and dissemination with representatives from over 100 countries. This was our first translation project. It began in English and was then translated into 7 additional languages. The translation process took almost a full year and was full of unexpected complications. Near the end of this phase, I attended a talk at the Bureau of Labor Statistics by Yuling Pan from Census. The talk was entitled ‘The Sociolinguistics of Survey Translation.’ I attended it never having heard of Sociolinguistics before. During the course of the talk, Yuling detailed and dissected experiences that paralleled my own into useful pieces and diagnosed and described some of the challenges I had encountered in detail. I was so impressed with her talk that I googled Sociolinguistics as soon as I returned to my office and discovered the MLC a few minutes later. One month later I was visiting Georgetown and working on my application for the MLC. I like to say it was like being swept off my feet and then engaging in a happy shotgun marriage.)

The Global Survey was designed to elicit gender differences in experiences, climate, resources and opportunities, as well as the effects of personal and family constraints and decisions on school and career. The survey worked particularly well, and each dive into the data proves fascinating. This week I delved deeper into the dynamics of one country and saw women’s sources of support erode as they progressed further into school and work, and saw the women transition from virtual parity in school to difficult careers, beginning with a significantly larger chance of having to choose a job because it was the only offer they received and becoming significantly worse with the introduction of kids. In fact, we found through this survey that kids tend to slow women’s careers and accelerate men’s!

What do these findings say about the use of demographic variables? They certainly validate their usefulness and cause me to wonder whether a lack of focus on demographics would lessen the usefulness of the faculty study. Here I’m reminded that it is important, when discussing demographic variables, to keep in mind that they are not arbitrary. They reflect ways of seeing that are deeply ingrained in society. Gender, for example, is the first thing to note about a baby, and it determines a great deal from that point on. Excluding race or ethnicity seems foolish, too, in a society that so deeply ingrains these distinctions.

The problem may be in the a priori or unconsidered application of demographic variables. All too often, the same tired set of variables is dredged up without first considering whether they would even provide a useful distinction or the most useful cuts of a dataset. A recent example of this is the study that garnered some press about racial differences in e-learning. From what I read of the study, all e-learning was collapsed into a single entity, an outcome or dependent variable (some kind of measure of the success of e-learning), and run against a set of traditional x’s, or independent variables, like race and socioeconomic status. In this case, I would have preferred to first see a deeper look into the mechanics of e-learning than a knee-jerk rush to the demographic variables. What kind of e-learning course was it? What kinds of interaction were fostered between the students and the teacher, the material and other students? So many experiences of e-learning were collapsed together, and differences in course types and learning environments make for more useful and actionable recommendations than demographics ever could.

In the case of the faculty and global surveys as well, one should ask what approaches to the data would yield the most useful analyses. Finding demographic differences leads to what: an awareness of discrimination? Discrimination is deep-seated and not easily cured. It is easy to document and difficult to fix. And yet more specific information about climate, resources and opportunities could be more useful or actionable. It helps to ask what we can achieve through our research. Are we simply validating or proving known societal differences, or are we working to create actionable recommendations? What are the most useful distinctions?

Most likely, if you take the time to carefully consider the information you collect, the usefulness of your analyses and the validity of your hypotheses, you are one step above anyone rotely applying demographic variables out of ill-considered habit. Kudos to you for that!

Total Survey Error: nanny to some, wise elder for some, strange parental friend for others

Total Survey Error and I are long-time acquaintances, just getting to know each other better. Looking at TSE is, for me, like looking at my work in survey research through a distorted mirror to an alternate universe. This week, I’ve spent some time closely reading Groves’ Past, Present and Future of Total Survey Error, and it provided some historical context for the framework, as well as an experienced account of its strengths and weaknesses.

Errors are an important area of study across many fields. Historically, models about error assumed that people didn’t really make errors often. Those attitudes are alive and well in many fields and workplaces today. Instead of carefully considering errors, they are often dismissed as indicators of incompetence. However, some workplaces are changing the way they approach errors. I did some collaborative research on medical errors in 2012 and was introduced to the term HRO or High-Reliability Organization. This is an error focused model of management that assumes that errors will be made, and not all errors can be anticipated. Therefore, every error should be embraced as a learning opportunity to build a better organizational framework.

From time to time, various members of our working group have been driven to create checklists for particular aspects of our work. In my experience, the checklists are very helpful for work that we do infrequently and virtually useless for work that we do daily. Writing a checklist for your daily work is a bit like writing instructions on how you brush your teeth and expecting to keep those instructions updated whenever you make a change of sorts. Undoubtedly, you’ll reread the instructions and wonder when you switched from a vertical to a circular motion for a given tooth. And yet there are so many important elements to our work, and so many areas where people could make less than ideal decisions (small or large). From this need rose Deming, with the first survey quality checklist. After Deming, a few other models arose. Eventually, TSE became the cumulative working framework or foundational framework for the field of survey research.

In my last blog, I spoke about the strangeness of coming across a foundational framework after working in the field without one. The framework is a conceptually important one, separating out sources of errors in ways that make shortcomings and strengths apparent and clarifying what is more or less known about a project.

But in practice, this model has not become the applied working model that its founders and biggest proponents expected it to be. This is for two reasons (that I’ll focus on), one of which Groves mentioned in some detail in this paper and one of which he barely touched on (but which likely drove him out of the field).

1. The framework has mathematical properties (a common rendering of the underlying error decomposition is sketched after this list), and this has led to its more intensive use on aspects of the survey process that are traditionally quantitative. TSE research in areas of sampling, coverage, response and aspects of analysis is quite common, but TSE research in other areas is much less common. In fact, many of the less quantifiable parts of the survey process are almost dismissed in favor of the more quantifiable parts. A survey with a particularly low TSE value could have huge underlying problems or be of minimal use once complete.
2. The framework doesn’t explicitly consider the human factors that govern research behind the scenes. Groves mentioned that the end users of the data are not deeply considered in the model, but neither are the other financial and personal (and personafinancial) constraints that govern much decision making. Ideally, the end goal of research is high quality research that yields a useful and relevant response at as low a cost as possible. In practice, however, the goal is both to keep costs low and to satisfy a system of interrelated (and often conflicting) personal or professional (personaprofessional?) interests. If the most influential of these interests are not particularly interested in (or appreciative of) the model, practitioners are highly unlikely to take the time to apply it.
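For readers who haven’t seen the framework written out, here is one common textbook-style rendering of the “mathematical properties” mentioned in point 1: the mean squared error of a survey estimate decomposes into accumulated biases and variances from each stage of the survey process. The notation is simplified and is not Groves’ exact formulation.

```latex
% Simplified, textbook-style TSE decomposition (not Groves' exact notation):
% squared total bias plus accumulated variance components.
\mathrm{MSE}(\hat{\theta}) =
  \bigl(B_{\text{coverage}} + B_{\text{sampling}} + B_{\text{nonresponse}}
        + B_{\text{measurement}} + B_{\text{processing}}\bigr)^{2}
  + \mathrm{Var}_{\text{sampling}} + \mathrm{Var}_{\text{measurement}}
  + \mathrm{Var}_{\text{nonresponse}}
```

The less quantifiable stages, such as questionnaire design, interviewer behavior and coding, rarely get well-estimated terms of their own, which is exactly the imbalance point 1 describes.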

Survey research requires very close attention to detail in order to minimize errors. It requires an intimate working knowledge of math and of computer programming. It also benefits from a knowledge of human behavior and the research environment. If I were to recommend any changes to the TSE model, I would recommend a bit more task-based detail, to incorporate more of the highly valued working knowledge that is often inherent and unspoken in the training of new researchers. I would also recommend more of an HRO orientation toward error, anticipating and embracing unexpected errors as a source of additions to the model. And I would recommend a deeper incorporation of the personal and financial constraints and the roles they play (clearly an easier change to introduce than to flesh out in any great detail!). Finally, I would recommend a shift of focus, away from the quantitative modeling aspects and toward the overall applicability and importance of a detailed, applied working model.

I’ve suggested before that survey research does not have a strong enough public face for the general public to understand or deeply value our work. A model that is better embraced by the field could form the basis of a public face, but the model would have to appeal to practitioners on a practical level. The question is: how do you get members of a well-established field, who have long been working within it and gaining expertise, to accept a framework that grew into a foundational piece independently of their work?