Statistical Text Analysis for Social Science: Learning to Extract International Relations from the News

I attended another great CLIP event today, Statistical Text Analysis for Social Science: Learning to Extract International Relations from the News, by Brendan O’Connor, CMU. I’d love to write it up, but I decided instead to share my notes. I hope they’re easy to follow. Please feel free to ask any follow-up questions!

 

Computational Social Science

– Then: the 1890 census tabulator – a hand-cranked punch card machine

– Now: automated text analysis

 

Goal: develop methods for predicting (and otherwise analyzing) conflicts

– events = data

– extracting events from news stories

– information extraction from large scale news data

– goal: time series of country-country interactions

– who did what to whom? in what order?

Long history of manual coding of this kind of data for this kind of purpose

– more recently: rule-based pattern extraction, e.g. TABARI

– → event types (diplomatic events, aggressions, …) are defined by verb patterns

– TABARI: ~15,000 hand-engineered coding patterns developed over two decades → very difficult; validity issues; changes over time

– all developed by political scientists (Schrodt 1994, back in the MUC days)

– still a common poli sci methodology

– GDELT project: software, etc., with pre- & post-processing

http://gdelt.utdallas.edu

– Sources: mainstream media news, English language, select sources

 

THIS research

– automatic learning of event types

– extract events/ political dynamics

→ use Bayesian probabilistic methods

– using social context to drive unsupervised learning about language

– data: Gigaword corpus (news articles) plus a few extra sources (end result: mostly AP articles)

– named entities: matched against a dictionary of country names

– news biases difficult to take into account (inherent complication of the dataset) (future research?)

– main-verb-based dependency paths (so data is POS-tagged, then subject/object-tagged)

– 3 components: source (acting country) / recipient (recipient country) / predicate (dependency path) (see the extraction sketch below)

– loosely Dowty 1990

– International Relations (IR) is heavily concerned with reciprocity – that affects/shapes coding, goals, project dynamics (e.g. timing less important than order, frequency, symmetry)

– parsing- core NLP

– filters (e.g. Georgia the country vs. Georgia the US state) (manual coding statements)

– analysis more focused on verb than object (e.g. text following “said that” excluded)

– 50% accuracy finding the main verb (did I hear that right? ahhh, POS taggers and their many joys…)

– verb: “reported that” – complicated: who is a valid source? reported events are not necessarily verified events

– verb: “know that” – another difficult verb
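To make the extraction pipeline above concrete, here is a minimal sketch in Python using spaCy’s dependency parser. The country dictionary, the use of the sentence root as the predicate, and the “first two countries = source and recipient” rule are all my simplifications for illustration, not the speaker’s actual system (which walks full dependency paths):

```python
# Sketch: extract (source, recipient, predicate) event tuples from news text.
# Assumes spaCy with an English model; COUNTRIES stands in for the talk's
# dictionary of country names.
import spacy

nlp = spacy.load("en_core_web_sm")
COUNTRIES = {"Israel", "Egypt", "Russia", "Georgia"}  # placeholder dictionary

def extract_events(text):
    events = []
    doc = nlp(text)
    for sent in doc.sents:
        # named entities that match the country dictionary
        actors = [ent for ent in sent.ents
                  if ent.label_ == "GPE" and ent.text in COUNTRIES]
        if len(actors) < 2:
            continue
        # crude role assignment: first mention = source, second = recipient
        source, recipient = actors[0], actors[1]
        # crude predicate: the sentence's root verb (the real system uses
        # the full dependency path between the two entities)
        root = sent.root
        if root.pos_ == "VERB":
            events.append((source.text, recipient.text, root.lemma_))
    return events

print(extract_events("Israel rejected a proposal from Egypt."))
# e.g. [('Israel', 'Egypt', 'reject')]
```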

 The models:

– dyads = country pairs

– each w/ timesteps

– for each country pair a time series

– deduping necessary to handle duplicate news coverage (normalizing)

– more than one article may cover a single event

– the effect of this is mitigated because measurement in the model focuses on the timing of events more than the number of events (see the data-structure sketch below)
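Here is a toy illustration of that dyad-by-timestep data structure in pandas; the column names, the weekly binning, and the crude dedup rule are my assumptions for the sketch, not the paper’s:

```python
# Sketch: turn extracted (source, recipient, predicate, date) tuples into
# one event-count time series per directed country pair (dyad).
import pandas as pd

events = pd.DataFrame([
    ("ISR", "PSE", "reject", "2003-01-02"),
    ("ISR", "PSE", "reject", "2003-01-02"),   # duplicate coverage of one event
    ("PSE", "ISR", "accuse", "2003-01-05"),
], columns=["source", "recipient", "predicate", "date"])
events["date"] = pd.to_datetime(events["date"])

# Crude dedup: identical (dyad, predicate, date) tuples collapse to one event.
events = events.drop_duplicates()

# One weekly time series per directed dyad.
series = (events
          .groupby(["source", "recipient",
                    pd.Grouper(key="date", freq="W")])
          .size()
          .rename("n_events"))
print(series)
```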

1st model

– independent contexts

– time slices

– figure showing expected frequency of events (talking is most common, e.g.)

2nd model

– temporal smoothing: assumes a smoothness in event transitions (see the simulation sketch at the end of this list)

– possible to add coefficients that reflect common dynamics – what normally leads to what? (opportunity for more research)

– blocked Gibbs sampling

– learned event types

– positive valence

– negative valence

– “say” ← some noise

– clusters: verbal conflict, material conflict, war terms, …
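A minimal simulation of the smoothness assumption in the second model: event-type proportions follow a logistic-normal random walk, so adjacent timesteps stay close to each other. This only illustrates the prior; the actual posterior inference in the talk used blocked Gibbs sampling:

```python
# Sketch: logistic-normal random walk over event-type proportions.
# Smoothness comes from each step's logits being centered on the previous
# step's logits (small innovation variance -> smooth time series).
import numpy as np

rng = np.random.default_rng(0)
K, T, sigma = 4, 10, 0.3          # event types, timesteps, innovation scale

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

eta = np.zeros(K)                 # logits at t = 0
for t in range(T):
    eta = eta + rng.normal(0, sigma, K)   # random-walk step
    theta = softmax(eta)                  # event-type proportions at time t
    print(t, np.round(theta, 2))
```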

How to evaluate?

– need more checks of reasonableness, more input from poli sci & international relations experts

– project end goal: do political science

– one evaluative method: qualitative case study (face validity)

– used the most common dyad: Israeli–Palestinian

– event class over time

– e.g. diplomatic actions over time

– where are the spikes, what do they correspond with? (essentially precision & recall)

– another event class: police action & crime response

– Great point from the audience on face validity: if my model says x and I then go to the data to check, I can’t develop labels from that same data – labels should come from training data, not testing data

– Now let’s look at a small subset of words to go deeper

– semantic coherence?

– does it correlate with conflict?

– quantitative

– lexical scale evaluation

– compare against TABARI (lucky to have that as a comparison!!)

– another element in TABARI: expert-assigned scale scores – very high or very low

– validity debatable, but it’s a comparison of sorts

– granularity invariance

– lexical scale impurity (sketch below)
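My reading of lexical scale impurity, sketched below: within each induced cluster, how far apart are the expert-assigned scale scores of the member verbs? The scores and clusters here are made up for illustration, and the exact formula in the paper may differ:

```python
# Sketch: lexical scale impurity as the mean absolute difference in expert
# scale scores between word pairs assigned to the same induced cluster.
from itertools import combinations

expert_score = {"praise": 3.4, "agree": 3.0,     # made-up scores, in the
                "accuse": -2.0, "attack": -9.0}  # spirit of TABARI's scale

clusters = [["praise", "agree"], ["accuse", "attack"]]

def impurity(clusters, scores):
    diffs = [abs(scores[a] - scores[b])
             for c in clusters
             for a, b in combinations(c, 2)]
    return sum(diffs) / len(diffs)

print(impurity(clusters, expert_score))  # (0.4 + 7.0) / 2 = 3.7; lower = purer
```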

Comparison sets

– WordNet – has synsets, which provide some verb clusters

– WordNet is low-performing, generic

– beating WordNet is a better bar than beating random clusters

– this model should perform better because of topic specificity

 

“Gold standard” method – rarely a real gold standard – often gold standards themselves are problematic

– in this case: the Militarized Interstate Dispute dataset (wow, lucky to have that, too!)

Looking into semi-supervision to create a better model

Speaker website:

http://brenocon.com

 

Q & A:

developing a user model

– user testing

– evaluation from users & not participants or collaborators

– terror & protest are more difficult linguistic problems

 

more complications to this project:

– Taiwan, Palestine, Hezbollah – diplomatic actors, but not countries per se

Planning a second “Online Research, Offline Lunch”

In August we hosted the first Online Research, Offline Lunch for researchers involved in online research in any field, discipline or sector in the DC area. Although Washington DC is a great meeting place for specific areas of online research, there are few opportunities for interdisciplinary gatherings of professionals and academics. These lunches provide an informal opportunity for a diverse set of online researchers to listen and talk respectfully about our interests and our work and to see our endeavors from new, valuable perspectives. We kept the first gathering small. But the enthusiasm for this small event was quite large, and it was a great success! We had interesting conversations, learned a lot, made some valuable connections, and promised to meet again.

Many expressed interest in the lunches but weren’t able to attend. If you have any specific scheduling requests, please let me know now. Although I certainly can’t accommodate everyone’s preferences, I will do my best to take them into account.

Here is a form that can be used to add new people to the list. If you’re already on the list you do not need to sign up again. Please feel free to share the form with anyone else who may be interested:

 

Data science can be pretty badass, but…

Every so often I’m reminded of the power of data science. Today I attended a talk entitled “Spatiotemporal Crime Prediction Using GPS & Time-tagged Tweets” by Matt Gerber of the UVA PTL. The talk was a UMD CLIP event (great events! Go if you can!).

Gerber began by introducing a few of the PTL projects, which include:

  • Developing automatic detection methods for extremist recruitment in the Dark Net
  • Turning medical knowledge from large bodies of unstructured texts into medical decision support models
  • Many other cool initiatives

He then introduced the research at hand: developing predictive models for criminal activity. The control model in this case used police report data from a given period of time to place incidents on a map of Chicago using latitude and longitude. He then superimposed a grid on the map and collapsed incidents down into a binary presence vs. absence model: each square in the grid would either have one or more crimes (1) or not have any crimes (-1). This was his training data. He built a binary classifier, used logistic regression to compute probabilities, and layered a kernel density estimator on top. He used this control model to compare against a model built from unstructured text. The unstructured text consisted of GPS-tagged Twitter data (roughly 3% of tweets) from the Chicago area. He drew the same grid using longitude and latitude coordinates and tossed all of the tweets from each “neighborhood” (during the same one-month training window) into the boxes. Then, treating each box as one document for a document-based classifier, he subjected each document to topic modeling (using LDA and MALLET). He focused on crime-related words and topics to build models to compare against the control models. He found that the predictive value of both models was similar when compared against actual crime reports from days within the subsequent month.
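Here is a rough sketch of that pipeline, under stated assumptions: tiny made-up data, scikit-learn’s LDA in place of MALLET, labels coded 1/0 rather than 1/-1, and a plain logistic regression on per-cell topic proportions. It shows the shape of the approach, not Gerber’s implementation:

```python
# Sketch: grid cells as documents -> topic features -> crime classifier.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# One "document" per grid cell: all tweets from that cell, concatenated.
cell_docs = [
    "party tonight downtown turnup club",
    "quiet evening reading tea garden",
    "fight outside the club police sirens",
    "farmers market flowers sunshine kids",
]
# 1 = at least one crime reported in the cell, 0 = none (the talk used -1).
crime_label = np.array([1, 0, 1, 0])

# Topic-model the cell documents (a stand-in for LDA via MALLET).
counts = CountVectorizer().fit_transform(cell_docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_features = lda.fit_transform(counts)   # per-cell topic proportions

# Binary classifier with probabilistic output, as in the talk.
clf = LogisticRegression().fit(topic_features, crime_label)
print(clf.predict_proba(topic_features)[:, 1])  # P(crime) per cell
```

A kernel density estimator (e.g. sklearn.neighbors.KernelDensity over incident coordinates) could then be layered on top to smooth the per-cell probabilities into a continuous risk surface, as described above.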

This is a basic model. The layering can be further refined and better understood (there was some discussion about the word “turnup,” for example). Many more interesting layers can be built into it in order to improve its predictive power, including more geographic features, population densities, some temporal modeling to accommodate the periodic nature of some crimes (e.g. most robberies happen during the work week, while people are away from their homes), a better accommodation for different types of crime, and a host of potential demographic and other variables.

I would love to dig deeper into this data to gain a deeper understanding of the conversation underlying the topic models. I imagine there is quite a wealth of deeper information to be gained as well as a deeper understanding of what kind of work the models are doing. It strikes me that each assumption and calculation has a heavy social load attached to it. Each variable and each layer that is built into the model and roots out correlations may be working to reinforce certain stereotypes and anoint them with the power of massive data. Some questions need to be asked. Who has access to the internet? What type of access? How are they using the internet? Are there substantive differences between tweets with and without geotagging? What varieties of language are the tweeters using? Do classifiers take into account language variation? Are the researchers simply building a big data model around the old “bad neighborhood” notions?

Data is powerful, and the predictive power of data is fascinating. Calculations like these raise questions in new ways, remixing old assumptions into new correlations. Let’s not forget to question new methods, put them into their wider sociocultural contexts and delve qualitatively into the data behind the analyses. Data science can be incredibly powerful and interesting, but it needs a qualitative and theoretical perspective to keep it rooted. I hope to see more, deeper interdisciplinary partnerships soon, working together to build powerful, grounded, and really interesting research!

 

Rethinking Digital Democracy- More reflections from #SMSociety13

What does digital democracy mean to you?

I presented this poster, Rethinking Digital Democracy v4, at the Social Media and Society conference last weekend, and it represented only one of many images of digital democracy.

Digital democracy was portrayed at this conference as:

having a voice in the local public square (Habermas)

making local leadership directly accountable to constituents

having a voice in an external public sphere via international media sources

coordinating or facilitating a large scale protest movement

the ability to generate observable political changes

political engagement and/or mobilization

a working partnership between citizenry, government and emergency responders in crisis situations

a systematic archive of government activity brought to the public eye: “Archives can shed light on the darker places of the national soul” (Wilson 2012)

One presenter had the most systematic representation of digital democracy. Regarding the recent elections in Nigeria, he summarized digital democracy this way: “social media brought socialization, mobilization, participation and legitimization to the Nigerian electoral process.”
Not surprisingly, different working definitions brought different measures. How do you know that you have achieved digital democracy? What constitutes effective or successful digital democracy? And what phenomena are worthy of study and emulation? The scope of these questions and answers varied greatly across the examples raised during the conference, which included:

citizens in the recent Nigerian election

citizens who tweet during a natural disaster or active crisis situation

citizens who changed the international media narrative regarding the recent Kenyan elections and ICC indictment

Arab Spring actions, activities and discussions
“The power of the people is greater than the people in power” – a perfect quote related to the Arab revolutions, from a slide by Mona Kasra

the recent Occupy movement in the US

tweets to, from and about the US congress

and many more that I wasn’t able to catch or follow…

In the end, I don’t have a suggestion for a working definition or measures, and my coverage here really only scratches the surface of the topic. But I do think that it’s helpful for people working in the area to be aware of the variety of events, people, working definitions and measures at play in wider discussions of digital democracy. Here are a few questions for researchers like us to ask ourselves:

What phenomenon are we studying?

How are people acting to affect their representation or governance?

Why do we think of it as an instance of digital democracy?

Who are “the people” in this case, and who is in a position of power?

What is our working definition of digital democracy?

Under that definition, what would constitute effective or successful participation? Is this measurable, codeable or a good fit for our data?

Is this a case of internal or external influence?

And, for fun, a few interesting areas of research:

There is a clear tension between the ground-up perception of the democratic process and the degree of cohesion necessary to effect change (e.g. Occupy & the anarchist framework)

Erving Goffman’s participant framework is also further ground for research in digital democracy (author/animator/principal <– think online petition and e-mail drives, for example, and the relationship between reworded messages, perceived efficacy and the reception that the e-mails receive).

It is clear that social media helps people have a voice and connect in ways that they haven’t always been able to. But this influence has yet to take any firm shape either among researchers or among those who are practicing or interested in digital democracy.

I found this tweet particularly apt, so I’d like to end on this note:

“Direct democracy is not going to replace representative government, but supplement and extend representation” #YES #SMSociety13

— Ray MacLeod (@RayMacLeod) September 14, 2013

 

 

Reflections on Social Network Analysis & Social Media Research from #SMSociety13

A dispatch from a quantitative side of social media research!

Here are a few of my reflections from the Social Media & Society conference in Halifax and my Social Network Analysis class.

I should first mention that I was lucky in two ways.

  1. I finished the James Bond movie ‘Skyfall’ as my last Air Canada flight was landing. (Ok, I didn’t have to mention that)
  2. I finished my online course on Social Network Analysis hours before leaving for a conference that kicked off with an excellent talk about networks and diffusion. And then on the second day of the conference I was able to manipulate a network visualization with my hands using a 96-inch touchscreen at the Dalhousie University Social Media Lab (a great lab, by the way, with some very interesting and freely available tools).

 


This picture doesn’t do this screen justice. This is *data heaven*

Social networks are networks built to describe human action in social media environments. They contain nodes (dots), which could represent people, usernames, objects, etc., and edges, lines joining nodes that represent some kind of relationship (friend, follower, contact, or a host of other quantitative measures). The course was a particularly great introduction to Social Network Analysis because it included a book that was clear and interesting, a set of YouTube videos, and a website, all of which were built to work together. The instructor (Dr. Jen Golbeck, also the author of the book and materials) has a unique interest in SNA, which gives the class an important added dimension. Her focus is on operational definitions and quantitative measures of trust, and because of this we were taught to carefully consider the role of edges and edge weights in our networks.

Sharad Goel’s plenary at #SMSociety13 was a very different look at networks. He questioned the common notion of viral diffusion online by looking at millions of cases of diffusion. He discovered that very few diffusions actually resemble any kind of viral model. Instead, most diffusion happens on a very small scale. He used Justin Bieber as an example. Bieber has the largest number of followers on Twitter, so when he posts something it has a very wide reach (“the Bieber effect”). However, people don’t share content as often as we imagine. In fact, only a very small proportion of his followers share it, and only a small proportion of their followers share it in turn. Overall, the path is wide and shallow, with fewer vertical layers than we had previously envisioned.

Goel’s research is an example of Big Data in action. He said that Big Data methods are important when the phenomenon you want to study happens very infrequently (e.g. one in a million), as is the case for actual instances of viral diffusion.

His conclusions were big, and this line of research is very informative and useful for anyone trying to communicate on a large scale.

Sidenote: the term ‘ego network’ came up quite a few times during the conference, but not everyone knew what an ego network is. An ego network begins with a single node and is measured by degrees. A one-degree social network looks a bit like an asterisk: it simply shows all of the nodes that are directly connected to the original node. A 1.5-degree network includes the first-degree connections as well as the connections between them. A two-degree network adds all of the nodes directly connected to those first-degree nodes, and so on. (See the sketch below.)
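A small illustration of those definitions using networkx. One wrinkle worth noting: nx.ego_graph returns an induced subgraph, so with radius=1 it already includes the alter-alter edges (i.e., the 1.5-degree network), and the plain one-degree “asterisk” has to be built by keeping only the ego-alter edges:

```python
# Sketch: 1, 1.5, and 2 degree ego networks around the node "ego".
import networkx as nx

G = nx.Graph([("ego", "a"), ("ego", "b"), ("a", "b"),
              ("b", "c"), ("c", "d")])

# 1.5-degree: ego + neighbors, including the edges among the neighbors.
one_half = nx.ego_graph(G, "ego", radius=1)

# 1-degree "asterisk": same nodes, but only ego-alter edges kept.
star = nx.Graph((u, v) for u, v in one_half.edges() if "ego" in (u, v))

# 2-degree: everything within two hops, with all induced edges.
two = nx.ego_graph(G, "ego", radius=2)

print(sorted(star.edges()))      # ego-alter edges only
print(sorted(one_half.edges()))  # also includes the ('a', 'b') edge
print(sorted(two.nodes()))       # ['a', 'b', 'c', 'ego'] ('d' is 3 hops away)
```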

One common research strategy is to compare across ego networks.

My next post will move on from SNA to more qualitative aspects of the conference.


Source: https://twitter.com/JeffreyKeefer/status/378921564281921537/photo/1
This was the backdrop for a qualitative panel. It says “Every time you say ‘data driven decision’ a fairy dies.”

Upcoming DC Event: Online Research Offline Lunch

ETA: Registration for this event is now CLOSED. If you have already signed up, you will receive a confirmation e-mail shortly. Any sign-ups after this date will be stored as a contact list for any future events. Thank you for your interest! We’re excited to gather with such a diverse and interesting group.

—–

Are you in or near the DC area? Come join us!

Although DC is a great meeting place for specific areas of online research, there are few opportunities for interdisciplinary gatherings of professionals and academics. This lunch will provide an informal opportunity for a diverse set of online researchers to listen and talk respectfully about our interests and our work and to see our endeavors from new, valuable perspectives.

Date & Time: August 6, 2013, 12:30 p.m.

Location: Near Gallery Place or Metro Center. Once we have a rough headcount, we’ll choose an appropriate location. (Feel free to suggest a place!)

Please RSVP using this form:

Representativeness, qual & quant, and Big Data. Lost in translation?

My biggest challenge in coming from a quantitative background to a qualitative research program was representativeness. I came to class firmly rooted in the principle of Representativeness, and my classmates seemed not to have any idea why it mattered so much to me. Time after time I would get caught up in my data selection. I would pose the wider challenge of representativeness to a colleague, and they would ask “representative of what? why?”

 

In the survey research world, the researcher begins with a population of interest and finds a way to collect a representative sample of that population for study. In the qualitative world that accompanies survey research, units of analysis are generally people, and people are chosen for their representativeness. Representativeness is often constructed from demographic characteristics. If you’ve read this blog before, you know of my issues with demographics. Too often, demographic variables are a knee-jerk choice instead of better-considered variables that are more relevant to the analysis at hand. (Maybe the census collects gender and not program availability, for example, but just because a variable is available and somewhat correlated doesn’t mean that it is in fact a relevant variable, especially when the focus of study is a population for whom gender is such an integral societal difference.)

 

And yet I spent a whole semester studying 5 minutes of conversation between 4 people. What was that representative of? Nothing but itself. It couldn’t have been exchanged for any other 5 minutes of conversation. It was simply a conversation that this group had and forgot. But over the course of the semester, this piece of conversation taught me countless aspects of conversation research. Every time I delved back into the data, it became richer. It was my first step into the world of microanalysis, where I discovered that just about anything can be a rich dataset if you use it carefully. A snapshot of people at a lecture? Well, how are their bodies oriented? A snapshot of video? A treasure trove of gestures and facial expressions. A piece of graffiti? Semiotic analysis! It goes on. The world of microanalysis is built on the practice of layered noticing. It goes deeper than wide.

 

But what is it representative of? How could a conversation be representative? Would I need to collect more conversations, but restrict the participants? Collect conversations with more participants, but in similar contexts? How much or how many would be enough?

 

In the world of microanalysis, people and objects constantly create and recreate themselves. You consistently create and recreate yourself, but your recreations generally fall into a similar range that makes you different from your neighbors. There are big themes in small moments. But what are the small moments representative of? Themselves. Simply, plainly, nothing more and nothing else. Does that mean that they don’t matter? I would argue that there is no better way to understand the world around us in deep detail than through microanalysis. I would also argue that macroanalysis is an important part of discovering the wider patterns in the world around us.

 

Recently a NY Times blog post by Quentin Hardy has garnered quite a bit of attention.

Why Big Data is Not Truth: http://bits.blogs.nytimes.com/2013/06/01/why-big-data-is-not-truth/

This post really struck a chord with me, because I have had a hard time understanding Hardy’s complaint. Is big data truth? Is any data truth? All data is what it is: a collection of some sort, collected under a specific set of circumstances. Even data that we hope to be more representative has sampling and contextual limitations. Responsible analysts should always be upfront about what their data represents. Is big data less truthful than other kinds of data? It may be less representative than, say, a systematically collected political poll. But it is what it is: different data, collected under different circumstances in a different way. It shouldn’t be equated with other data that was collected differently. One true weakness of many large-scale analyses is blindness to the nature of the data, but that is a byproduct of the training algorithms that are used for much of the analysis. The algorithms need large training datasets, from anywhere, and these sets are often developed through massive web crawlers. Here, context gets dicey. How does a researcher represent the data properly when they have no idea what it is? Hopefully researchers in this context are wholly aware that, although their data has certain uses, it also has certain [huge] limitations.

 

I suspect that Hardy’s complaint is with the representations of massive datasets collected from webcrawlers as a complete truth from which any analyses could be run and all of the greater truths of the world could be revealed. On this note, Hardy is exactly right. Data simply is what it is, nothing more and nothing less. And any analysis that focuses on an unknown dataset is just that: an analysis without context. Which is not to say that all analyses need to be representative, but rather that all responsible analyses of good quality need to be self aware. If you do not know what the data represents and when and how it was collected, then you cannot begin to discuss the usefulness of any analysis of it.

What is the role of Ethnography and Microanalysis in Online Research?

There is a large disconnect in online research.

The largest, highest-profile, highest-value and most widely practiced side of online research was created out of high demand to analyze the large amount of consumer data that is constantly being created and is largely publicly available. This tremendous demand led to research methods that were created in relative haste. Math and programming skills thrived in a realm where social science barely made a whisper. The notion of atheoretical research grew. The level of programming and mathematical competence required to do this work grows higher every day, as the fields of data science and machine learning become continually more nuanced.

The lower-profile, lower-value and increasingly practiced side of online research is academic research. Turning academia toward online research has been like turning a massive ocean liner. For a while online research was not well respected. At this point it is increasingly well respected, thriving in a variety of fields in a much needed interdisciplinary way, and driven by a search for a better understanding of online behavior and better theories to drive analyses.

I see great value in the intersection between these areas. I imagine that the best programmers have a big appetite for any theory they can use to drive their work in useful and productive ways. But I don’t see this value coming to bear on the market. Hiring is almost universally focused on programmers and data scientists, and the microanalytic work that is done seems largely invisible to the larger entities out there.

It is common to consider quantitative and qualitative research methods as two separate languages with few bilinguals. At the AAPOR conference in Boston last week, Paul Lavrakas mentioned a book he is working on with Margaret Roller which expands the Total Survey Error model to both quantitative and qualitative research methodology. I spoke with Margaret Roller about the book, and she emphasized the importance of qualitative researchers being able to talk more fluently and openly about methodology and quality controls. I believe that this is, albeit a huge challenge in wording and framing, a very important step for qualitative research, in part because quality frameworks lend credibility to qualitative research in the eyes of the wider research community. I wish this book a great deal of success, and I hope that it is able to find an audience and a frame outside the realm of survey research. (Although survey research has a great deal of foundational research, it is not well known outside of the field, and this book will merit a wider audience.)

But outside of this book, I’m not quite sure where or how the work of bringing these two distinct areas of research together can or will be done.

Also at the AAPOR conference last week, I participated in a panel on The Role of Blogs in Public Opinion Research (intro here and summary here). Blogs serve a special purpose in the field of research. Academic research is foundational and important, but the publication rate for papers is low, and the burden of proof is high. Articles that are published are crafted as an argument. But what of the bumps along the road? The meditations on methodology that arise? Blogs provide a way for researchers to work through challenges and to publish their failures. They provide an experimental space where fields and ideas can come together that previously hadn’t mixed. They provide a space for finding, testing, and crossing boundaries.

Beyond this, they are a vehicle for dissemination. They are accessible and informally advertised. The time frame to publish is short, the burden lower (although I’d like to believe that you have to earn your audience with your words). They are a public face to research.

I hope that we will continue to test these boundaries, to cross over unhelpful and obtrusive barriers like the quantitative/qualitative divide. I hope that we will be able to see that we all need each other as researchers, and that the quality research we all want will only be achieved through that mutual recognition.

Digital Democracy Remixed

I recently transitioned from my study of the many reasons why the voice of DC taxi drivers is largely absent from online discussions into a study of the powerful voice of the Kenyan people in shaping their political narrative using social media. I discovered a few interesting things about digital democracy and social media research along the way, and the contrast between the groups was particularly useful.

Here are some key points:

  • The methods of sensemaking that journalists use in social media are similar to other methods of social media research, except for a few key factors, the most important of which is that the bar for verification is higher
  • The search for identifiable news sources is important to journalists and stands in contrast with research methods that are built on anonymity. This means that the input that journalists will ultimately use will be on a smaller scale than the automated analyses of large datasets widely used in social media research.
  • The ultimate information sources for journalists will be small, but the phenomena that will capture their attention will likely be big. Although journalists need to dig deep into information, something in the large expanse of social media conversation must capture or flag their initial attention
  • It takes some social media savvy to catch the attention of journalists. This social media savvy outweighs linguistic correctness in the ultimate process of getting noticed. Journalists act as intermediaries between social media participants and a larger public audience, and part of that intermediary process is language correction.
  • Social media savvy is not just about being online. It is about participating in social media platforms in a publicly accessible way in regards to publicly relevant topics and using the patterned dialogic conventions of the platform on a scale that can ultimately draw attention. Many people and publics go online but do not do this.

The analysis of social media data for this project was particularly interesting. My data source was the comments following this posting on the Al Jazeera English Facebook feed.

[Image: the Al Jazeera English Facebook post]

It evolved quite organically. After a number of rounds of coding I noticed that I kept drawing diagrams in the margins of some of the comments. I combined the diagrams into this framework:

[Image: diagram of the coding framework]

Once this framework was built, I looked closely at the ways in which participants used it. Sometimes participants made distinct discursive moves between these levels. But when I tried to map the participants’ movements on their individual diagrams, I noticed that my depictions of their movements rarely matched when I returned to a diagram. Although my coding of the framework was very reliable, my coding of the movements was not at all. This led me to notice that oftentimes the frames were being used more indexically. Participants were indexing levels of the frame, and this indexical process created powerful frame shifts. So, on the level of Kenyan politics exclusively, Uhuru’s crimes had one meaning. But juxtaposed against the crimes of other national leaders, Uhuru’s crimes had a dramatically different meaning. Similarly, when the legitimacy of the ICC was questioned, the charges took on a dramatically different meaning. When Uhuru’s crimes were embedded in the postcolonial East vs. West dynamic, they shrank to the degree that the indictments seemed petty and hypocritical. And, ultimately, when religion was invoked, the persecution of one man seemed wholly irrelevant and sacrilegious.

These powerful frame shifts enable the Kenyan public to have a powerful, narrative changing voice in social media. And their social media savvy enables them to gain the attention of media sources that amplify their voices and thus redefine their public narrative.

[Image: “ready for CNN” Facebook comment]

Instagram is changing the way I see

I recently joined Instagram (I’m late, I know).

I joined because my daughter wanted to, because her friends had, to see what it was all about. She is artistic, and we like to talk about things like color combinations and camera angles, so Instagram is a good fit for us. But it’s quickly changing the way I understand photography. I’ve always been able to set up a good shot, and I’ve always had an eye for color. But I’ve never seriously followed up on any of it. It didn’t take long on Instagram to learn that an eye for framing and color is not enough to make for anything more than accidental great shots. The great shots that I see are the ones that pick deeper patterns or unexpected contrasts out of seemingly ordinary surroundings. They don’t simply capture beauty, they capture an unexpected natural order or a surprising contrast, or they tell a story. They make you gasp or they make you wonder. They share a vision, a moment, an insight. They’re like the beginning paragraph of a novel or the sketch outline of a poem. Realizing that, I have learned that capturing the obvious beauty around me is not enough. To find the good shots, I’ll need to leave my comfort zone, to feel or notice differently, to wonder what or who belongs in a space and what or who doesn’t, and why any of it would capture anyone’s interest. It’s not enough to see a door. I have to wonder what’s behind it. To my surprise, Instagram has taught me how to think like a writer again, how to find hidden narratives, how to feel contrast again.


Sure this makes for a pretty picture. But what is unexpected about it? Who belongs in this space? Who doesn’t? What would catch your eye?

This kind of change has a great value, of course, for a social media researcher. The kinds of connections that people forge on social media, the different ways in which people use platforms and the ways in which platforms shape the way we interact with the world around us, both virtual and real, are vitally important elements in the research process. In order to create valid, useful research in social media, the methods and thinking of the researcher have to follow closely with the methods and thinking of the users. If your sensemaking process imitates the sensemaking process of the users, you know that you’re working in the right direction, but if you ignore the behaviors and goals of the users, you have likely missed the point altogether. (For example, if you think of Twitter hashtags simply as an organizational scheme, you’ve missed the strategic, ironic, insightful and often humorous ways in which people use hashtags. Or if you think that hashtags naturally fall into specific patterns, you’re missing their dialogic nature.)

My current research involves the cycle between social media and journalism, and it runs across platforms. I am asking questions like ‘what gets picked up by reporters and why?’ and ‘what is designed for reporters to pick up?’ Some of these questions lead me to examine the differences between funny memes that circulate like wildfire through Twitter, leading to trends and a wider stage, and the more in-depth conversation on public Facebook pages, which cannot trend as easily and is far less punchy and digestible. What role does each play in the political process and in constituting news?

Of course, my current research asks more questions than these, but it’s currently under construction. I’d rather not invite you into the work zone until some of the pulp and debris have been swept aside…