Call for Abstracts for 11th Annual International Conference on Stigma

I am excited to share the call for abstracts for a very special conference that bridges community members, practitioners, and researchers. This year’s conference will be held online and spread over the course of a week. It offers a cathartic opportunity for community members to share their experiences and a unique opportunity for researchers to engage directly with community members and practitioners about their work.

 

CALL FOR ABSTRACTS

11th ANNUAL INTERNATIONAL CONFERENCE ON STIGMA

Conference Theme: “Faces of Stigma”

 

Virtually Hosted by Howard University, Monday, November 16, 2020 – Friday, November 20, 2020 (9 AM – 4 PM EST)

Deadline for Submission: Friday, September 18, 2020 by 5:00pm (EST)

 

The goals of this virtual conference are to increase awareness of the stigma of HIV and other health conditions and to explore interventions to eradicate this stigma. The conference also serves to educate healthcare providers and the general public about stigma as both a human rights violation and a major barrier to the prevention and treatment of illness. We are looking for original research that addresses HIV-related or other mental or physical health-related stigma, to be presented as a VIRTUAL POSTER during the conference’s virtual poster session. Abstracts that focus on the effects of intersectional stigma, that is, how different layers of stigma (e.g., race, poverty, mental health) affect individuals or communities, are particularly encouraged. The Best Scientific Abstract Award recipient and the second-place scientific abstract author will have the opportunity to give a BRIEF VIRTUAL PRESENTATION of their work on November 19, 2020, in addition to participating in the virtual poster session held throughout the week of the conference. Monetary prizes will be given for the top three scientific abstracts: $500 for the Best Scientific Abstract Award recipient, $200 for second place, and $100 for third place.

Abstract Guidelines: Submit an abstract of no more than 300 words to Victoria Hoverman at vicki.hoverman@gmail.com and Shirin Sultana at ssultana@brockport.edu by 5:00pm (EST) on Friday, September 18, 2020. Please include the full name, position/job title, affiliation, and email address of each contributing author at the top of the page, along with the abstract title. Author information and the abstract title are not included in the 300-word count. The first author or another presenter must register for the conference if the abstract is accepted. The first author (or another presenter) of a winning abstract must attend the conference virtually to receive the prize. Students are welcome to submit abstracts and attend the conference!

Notifications will be sent by October 16, 2020. All accepted abstracts are presented as virtual posters only, with the exception of the Best Scientific Abstract Award winner and the second-place scientific abstract, whose authors will also give brief virtual oral presentations.

For questions about abstracts, contact Victoria Hoverman at vicki.hoverman@gmail.com and Shirin Sultana at ssultana@brockport.edu. For general questions about the conference, contact Patricia Houston at phouston@howard.edu.

Bridging Research, Community and Practice

I want to share a call for abstracts for a very special conference. It is rare that an event brings together researchers, practitioners, and community members:

 

CALL FOR ABSTRACTS

9th ANNUAL INTERNATIONAL CONFERENCE ON STIGMA

Conference Theme: “Bridging Research, Community and Practice”

 

Howard University, Washington, DC, Friday, November 16, 2018 (8 AM – 5 PM)

Deadline for Submission: Friday, September 14, 2018 by 5:00pm (EST)

 

The overarching goals of this conference are to increase awareness of the stigma of HIV and other health conditions and to explore interventions to eradicate this stigma. The conference also serves to educate healthcare providers and the general public about stigma as both a human rights violation and a major barrier to the prevention and treatment of illness. We are looking for original work that addresses HIV-related or other health-related stigma (such as mental illness stigma) to be presented as a POSTER during the conference poster session. The Best Scientific Abstract Award recipient and the second-place scientific abstract author will have the opportunity to give a BRIEF PRESENTATION of their work in addition to the poster session. Monetary prizes will be given for the top three scientific abstracts: $500 for the Best Scientific Abstract Award recipient, $200 for second place, and $100 for third place.

 

Abstract Guidelines: Submit an abstract of no more than 300 words to Victoria Hoverman at vicki.hoverman@gmail.com and Shirin Sultana at shirin.sultana@bison.howard.edu by 5:00pm (EST) on Friday, September 14, 2018. Please include the full name, position/job title, affiliation, and email address of each contributing author at the top of the page, along with the abstract title. Author information and the abstract title are not included in the 300-word count. The first author or a presenter must register for the conference if the abstract is accepted. Notifications will be sent by October 15, 2018. These are poster presentations only, with the exception of the Best Scientific Abstract Award winner and the second-place scientific abstract, whose authors will also give brief oral presentations. The first author of a winning abstract must attend the conference to receive the prize (or be willing to let an attending author or other representative accept it). Students are welcome!

 

For questions about abstracts, contact Victoria Hoverman at vicki.hoverman@gmail.com and/or Shirin Sultana at shirin.sultana@bison.howard.edu. For general questions about the conference, contact Patricia Houston at phouston@howard.edu.

The future doesn’t belong to big or small data. It belongs to the disruptors.

Research is evolving fast. There is less support for and more doubt about traditional methods, a fast-changing set of expectations from end-users, and a fast-evolving field of nontraditional methods and approaches. The future of research does not belong to big data or small data. It belongs to the disruptors: those who can recognize and challenge the assumptions underlying their methodologies. The future belongs to creative approaches, connected data, and collaboration.

 

Research requires listening and understanding.

In order to create research that is useful, we need a deep understanding of end-users, clients, and the context of the products we create. This requires listening, understanding, and creating opportunities to learn more, both by representing end-users and clients more directly in the development process and through qualitative research methods. Qualitative research provides methods for collecting and analyzing information about people, whether in person, virtually, or through behavioral data sources, and it must play a vitally important role in evolving research methods.

 

Research requires ever-changing analytic capabilities and creative, open minds.

We live in an era when data is plentiful, but the data looks different from what we saw in the past. We need capable and versatile technical workers who can process it, and we need the creativity to put it to use in ways that benefit end-users.

 

Research must embrace diversity.

Creative strategy and good user-focus can’t spring from echo chambers. We need to connect to divergent experiences and views early and often in order to create good products. People with divergent views can raise questions earlier in the development process and allow us to integrate holistic solutions for problems we could not have thought of alone. Diverse experiences allow us to be more creative because they provide more material to inspire us. And diversity is crucial for us to successfully compete in the global marketplace.

 

Research design must be iterative.

If we want to create new ways of analyzing and connecting data, we have to be free to experiment with new methods, test them, and let end-users try proposed solutions. Often what we create doesn’t function the way we expect it to. In an era when much data no longer needs to be designed and collected from scratch, we have the flexibility to find creative ideas (“ugly babies”), nurture them, test them out, and tweak what doesn’t work.

 

Silos no longer make sense in research.

It no longer makes sense to separate end users from developers or quantitative from qualitative. The best disruptive, creative potential lies in the mingling of methods and people. The most useful products are the ones that can be created collaboratively.

 

Research can be agile.

Agile development has become standard practice in much of the software development world, but it makes sense for research as well. Agile teams can involve end-users, UX researchers, quantitative methodologists and qualitative methodologists. Research can be built by agile, creative teams that feel free to question and inspire each other.

 

Creating high performing teams has never mattered more.

There is a growing body of great research about what makes a highly effective team. Effective teams are empathetic and open. They consider each other’s interests. They are practical and focused on the end product. They are comfortable asking questions and brainstorming solutions. They work collaboratively, and they celebrate their accomplishments.

 

The future of research is bigger than any one person or silo. It requires us to come together in new ways. I already see some firms moving in this direction, and kudos to them. A new era is here, and I’m excited for us all!

Reporting on the AAPOR 69th national conference in Anaheim #aapor

Last week AAPOR held its 69th annual conference in sunny (and hot) Anaheim, California.

Palm Trees in the conference center area

My biggest takeaway from this year’s conference is that AAPOR is a very healthy organization. AAPOR attendees were genuinely happy to be at the conference, enthusiastic about AAPOR and excited about the conference material. Many participants consider AAPOR their intellectual and professional home base and really relished the opportunity to be around kindred spirits (often socially awkward professionals who are genuinely excited about our niche). All of the presentations I saw firsthand or heard about were solid and dense, and the presenters were excited about their work and their findings. Membership, conference attendance, journal and conference submissions and volunteer participation are all quite strong.

 

At this point in time, the field of survey research faces a set of challenges. Nonresponse is a growing problem, and other forms of data and analysis are increasingly in vogue. I was really excited to see that AAPOR members are meeting these and other challenges head on. For this particular write-up, I will focus on these two challenges. I hope that others will address some of the other main conference themes and add their notes and resources to those I’ve gathered below.

 

As survey nonresponse becomes more of a challenge, survey researchers are moving from traditional measures of response quality (e.g., response rates) to newer measures (e.g., nonresponse bias). Researchers are increasingly anchoring their discussions about survey quality within the Total Survey Error framework, which offers a contextual basis for understanding the problem more deeply. Instead of focusing on an across-the-board rise in response rates, researchers are targeting their resources with the goal of reducing nonresponse bias. This includes understanding response propensity (Who is likely not to respond to the survey? Who is most likely to drop out of a panel study? What are some of the barriers to survey participation?), looking for substantive measures that correlate with response propensity (e.g., are small, rural private schools less likely to respond to a school survey? Are substance users less likely to respond to a survey about substance abuse?), and continuously monitoring paradata during the collection period (e.g., developing differential strategies by disposition code, focusing the most successful interviewers on the most reluctant cases, or concentrating collection strategies where they are expected to be most effective). This area of strategizing emerged in AAPOR circles a few years ago with discussions of nonresponse propensity modeling, a process which is much more accessible than it sounds, and it has evolved into a practical and useful tool that can help a research shop of any size increase survey quality and lower costs.
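
To make this concrete, here is a minimal sketch of what a response propensity model might look like in Python. The file name, frame variables, and response flag are all hypothetical; this illustrates the general idea rather than any specific presenter’s method.

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical sample frame: one row per sampled case, with frame variables
# and a 0/1 'responded' flag from a prior wave or early fieldwork.
frame = pd.read_csv("sample_frame.csv")
predictors = ["school_size", "is_rural", "is_private"]  # hypothetical frame variables

# Model the propensity to respond as a function of the frame variables.
model = LogisticRegression().fit(frame[predictors], frame["responded"])
frame["propensity"] = model.predict_proba(frame[predictors])[:, 1]

# Concentrate follow-up resources on the least likely responders.
reluctant_cases = frame.sort_values("propensity").head(100)

Cases with low predicted propensity can then be routed to the most successful interviewers or to alternate contact strategies during the collection period.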

 

Another big takeaway for me was the volume of discussions and presentations that spoke to the fast-emerging world of data science and big data. Many people spoke of the importance of our voice in the realm of data science, particularly with our professional focus on understanding and mitigating errors in the research process. A few practitioners applied error frameworks to analyses of organic data, and some talks were based on analyses of organic data. This year AAPOR also sponsored a research hack to investigate the potential for Instagram as a research tool for Feed the Hungry. These discussions, presentations and activities made it clear that AAPOR will continue to have a strong voice in the changing research environment, and the task force reports and initiatives from both the membership and education committees reinforced AAPOR’s ability to be right on top of the many changes afoot. I’m eager to see AAPOR’s changing role take shape.

“If you had asked social scientists even 20 years ago what powers they dreamed of acquiring, they might have cited the capacity to track the behaviors, purchases, movements, interactions, and thoughts of whole cities of people, in real time.” – N.A.  Christakis. 24 June 2011. New York Times, via Craig Hill (RTI)

 

AAPOR is a very strong, well-loved organization, and it is building a very strong future on a very solid foundation.

 

 


 

MORE DETAILED NOTES:

This conference is huge, and I could not possibly cover all of it on my own, so I will share my notes along with the notes and resources I can collect from other attendees. If you have any materials to share, please send them to me! The more information I am able to collect here, the better a resource this will be for people interested in AAPOR or the conference.

 

Patrick Ruffini assembled the tweets from the conference into this Storify.

 

Annie, the blogger behind LoveStats, wrote quite a few posts from the conference. I sat on a panel with Annie about the role of blogs in public opinion research (organized by Joe Murphy for the 68th annual AAPOR conference), and Annie blew me away by live-blogging the event from the stage! Clearly, she is the fastest blogger in the West and the East! Her posts from Anaheim included:

Your Significance Test Proves Nothing

Do panel companies manage their panels?

Gender bias among AAPOR presenters

What I hate about you AAPOR

How to correct scale distribution errors

What I like about you AAPOR

I poo poo on your significance tests

When is survey burden the fault of the responders?

How many survey contacts is enough?

 

My full notes are available here (please excuse any formatting irregularities). Unfortunately, they are not as extensive as I would have liked, because wifi and power were in short supply. I also wish I had settled into a better seat and covered some of the talks in greater detail, including Don Dillman’s talk, which was a real highlight of the conference!

I believe Rob Santos’ presidential address will be available for viewing or listening soon, if it is not already. He is a very eloquent speaker, and he made some really great points, so it will be well worth your time.

 

Let’s talk about data cleaning

Data cleaning has a bad rep. In fact, it has long been considered the grunt work of the data analysis enterprise. I recently came across a piece of writing in the Harvard Business Review that lamented the amount of time data scientists spend cleaning their data. The author feared that data scientists’ skills were being wasted on the cleaning process when they could be using their time for the analyses we so desperately need them to do.

I’ll admit that I haven’t always loved the process of cleaning data. But my view of the process has evolved significantly over the last few years.

As a survey researcher, I used to begin my cleaning process with a tall stack of paper forms. Answers that did not make logical sense during the checking process sparked a trip to the file folders to find the form in question. The forms often held physical evidence of indecision on the part of the respondent, such as eraser marks or an explanation in the margin, which could not be reflected properly by the data entry person. We lost this part of the process when we moved to web surveys. It sometimes felt like a web survey left the respondent no way to communicate with the researcher about their unique situation. Data cleaning lost its personalized feel and detective-story luster and became routine and tedious.

Despite some of the affordances of the move to web surveys, much of the cleaning process stayed rooted in the old techniques. Each form had its own id number, and the programmers would use those id numbers for corrections:

if id=1234567, set var1=5, set var7=62

At this point a “good programmer” would also document the changes for future collaborators:

*this person was not actually a forest ranger, and they were born in 1962
if id=1234567, set var1=5, set var7=62

Making these changes grew tedious very quickly, and the process seemed to drag on for ages. The researcher would check the data for potential errors, scour the records that could hold those errors for any kind of evidence of the respondent’s intentions, and then handle each form one at a time.

My techniques for cleaning data have changed dramatically since those days. My goal now is to use id numbers as rarely as possible and instead to ask myself questions like “how can I tell that these people are not forest rangers?” The answer to such a question evokes a subtly different technique:

* these people are not actually forest rangers
if var7=35 and var1=2 and var10 contains ‘fire fighter’, set var1=5

This technique requires honing and testing (adjusting for precision and recall), but I’ve found it to be more efficient, faster, more comprehensive and, most of all, more fun (oh hallelujah!). It makes me wonder whether we have perpetually undercut the quality of our data cleaning simply because we hold the process in such low esteem.
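
To make the contrast concrete, here is a minimal sketch of both styles in Python with pandas. The file name, variable names, and values are hypothetical, carried over from the pseudocode above.

import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical data file

# Old style: one hand-documented correction per id number.
# This person was not actually a forest ranger, and they were born in 1962.
df.loc[df["id"] == 1234567, ["var1", "var7"]] = [5, 62]

# Newer style: one rule that catches every case with the same signature.
# These people are not actually forest rangers.
not_rangers = (
    (df["var7"] == 35)
    & (df["var1"] == 2)
    & df["var10"].str.contains("fire fighter", case=False, na=False)
)
df.loc[not_rangers, "var1"] = 5

The rule-based version is the one that needs honing: widening or narrowing the conditions trades recall against precision, which is exactly the testing process described above.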

So far I have not discussed data cleaning for other types of data. I’m currently working on a corpus of Twitter data, and I don’t see much of a difference in the cleaning process. The data types and programming statements I use are different, but the process is very close. It’s an interesting and challenging process that involves detective work, a growing understanding of the intricacies of the dataset, a growing set of programming skills, and a growing understanding of the natural language use in the dataset. The process mirrors the analysis to such a degree that I’m not really sure why it would be such a bad thing for analysts to be involved in data cleaning.
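
As one illustration of what the same process looks like on the Twitter side, here is a small sketch of text normalization. The rules and the sample tweet are hypothetical; a real corpus needs rules honed against its own quirks.

import re

def normalize_tweet(text):
    # Mask links and mentions so cleaning rules can match on the remaining
    # language, then collapse stray whitespace. A sketch, not a standard recipe.
    text = re.sub(r"https?://\S+", "<URL>", text)
    text = re.sub(r"@\w+", "<USER>", text)
    return re.sub(r"\s+", " ", text).strip()

print(normalize_tweet("Loving the #AAPOR talks w/ @annie!  http://example.com"))
# -> Loving the #AAPOR talks w/ <USER>! <URL>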

I’d be interested to hear what my readers have to say about this. Is our notion of the value and challenge of data cleaning antiquated? Is data cleaning a burden that an analyst should bear? And why is there so little talk about data cleaning, when we could all stand to learn so much from each other in the way of data structuring code and more?

Professional Identity: Who am I? And who are you?

Last night I acted as a mentor at the annual Career Exploration Expo sponsored by my graduate program. Many of the students had questions about developing a professional identity. This makes sense, of course, because graduate school is an important time for discovering and developing a professional identity.

People enter our program (and many others) with a wide variety of backgrounds and interests. They choose from a variety of classes that fit their interests and goals. And then they try to map their experience onto job categories. But boxes are difficult to climb into and out of, and students soon discover that none of the boxes is a perfect fit.

I experienced this myself. I entered the program with an extensive and unquestioned background in survey research. Early in my college years (while I was studying and working in neuropsychology), I began to manage a clinical dataset in SPSS. Working with patients and patient files was very interesting, but to my surprise, working with data in statistical software felt right to me, much in the way that Ethiopian meals include injera and Japanese meals include rice (Ohnuki-Tierney, 2006 [1997]). I was actually teased by my friends about my love of data! This affinity served me well, and I enjoyed working with a variety of datasets while moving across fields and statistical programming languages.

But my graduate program blew my mind. I felt like I had spent my life underwater and then discovered the sky and continents. I discovered many new kinds of data and analytic strategies, all of which were challenging and rewarding. These discoveries inspired me to start this blog and have inspired me to attend a wide variety of events and read some very interesting work that I never would have discovered on my own. Hopefully followers of this blog have enjoyed this journey as much as I have!

As a recent graduate, I sometimes feel torn between worlds. I still work as a survey researcher, but I’m inspired by research methods that are beyond the scope of my regular work. Another recent graduate of our program who is involved in market research framed her strategy in a way that really resonated with me: “I give my customers what they want and something else, and they grow to appreciate the ‘something else.'” That sums up my current strategy. I do the survey management and analysis that is expected of me in a timely, high-quality way. But I am also using my newly acquired knowledge to incorporate text analysis into our data cleaning process in order to streamline it, increasing both the speed and the quality of the process and making it better equipped to handle the data from future surveys. I do the traditional quantitative analyses, but I supplement them with analyses of the open-ended responses that use more flexible text analytic strategies. These analyses spark more quantitative analyses and make for much better (richer, more readable, and more inspired) reports.

Our goal as professionals should be to find a professional identity that best capitalizes on our unique knowledge, skills, and abilities. There is only one professional identity that does all of that, and it is the one you have already chosen and continue to choose every day. We are faced with countless choices about what classes to take, what to read, what to attend, what to become involved in, and what to prioritize, and we make countless assessments about each: Was it worthwhile? Did I enjoy it? Would I do it again? Each of these choices constitutes your own unique professional self, a self you are continually manufacturing. You are composed of your past, your present, and your future, and your future will undoubtedly be a continuation of your past and present. The best career coach you have is inside of you.

Now, your professional identity is much more uniquely and narrowly focused than the generic titles and fields you see in the professional marketplace. Keep in mind that each job listing you see represents a set of needs that a particular organization has. Is this a set of needs that you are ready to fill? Is this a set of needs that you would like to fill? You are the only one who knows the answers to these questions.

Because it turns out that you are your best career coach, and you have been all along.

Great description of a census at Kakuma refugee camp

It’s always fun for a professional survey researcher to stumble upon a great pop-cultural reference to a survey. Yesterday I heard a great description of a census taken at the Kakuma refugee camp in Kenya, in the book I’m currently reading: What Is the What by Dave Eggers (great book; I highly recommend it!). The book itself is fiction, loosely based on a true story, so this account likely stems from a combination of observation and imagination. It reminds me of some of the field reports and ethnographic findings from other intercultural survey efforts, both national (the US census) and international or multinational.

To set the stage, Achak is the main character and narrator of the story. He is one of the “lost boys” of Sudan, and he found his way to Kakuma after a long and storied escape from his war-ravaged hometown. At Kakuma he was taken in by another Dinka man, named Gop, who acts as a kind of father to Achak.

What Is the What by Dave Eggers

“The announcement of the census was made while Gop was waiting for the coming of his wife and daughters, and this complicated his peace of mind. To serve us, to feed us, the UNHCR and Kakuma’s many aid groups needed to know how many refugees were at the camp. Thus, in 1994 they announced they would count us. It would only take a few days, they said. To the organizers I am sure it seemed a very simple, necessary, and uncontroversial directive. But for the Sudanese elders, it was anything but.

—What do you think they have planned? Gop Chol wondered aloud.

I didn’t know what he meant by this, but soon I understood what had him, and the majority of Sudanese elders, greatly concerned. Some learned elders were reminded of the colonial era, when Africans were made to bear badges of identification on their necks.

—Could this counting be a pretext of a new colonial period? Gop mused.—It’s very possible. Probable even!

I said nothing.

At the same time, there were practical, less symbolic, reasons to oppose the census, including the fact that many elders imagined that it would decrease, not increase, our rations. If they discovered there were fewer of us than had been assumed, the food donations from the rest of the world would drop. The more pressing and widespread fear among young and old at Kakuma was that the census would be a way for the UN to kill us all. These fears were only exacerbated when the fences were erected.

The UN workers had begun to assemble barriers, six feet tall and arranged like hallways. The fences would ensure that we would walk single file on our way to be counted, and thus counted only once. Even those among us, the younger Sudanese primarily, who were not so worried until then, became gravely concerned when the fences went up. It was a malevolent-looking thing, that maze of fencing, orange and opaque. Soon even the best educated among us bought into the suspicion that this was a plan to eliminate the Dinka. Most of the Sudanese my age had learned of the Holocaust, and were convinced that this was a plan much like that used to eliminate the Jews in Germany and Poland. I was dubious of the growing paranoia, but Gop was a believer. As rational a man as he was, he had a long memory for injustices visited upon the people of Sudan.

—What isn’t possible, boy? he demanded.—See where we are? You tell me what isn’t possible at this time in Africa!

But I had no reason to distrust the UN. They had been feeding us at Kakuma for years. There was not enough food, but they were the ones providing for everyone, and thus it seemed nonsensical that they would kill us after all this time.

—Yes, he reasoned,—but see, perhaps now the food has run out. The food is gone, there’s no more money, and Khartoum has paid the UN to kill us. So the UN gets two things: they get to save food, and they are paid to get rid of us.

—But how will they get away with it?

—That’s easy, Achak. They say that we caught a disease only the Dinka can get. There are always illnesses unique to certain people, and this is what will happen. They’ll say there was a Dinka plague, and that all the Sudanese are dead. This is how they’ll justify killing every last one of us.

—That’s impossible, I said.

—Is it? he asked.—Was Rwanda impossible?

I still thought that Gop’s theory was unreliable, but I also knew that I should not forget that there were a great number of people who would be happy if the Dinka were dead. So for a few days, I did not make up my mind about the head count. Meanwhile, public sentiment was solidifying against our participation, especially when it was revealed that the fingers of all those counted, after being counted, would be dipped in ink.

—Why the ink? Gop asked. I didn’t know.

—The ink is a fail-safe measure to ensure the Sudanese will be exterminated.

I said nothing, and he elaborated. Surely if the UN did not kill us Dinka while in the lines, he theorized, they would kill us with this ink on the fingers. How could the ink be removed? It would, he thought, enter our bodies when we ate.

—This seems very much like what they did to the Jews, Gop said.

People spoke a lot about the Jews in those days, which was odd, considering that a short time before, most of the boys I knew thought the Jews were an extinct race. Before we learned about the Holocaust in school, in church we had been taught rather crudely that the Jews had aided in the killing of Jesus Christ. In those teachings, it was never intimated that the Jews were a people still inhabiting the earth. We thought of them as mythological creatures who did not exist outside the stories of the Bible. The night before the census, the entire series of fences, almost a mile long, was torn down. No one took responsibility, but many were quietly satisfied.

In the end, after countless meetings with the Kenyan leadership at the camp, the Sudanese elders were convinced that the head count was legitimate and was needed to provide better services to the refugees. The fences were rebuilt, and the census was conducted a few weeks later. But in a way, those who feared the census were correct, in that nothing very good came from it. After the count, there was less food, fewer services, even the departure of a few smaller programs. When they were done counting, the population of Kakuma had decreased by eight thousand people in one day.

How had the UNHCR miscounted our numbers before the census? The answer is called recycling.

Recycling was popular at Kakuma and is favored at most refugee camps, and any refugee anywhere in the world is familiar with the concept, even if they have a different name for it. The essence of the idea is that one can leave the camp and re-enter as a different person, thus keeping his first ration card and getting another when he enters again under a new name. This means that the recycler can eat twice as much as he did before, or, if he chooses to trade the extra rations, he can buy or otherwise obtain anything else he needs and is not being given by the UN—sugar, meat, vegetables. The trading resulting from extra ration cards provided the basis for a vast secondary economy at Kakuma, and kept thousands of refugees from anemia and related illnesses. At any given time, the administrators of Kakuma thought they were feeding eight thousand more people than they actually were. No one felt guilty about this small numerical deception.

The ration-card economy made commerce possible, and the ability of different groups to manipulate and thrive within the system led soon enough to a sort of social hierarchy at Kakuma.”

Planning another Online Research, Offline lunch

I’m planning another Online Research, Offline lunch for researchers in the Washington DC area later this month. The specific date and location are TBA, but it will be toward the end of February near Metro Center.

These lunches are designed to welcome professionals and students involved in online research across a variety of disciplines, fields and sectors. Past attendees have had a wide array of interests and specialties, including usability and interface design, data science, natural language processing, social network analysis, social media monitoring, discourse analysis, netnography, digital humanities and library science.

The goal of this series is to provide an informal venue for a diverse set of researchers to talk with each other and gain a wider context for understanding their work. The lunches are an informal, flexible way for researchers to meet, talk, and learn. Although Washington DC is a great meeting place for specific areas of online research, there are few informal opportunities for interdisciplinary gatherings of professionals and academics.

Here is a form that can be used to add new people to the list. If you’re already on the list, you do not need to sign up again. Please feel free to share the form with anyone else who may be interested:

Great readings that might shake you to your academic core? I’m compiling a list

In the spirit of research readings that might shake you to your academic core, I’m compiling a list. Please reply to this thread with any suggestions you have to add. They can be anything from short blog posts (microblog?) to research articles to books. What’s on your ‘must read’ list?

Here are a couple of mine to kick us off:

 

Charles Goodwin’s Professional Vision paper

I don’t think I’ve referred to any paper as much as this one. It’s about the way our professional training shapes how we see the things around us. Shortly after reading it, I found myself in the gym thinking about commonalities between weight stacks and survey scales: I expect myself to be at a certain relative strength, and when that expectation doesn’t match where I need to place my pin, I’m a little thrown off.

It also has a deep analysis of the Rodney King verdict.

 

Revitalizing Chinatown Into a Heterotopia by Jia Lou

This article is based on a geosemiotic analysis of DC’s Chinatown. It is one of the articles that helped me to see that data really can come in all forms.

 

After Method: Mess in Social Science Research by John Law

This is the book that inspired this list. It also inspired this blog post.

 

On Postapocalyptic Research Methods and Failures, Honesty and Progress in Research

I’m reading a book that I like to call “post-apocalyptic research methodology.” It’s ‘After Method: Mess in Social Science Research’ by John Law. At this point the book reads like a novel. I can’t quite imagine where he’ll take his premise, but I’m searching for clues and turning pages. In the meantime, I’ve been thinking quite a bit about failure, honesty, uncertainty and humility in research.

How is the current research environment like a utopian society?

The research process is often idealized in public spaces. Whether the goal of the researcher is to publish a paper based on their research, present to an audience of colleagues or stakeholders, or market the product of their research, all researchers have a vested interest in the smoothness of the research process. We expect to approach a topic, perform a series of time-tested methods or develop innovative new methods with strong historical traditions, apply these methods as neatly as possible, and end up with a series of strong themes that describe the majority of our data. However, in Law’s words, “Parts of the world are caught in our ethnographies, our histories and our statistics. But other parts are not, and if they are then this is because they have been distorted into clarity.” (p. 2) We think of methods as a neutral middle step rather than a political process, and this way of thinking allows us to treat reliability and validity as surface measures rather than inherent questions. “Method, as we usually imagine it, is a system for offering more or less bankable guarantees.” (p. 9)

Law points out that research methods are, in practice, very limited in the social sciences: “talk of method still tends to summon up a relatively limited repertoire of responses.” (p. 3) He also points out that every research method is inherently political. Every method involves a way of seeing or a way of looking at the data, and that perspective maps onto the findings it yields. Different perspectives yield different findings, whether subtly or dramatically so. Law’s central assertion is that methods don’t just describe social realities but also help to create them. Recognizing the footprint of our own methods is a step toward better understanding our data and results.

In practice, the results we focus on are largely true. They describe a large portion of the data, ascribing the rest to noise or natural variation. The more of our data our results describe, the more confident we feel about our data and our analysis.

Law argues that this smoothed version of reality is far enough from the natural world that it should give us pause. Research works to create a world that is simple, falls into place neatly, and resembles nothing we know: “‘research methods’ passed down to us after a century of social science tend to work on the assumption that the world is properly to be understood as a set of fairly specific, determinate, and more or less identifiable processes.” (p. 5) He suggests instead that we should recognize the parts that don’t fit, the areas of uncertainty or chaos, and the areas where our methods fail. “While standard methods are often extremely good at what they do, they are badly adapted to the study of the ephemeral, the indefinite and the irregular.” (p. 4) “Regularities and standardizations are incredibly powerful tools, but they set limits.” (p. 6)

Is the Utopia starting to fall apart?

The current research environment is a bit different from that of the past. More people can publish research at any stage, without peer review, using media like blogs. Researchers can discuss their work while it is in progress using social media like Twitter. There is more room to fail publicly than there has ever been before, and this allows for public acknowledgment of some of the difficulties and challenges that researchers face.

Building from ashes

Law briefly introduces his vision on p. 11: “My hope is that we can learn to live in a way that is less dependent on the automatic. To live more in and through slow method, or vulnerable method, or quiet method. Multiple method. Modest method. Uncertain method. Diverse method.”

Many modern discussions of management talk about the value of failure as an innovative tool. Some of the newer quality-control measures in aviation and medicine hinge on recognizing failure and retooling to prevent or limit the recurrence of specific types of events. The theory behind these measures is that failure is normal and natural, and we could never predict the many ways in which failure could happen. So, instead of exclusively trying to predict or prohibit failure, we should embrace failures as opportunities to learn.

Here we can ask: what can researchers learn from the failures of the methods?

The first lesson to accompany any failure is humility. Recognizing our mistakes entails recognizing areas where we fell short, where our efforts were not enough. Acknowledging that our research training cannot be universal, that applying research methods isn’t always straightforward and simple, and that we cannot be everything to everyone could be an important stage of professional development.

How could research methodology develop differently if it were to embrace the uncertain, the chaotic and the places where we fall short?

Another question: what opportunities do researchers have to be publicly humble? How can those spaces become places to learn and to innovate?

Note: This blog post is dedicated to Dr. Jeffrey Keefer @ NYU, who introduced me to this very cool book and has done some great work to bring researchers together.