MOOCs, Libraries, Online Learning and Thirsting for Knowledge

Let me begin by telling you a story.

The story began when I was in high school, searching for the right college. My mom and I took a road trip the summer after my junior year. We took our time and covered quite a bit of ground. I discovered Hot97 in New York and Pepto Bismol in North Carolina. I fell in love with upstate NY. After our return, I began the application and interview process. The most memorable moment came during my interview with a representative from Cornell. She asked if I had any burning questions, and I decided to go ahead and ask her a question that had really been nagging at me: What is the difference between a class at Cornell and a class at a community college? She was shocked and deeply offended. She told me that anyone could get a great education anywhere they could find a library, and that obviously I wasn’t right for Cornell.

This exchange has haunted me ever since. I do love to read, sure, but a library alone could never create the magic that a classroom can create. And the most magical classes happen when the students are engaged, interested, attentive, involved, participating, excited and following through with the homework. Part of this magic comes from the teacher. A great teacher can cultivate this kind of environment with ease, but most really struggle when it doesn’t happen organically.

I’m not sure everyone would agree that classrooms can be magical. I may have been spoiled with great classes. I’ve just finished a master’s program where I loved the classes, loved the reading and loved the assignments, but I’m not sure that every student would approach school with as much relish. I love learning.

Tomorrow I begin an educational experiment. I will start a course in Social Network Analysis from Statistics.com. This is a paid course, and I’ve chosen to be held accountable for my work (you can choose whether or not to submit homework for grading). Next month, the experiment will deepen when I begin my first MOOC. The MOOC is a data analysis class that teaches R. I’m very eager to learn R and to revisit some statistical methods that I haven’t been able to use much. The experiment will not be pure, because three of my coworkers have decided to attend the class as well. We’ll be fortunate enough to experience part of the course in person.

Going into this experiment, I’m not sure how I feel about distance education. Learning is something that I really love to do in person. But so many things that happen online draw the same kind of skepticism. I recently read articles and commentary about a controversial paper on Twitter research (SSRN id 2235423). The research is fodder for some great discussion, but many commenters on the news articles simply chose to trash Twitter. They bemoaned the 140-character limit so strongly that one would think Twitter is a land of Paris Hiltons and cats. I’d like the critics to know that yes, you can find Paris Hilton and cats on Twitter or just about anywhere else online. But you can also find something deeper, something that interests you. I recently introduced my nephew to Twitter. He’s a news junkie of sorts, and he was fascinated to see how much emerging news and quality commentary was available. The weekly #wjchats alone are reason to follow Twitter (#wjchat is a weekly methods chat among social media journalists). The reach of people on Twitter is unparalleled, and the ability to follow specific areas of interest in deeply engaged ways is also unparalleled. When used correctly, Twitter is a powerful tool.

Online learning also has the potential to be a powerful tool, but it will require engagement from the people involved. We will need to suspend our natural hesitancy and develop the necessary competencies. I really hope that my classmates will be willing to embrace the experience!

More Takeaways from the DC-AAPOR/WSS Summer Conference

Last week I shared my notes from the first two sessions of the DC-AAPOR/WSS Summer conference preview/review. Here are the rest of my notes, covering the remaining sessions:

Session 3: Accessing and Using Records

  • Side note: Some of us may benefit from a support group format re: matching administrative records
  • AIR experiment with incentives & consent to record linkage: a $2 incentive was sometimes worse than $0. A $20 incentive yielded the highest response rate and consent rate earliest in the process, and was cheaper than phone follow-up
    • If relevant data is available, $20 incentive can be tailored to likely nonrespondents
  • Evaluating race & Hispanic origin questions: this was a big theme over the course of this conference. The socially constructed nature of racial/ethnic identity doesn’t map well to survey questions. This Census study found changes in survey answers based on context, location, social position, education, ambiguity of phenotype, self-perception, question format, census tract, and proxy reports. Also a high number of missing answers.

Session 4: Adaptive Design in Government Surveys

  • A potpourri of quotes from this session that caught my eye:
    • Re: Frauke Kreuter, “the mother of all paradata”
    • Peter Miller: “Response rates is not the goal”
    • Robert Groves: “The way we do things is unsustainable”
    • Response rates are declining, costs are rising
    • Create a dashboard that works for your study. Include the relevant variables you need in order to have a decision-making tool that is tailored, dynamic and data-based (a rough sketch of the case-prioritization idea appears after this list)
      • Include paradata, response data
      • Include info re: mode switching, interventions
      • IMPORTANT: prioritize cases, prioritize modes, shift priorities with experience
      • Subsample open cases (not yet responded)
      • STOP data collection at a sensible point, before nonresponse bias starts to grow and before you waste money on expensive interventions that can actually make your data less representative
    • Interviewer paradata
      • Choose facts over inference
      • Presence or absence of key features (e.g. ease of access, condition of property)
        • (for a phone survey, these would probably include presence or absence of answer or answering mechanism, etc.)
        • For a household survey, household factors more helpful than neighborhood factors
    • Three kinds of adaptive design
      • Fixed design (ok, this is NOT adaptive)- treat all respondents the same
      • Preplanned adaptive- tailor mailing efforts in advance based on response propensity models
      • Real-time adaptive- adjust mailing efforts in response to real-time response data and evolving response propensities
    • Important aspect of adaptive design: document decisions and evaluate success, re-evaluate future strategy
    • What groups are under-responding and over-responding?
      • Develop propensity models
      • Design modes accordingly
      • Save $ by focusing resources
    • NSCG (National Survey of College Graduates) used adaptive design
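
Since I’m about to start learning R anyway, here is a minimal sketch of what the case-prioritization piece of such a dashboard might look like. Everything in it — the simulated frame, the variable names, the propensity model, and the subgroup summary — is my own invention for illustration, not anything presented in the session.

```r
# A rough sketch, not a presented method: score open cases with a response
# propensity model and surface a simple dashboard summary. All data simulated.

set.seed(42)

# Simulated sample frame with paradata and frame variables for 1,000 cases
frame <- data.frame(
  prior_contacts = rpois(1000, 2),                        # attempts so far
  urban          = rbinom(1000, 1, 0.6),                  # frame variable
  age_group      = sample(c("18-34", "35-64", "65+"), 1000, replace = TRUE),
  responded      = rbinom(1000, 1, 0.35)                  # 1 = completed
)

# Both preplanned and real-time adaptive designs start from a propensity model
prop_model <- glm(responded ~ prior_contacts + urban + age_group,
                  family = binomial, data = frame)
frame$propensity <- predict(prop_model, type = "response")

# Prioritize open (not-yet-responded) cases: the lowest-propensity cases are
# candidates for a mode switch, a larger incentive, or subsampling
open_cases <- frame[frame$responded == 0, ]
open_cases <- open_cases[order(open_cases$propensity), ]
head(open_cases)

# One dashboard panel: response rate by subgroup, refreshed as data come in
aggregate(responded ~ age_group + urban, data = frame, FUN = mean)
```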

Session 5: Public Opinion, Policy & Communication

  • Marital status checklist: categories are not mutually exclusive, so use checkboxes
    • Cain conducted a meta-analysis of federal survey practices
    • Same sex marriage
      • Because of DOMA, federal agencies were not able to use same-sex marriage data. Now that it has been struck down, the question is more important and has funding and policy issues resting on it
      • Exploring measurement:
        • Review of research
        • Focus groups
        • Cognitive interviews
        • Quantitative testing ← current phase
  • Estimates of same sex marriage dramatically inflated by straight people who select gender incorrectly (size/scope/scale)
  • ACS has revised marriage question
  • Instead of mother, father, parent 1, parent 2, …
    • Yields more same sex couples
    • Less nonresponse overall
    • Allow step, adopted, bio, foster, …
    • Plain language
      • Plain Writing Act of 2010
      • See handout on plain language for more info
      • Pretty much just good writing practice in general
      • Data visualization makeovers using Tufte guidance
        • Maybe not ideal makeovers, but the data makeover idea is a fun one. I’d like to see a data makeover event of some kind…

Session 7: Questionnaire Design and Evaluation

  • Getting your money’s worth! Targeting Resources to Make Cognitive Interviews Most Effective
    • When choosing a sample for cognitive interviews, focus on the people who tend to have the problems you’re investigating. Otherwise, the likelihood of choosing someone with the right problems is quite low
    • AIR experiment: cognitive interviews by phone
      • Need to use more skilled interviewers by phone, because more probing is necessary
      • Awkward silences more awkward without clues to what respondent is doing
      • Hard to evaluate graphics and layout by phone
      • When sharing a screen, interviewer should control mouse (they learned this the hard way)
      • On the plus side: more convenient for interviewee and interviewer, interviewers have access to more interviewees, and data quality is similar, or at least good enough
      • Try Skype or something?
      • Translation issues (much of the cognitive testing centered around translation issues- I’m not going into detail with them here, because these don’t transfer well from one survey to the next)
        • Education/international/translation: they tried to assign equivalent education groups and reflect those equivalences in the question, but when respondents didn’t agree with the equivalences suggested to them, they didn’t follow the questions as written

Poster session

  • One poster was laid out like Candy Land. Very cool, but people stopped by more to make jokes than substantive comments
  • One poster had signals from interviews that the respondent would not cooperate, or 101 signs that your interview will not go smoothly. I could see posting that in an interviewer break room…

Session 8: Identifying and Repairing Measurement and Coverage Errors

  • Health care reform survey: people believe what they believe in spite of the terms and definitions you supply
  • Paraphrased Groves (1989:449) “Although survey language can be standardized, there is no guarantee that interpretation will be the same”
  • Politeness can be a big barrier in interviewer/respondent communication
  • Reduce interviewer rewording
  • Be sure to bring interviewers on board with project goals (this was heavily emphasized on AAPORnet while we were at this conference- the importance of interviewer training, valuing the work of the interviewers, making sure the interviewers feel valued, collecting interviewer feedback and restrategizing during the fielding period and debriefing the interviewers after the fielding period is done)
  • Response format effects when measuring employment: slides requested

Takeaways from the DC AAPOR & WSS Summer Conference Preview/Review 2013

“The way we do things is unsustainable” – Robert Groves, Census

This week I attended a great conference sponsored by DC-AAPOR. I’m typing up my notes from the sessions to share, but there are a lot of notes. This covers the morning sessions on day 1.

We are coming to a new point of understanding with some of the more recent developments in survey research. For the first time in recent memory, the specter of limited budgets loomed large. Researchers weren’t just asking “How can I do my work better?” but “How can I target my improvements so that my work can be better, faster, and less expensive?”

Session 1: Understanding and Dealing with Nonresponse

  • Researchers have been exploring the potential of nonresponse propensity modeling for a while. In the past, nonresponse propensities were used as a way to cut down on bias and draw samples that should yield a more representative response group.
  • In this session, nonresponse propensity modeling was seen as a way of helping to determine a cutoff point in survey data collection (a rough sketch of a stopping rule follows this list).
  • Any data on mode propensity for individual respondents (in longitudinal surveys) or groups of respondents can be used to target people in their likely best mode from the beginning, instead of subjecting all respondents to the same mailing strategy. This can drastically reduce field time and costs.
  • Prepaid incentives have become accepted best practice in the world of incentives
  • Our usual methods of contact are becoming less and less successful. It’s good to think outside the box. (Or inside the box: one group used certified UPS mail to deliver prepaid incentives)
  • Dramatic increases in incentives dramatically increased response rates and lowered field times significantly
  • Larger lag times in longitudinal surveys led to a larger dropoff in response rate
  • Remember Leverage Salience Theory- people with a vested interest in a survey are more likely to respond (something to keep in mind when writing invitations, reminders, and other respondent materials, etc.)
  • Nonresponse propensity is important to keep in mind in the imputation phase as well as the mailing or fielding phase of a survey
  • Re-engaging respondents in longitudinal surveys is possible. Recontacting can be difficult, esp. finding updated contact information. It would be helpful to share strategies re: maiden names, Spanish names, etc.
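
The session tied the stopping decision to nonresponse propensities; as a cruder stand-in, here is a rough R sketch of a stopping rule based on how much the cumulative estimate is still moving. The daily completes, the key statistic, and the 0.5 percent threshold are all invented for illustration, not anything from the presentations.

```r
# A toy stopping rule, not a presented method: stop collecting data once the
# cumulative estimate of a key statistic has essentially stopped moving.
# All numbers below are simulated for illustration.

set.seed(7)

days       <- 1:40
daily_n    <- rpois(length(days), 25)                   # completes per day
all_values <- rnorm(sum(daily_n), mean = 3.2, sd = 1)   # key survey variable

# Cumulative estimate of the statistic at the end of each field day
cum_n   <- cumsum(daily_n)
cum_est <- sapply(cum_n, function(k) mean(all_values[1:k]))

# Day-over-day relative change in the estimate
rel_change <- abs(diff(cum_est)) / abs(cum_est[-length(cum_est)])

# First day on which the estimate moved less than 0.5% from the previous day;
# in practice you would weigh propensities, bias, and cost instead
stop_day <- which(rel_change < 0.005)[1] + 1
stop_day
```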


Session 2: Established Modes & New Technologies

  • ACASI > CAPI in terms of reporting sensitive info (audio computer-assisted self-interviewing beats interviewer administration for sensitive questions)
  • Desktop & mobile respondents follow similar profiles, vary significantly from distribution of traditional respondent profiles
  • Mobile respondents log frequent re-entries onto the surveys, so surveys must allow for saved progress and reentry
  • Mobile surveys that weren’t mobile optimized had the same completion rates as mobile surveys that were optimized. (There was some speculation that this will change over time, as web optimization becomes more standard)
  • iPhones do some mobile optimization of their own (this didn’t yield higher completion rates, though, just a prettier screenshot)
  • The authors of the Gallup paper (McGeeney & Marlar) developed a best practices matrix- I requested a copy
  • Smartphone users are more likely to take a break while completing a survey (according to paradata based on OS)
  • This session boasted a particularly fun presentation by Paul Schroeder (Abt SRBI) about distracted driving (a mobile survey! Hah!) in which he “saw the null hypothesis across a golden field, and they ran toward each other and embraced.” He used substantive responses, demographics, etc. to calculate the ideal number of call attempts for different survey subgroups; a rough sketch of that kind of calculation follows below. (This takes me back to a nonrespondent from a recent survey we fielded with a particularly large number of contact attempts, who replied to an e-mail invitation to ask if we had any self-respect left at that point)
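
I don’t have Schroeder’s actual method, but here is a rough sketch of the general idea in R: simulate call records, look at the marginal yield of each additional attempt by subgroup, and stop calling a group when an attempt stops paying for itself. The subgroups, contact probabilities, and the 2 percent threshold are all my own invented numbers.

```r
# A rough sketch of "how many call attempts are worth it" by subgroup.
# Call records, contact probabilities, and the 2% threshold are all simulated.

set.seed(99)

n_cases <- 2000
group   <- sample(c("younger", "older"), n_cases, replace = TRUE)

# Pretend older sample members are easier to reach on any given attempt
p_contact <- ifelse(group == "older", 0.25, 0.10)

# Attempt on which each case completed (NA if not completed within 10 attempts)
attempt_completed <- rgeom(n_cases, p_contact) + 1
attempt_completed[attempt_completed > 10] <- NA

# Share of each subgroup newly completed at each attempt number
marginal_yield <- function(g) {
  completes <- attempt_completed[group == g]
  sapply(1:10, function(a) sum(completes == a, na.rm = TRUE) / length(completes))
}
yield <- sapply(c("younger", "older"), marginal_yield)
rownames(yield) <- paste("attempt", 1:10)

# Toy rule: keep calling a subgroup while an attempt still completes at least
# 2% of that subgroup's cases
ideal_attempts <- apply(yield, 2, function(y) max(which(y >= 0.02)))
ideal_attempts
```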