Thursday, May 25, 2017

Language and thought: Shifting the axis of the Whorfian debate

A summary of the changing axes of the debate over effects of language on cognition.

This spring I've been teaching in Stanford's study abroad program in Santiago, Chile. It's been a wonderful experience to come back to a city where I was an exchange student, and to navigate the challenges of living in a different language again – this time with a family. My course here is called "Language and Thought" (syllabus), and it deals with the Whorfian question of the relationship between cognition and language. I proposed it because the effects of language on thought are often on the minds of people navigating life in a new language and culture, and my own interest in the topic grew out of trying to learn to speak other languages.

The exact form of the question of language and thought is one part of the general controversy surrounding this topic. But in Whorf's own words, his question was:
Are our own concepts of 'time,' 'space,' and 'matter' given in substantially the same form by experience to all men, or are they in part conditioned by the structure of particular languages? (Whorf, 1941)
This question has personal significance for me since I got my start in research working as an undergraduate RA for Lera Boroditsky on a project on cross-linguistic differences in color perception, and I later went on to study cross-linguistic differences in language for number as part of my PhD with Ted Gibson.

Wednesday, February 15, 2017

Damned if you do, damned if you don't

Here's a common puzzle that comes up all the time in discussions of replication in psychology. I call it the stimulus adaptation puzzle. Someone is doing an experiment with a population and they use a stimulus that they created to induce a psychological state of interest in that particular population. You would like to do a direct replication of their study, but you don't have access to that population. You have two options: 1) use the original stimulus with your population, or 2) create a new stimulus designed to induce the same psychological state in your population.

One example of this pattern comes from RPP, the study of 100 independent replications of psychology studies from 2008. Nosek and E. Gilbert blogged about one particular replication, in which the original study was run with Israelis and used, as part of its cover story, a description of a leave from a job, with one reason for the leave being military service. The replicators were faced with a choice: use the military-service cover story in the US, where their participants (UVA undergrads) mostly wouldn't have the same experience, or modify it to create a more population-suitable cover story. They chose the latter, and their replication failed. D. Gilbert et al. then responded that the UVA modification, a leave due to a honeymoon, was probably responsible for the difference in findings. Leaving aside the other questions raised by the critique (which we responded to), let's think about the general stimulus adaptation issue.

If you use the original stimulus with a new population, it may be inappropriate or incongruous, and a failure to elicit the same effect is explicable that way. On the other hand, if you create a new stimulus, perhaps it is unmatched in some way and fails to elicit the intended state for that reason. In other words, when it comes to cultural adaptation of stimuli for replication, you're damned if you do and damned if you don't. How do we address this issue?

Thursday, January 26, 2017

Paper submission checklist


It's getting to be CogSci submission time, and this year I'm thinking about setting more uniform standards for submission. Following my previous post on onboarding, here's a pre-submission checklist that I'm encouraging folks in my lab to follow. Note that, as described in that post, all our papers are written in RStudio using R Markdown, so each paper is a single document that compiles all analyses and figures into a single PDF. This process handles much of the error-checking of results that used to make up the bulk of my pre-submission checking.

Paper writing*

  • Is the first paragraph engaging and clear to an outsider who doesn't know this subfield?
  • Are multiple alternative hypotheses stated clearly in the introduction and linked to supporting prior literature?
  • Does the paragraph before the first model/experiment clearly lay out the plan of the paper?
  • Does the abstract describe the main contribution of the paper in terms that are accessible to a broad audience?
  • Does the first paragraph of the general discussion clearly describe the contributions of the paper to someone who hasn't read the results in detail? 
  • Is there a statement of limitations on the work (even a few lines) in the general discussion?

Friday, January 20, 2017

How do you argue for diversity?

During the last couple of months I have been serving as a member of my department's diversity committee, charged with examining policies relating to diversity in graduate and faculty recruitment. I have always valued the personal diversity of the people I work with. But until this experience, I hadn't realized how unexamined my thinking on this topic was, and I hadn't explicitly tried to make the case for diversity in our student population. So I was unprepared for the complexity of this issue.* As it turns out, different people have tremendously different intuitions on how to – and whether you should – argue for diversity in an educational setting.

In this post, I want to enumerate some of the arguments for diversity I've collected. I also want to lay out some of the conflicting intuitions about these arguments that I have encountered. But since diversity is an incredibly polarizing issue, I also want to be sure to give a number of caveats. First, this blogpost is about other people's responses to arguments for diversity; I'm not myself making any of these arguments here. I do personally care about diversity and personally find some of these arguments more or less compelling, but that's not what I'm writing about. Second, all of this discussion is grounded in the particular case of understanding diversity in the student body of educational institutions (especially in graduate education). I don't know enough about workplace issues to comment. Third, and somewhat obviously, I don't speak for anyone but myself. This post doesn't represent the views of Stanford, the Stanford psych department, or even the Stanford Psych diversity committee.

Tuesday, January 3, 2017

Onboarding

Reading twitter this morning, I saw a nice tweet by Page Piccinini on the topic of organizing project folders. Her scheme is exactly what I do and ask my students to do, and I said so. I then got a thoughtful reply from my old friend Adam Abeles, who pointed out that I need some kind of onboarding guide.
He's exactly right. Since I'm going to have some new folks joining my lab soon, no time like the present. Here's a brief checklist for what to expect from a new project.

Friday, November 4, 2016

Don't bar barplots, but use them cautiously

Should we outlaw the most common visualization in psychology? The hashtag #barbarplots has been introduced as part of a systematic campaign to promote a ban on bar graphs. The argument is simple: barplots mask the distributional form of the data, and all sorts of other visualization forms exist that are more flexible and precise, including boxplots, violin plots, and scatter plots. All of these show the distributional characteristics of a dataset more effectively than a bar plot.

Every time the issue gets discussed on twitter, I get a little bit rant-y; this post is my attempt to explain why. It's not because I fundamentally disagree with the argument. Barplots do mask important distributional facts about datasets. But there's more we have to take into account.
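To make the masking argument concrete, here's a minimal sketch (in Python, with made-up numbers) of two samples that would produce identical bars – same mean – despite having completely different shapes:

```python
import statistics as st

# Two hypothetical samples: a barplot of their means would show
# two identical bars, hiding the difference in shape.
unimodal = [4.0, 4.5, 5.0, 5.0, 5.5, 6.0]  # clustered around 5
bimodal = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]   # two clusters, nothing near 5

print(st.mean(unimodal), st.mean(bimodal))    # 5.0 5.0 -> identical bars
print(st.stdev(unimodal), st.stdev(bimodal))  # ~0.71 vs ~3.86
```

A boxplot or violin plot of these same two samples would make the bimodality obvious; the bars alone cannot.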

Friday, July 22, 2016

Preregister everything

Which methodological reforms will be most useful for increasing reproducibility and replicability? I've gone back and forth on this blog about a number of possible reforms to our methodological practices, and I've been particularly ambivalent in the past about preregistration, the process of registering methodological and analytic decisions prior to data collection. In a post from about three years ago, I worried that preregistration was too time-consuming for small-scale studies, even if it was appropriate for large-scale studies. And last year, I worried that preregistration validates the practice of running (and publishing) one-off studies, rather than running cumulative study sets. I now think these worries were overblown, and resulted from my lack of understanding of the process.

Instead, I want to argue here that we should be preregistering every experiment we do. The cost is extremely low and the benefits – both to the research process and to the credibility of our results – are substantial. Over the past few months, my lab has begun to preregister every study we run. You should too.

The key insights for me were:
  1. Different preregistrations can have different levels of detail. For some studies, you write down "we're going to run 24 participants in each condition, and exclude them if they don't finish." For others you specify the full analytic model and the plots you want to make. But there is no study for which you know nothing ahead of time. 
  2. You can save a ton of time by having default analytic practices that don't need to be registered every time. For us these live on our lab wiki (which is private but I've put a copy here).  
  3. It helps me confirm what's ready to run. If a study is registered, then I know that we're ready to collect data. I especially like the interface on AsPredicted, which asks coauthors to sign off before the registration goes through. (This also, incidentally, makes some authorship assumptions explicit.)