“Given the complexities of modern society, both domestically and internationally, the development of sophisticated tools in the computational social sciences will assume increasing importance as we address challenges in diplomacy, natural resource allocation, the migration of populations owing to drought, famine, and war, and a range of other issues.”
– Kim L. Boyer, Interim Dean of UAlbany’s College of Engineering and Applied Sciences.
You may have read in late September that the ratio of women receiving Royal Society funding has “plummeted from one in three in 2010 to one in 20 this year.” The Society also awards the Dorothy Hodgkin Fellowships to early-career women researchers, but that scheme exists to boost women’s participation in science, not to offset or mask problems in the Society’s mainstream Fellowship programme.
The Royal Society was silent for a couple of days after its list of new Fellows was made public, despite a loud outcry from the scientific community on social media and in opinion columns. The Society’s President, Sir Paul Nurse, eventually announced an investigation. The question is: why did the Society wait until the results were public before assessing its own programme?
I want to stress that while I’m using the Royal Society’s Fellowship outcomes as a case study, the issue I am illustrating is the reactive treatment of gender bias across all fields of Science, Technology, Engineering and Mathematics (STEM). The point here is to tease out institutional patterns and to make the case that institutional approaches are needed to address gender inequality. While this point may seem obvious, the fact is that inequality in science, as in other spheres of social life, is still treated as a surprise. This is because, on the whole, organisations (and society in general) remain reactive in addressing gender inequality. Diversity is an afterthought, when it should be a proactive and ongoing project at the organisational and societal levels.
This is the first in a series of articles I’m writing on why the scientific community, across its various disciplines, needs to re-examine its position on the problem of inequality in STEM. The case I am building is for methodological rigour and interdisciplinary collaboration as the way to work towards gender inclusion.
The Milgram Experiment, which supposedly shows that all human beings are capable of participating in torture under the watchful eye of an authority figure, has captivated popular culture for half a century. Why is that, given that there are finer social science studies out there? This post describes the experiment along with another famous psychology study, the Stanford Prison Experiment. I critique both studies and explore the public’s fascination with them despite their methodological flaws. I provide a case study of how popular culture reproduces the Milgram Experiment as a universal “truth” about humanity’s innate propensity towards “evil.” The truth is that the Milgram Experiment is deeply flawed and tells us very little about any genetic predisposition for torture. What the Milgram Experiment does show, however, is that storytelling falls back on simplistic narratives about good and evil. Social science, in this case psychology and neuroscience, becomes just another plot device for reproducing the basic notion that “good people” can be made to do “bad things.” The social reality is more complex and more disturbing, because it forces us to re-examine the relationship between obedience, culture and social interaction.
When most people think of labs, they imagine scientists in white coats staring into microscopes, carrying around beakers of bubbling chemicals, and holding test tubes over Bunsen burners. In social science, the reality is much more mundane. It’s usually just a room full of computers with software that may or may not be useful and may or may not be up to date. Even less compelling are the labs attached to statistical methods classes. For the last couple of years my own classes have been the worst-case scenario: I just get up and lecture about how my students should use some particular piece of software to apply the methods we’ve been learning in the “lecture” part of the class. It doesn’t have to be this way.
Over the next few months I will have the opportunity to teach two new methods classes and completely re-invent how I incorporate labs. I had lunch with Mayur Desai the other day; he does a great job with labs in his classes, and he inspired most (but not all) of the ideas here. This is what I’m thinking:
No lectures. None. Students enter the lab and get their assignment and spend the rest of the class trying to complete it.
Each assignment starts with a data set (preferably real) and a blank screen–that is, I don’t give them any code. Their job is to answer a substantive question by applying methods we’ve covered recently to the data.
Students work in pairs and take turns driving. I think this keeps students focused and they can teach each other. It also means only half the class has to have laptops if I want to implement a lab in a regular classroom.
I’m around to answer questions. In this way, it’s very different from a problem set, where getting stuck on something dumb for hours at a time is a common occurrence. Struggling with a problem is good for learning, but banging your head against a wall isn’t an efficient use of time.
The end product should be similar to results they might find in a published paper. Sometimes I’ll provide an empty table they must fill in and other times they will produce their own tables of results from scratch.
There should be opportunities for quicker/more advanced students to do more. One size does not fit all.
While it’s possible to use any statistical analysis tool in a lab successfully, I do think some packages are better than others. Most students already know Microsoft Excel and doing basic analyses (even regression) using it is easy, but you really hit a wall when you want to do anything even a little sophisticated. SAS is powerful, but there is a steep learning curve. My plan is to use Stata. You can browse your data in a spreadsheet style interface. You can play with commands through the menus and when you choose one, it shows you the command-line equivalent. You can work interactively at the command-line or build programs (using those same commands) in an editor. And the documentation is excellent and available online.
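For a flavour of what that Stata workflow looks like in a lab session, here is a minimal sketch; the bundled auto dataset and the particular regression are placeholders for whatever a given assignment actually asks, not part of any real lab:

```stata
* Load a sample dataset that ships with Stata (stand-in for real lab data)
sysuse auto, clear

* Inspect the data in the spreadsheet-style viewer
browse

* Get to know the variables before modelling anything
summarize price mpg weight foreign

* Fit a regression -- the same command the menu dialog would print out
regress price mpg weight foreign
```

Because the menus echo the equivalent command line, students can start by pointing and clicking and then paste what they see into a do-file, which is exactly the on-ramp from interactive exploration to reproducible scripts.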
“Human social science is stereotyped as the land of fuzzy concepts and fuzzier minds, with hydra-headed jargon lurking in the shadows that will paralyze you with the poison of vagueness and ambiguity before you even have a chance to try and figure out what in the hell they might mean…The stereotype has some truth to it. But in part – only in part – the word salad is excusable. The real world, filled as it is with variable human intentionalities, is messy…If the phenomena are messy, so will be their representation in the language of science, however noble the scientist’s struggles to be precise. Ambiguity and vagueness are characteristics of the phenomena. Without those characteristics, the human social world would implode.”
“Like others of my generation, for me a Ph.D. in the social sciences meant that results were only meaningful if full of numbers, chi squares, and cluster diagrams that had a statistical significance of .05. Although there was something very seductive about artfully uncovering elegant patterns in this manner, the relative trust in a scientific method and distrust of the ‘art’ of studying human behaviour never sat well with me. I watched my scientist housemate start an experiment by getting rid of the ‘noise.’ Yet I found that the noise, the outliers that blew away my 0.05 level of confidence, was where some of the most interesting information lay. I felt an almost tangible beauty in the patterns, especially ones that outliers helped foreground; surely they were part of the story.”
— Ellen Pader (p. 161) in Dvora Yanow and Peregrine Schwartz-Shea, eds., “Interpretation and Method: Empirical Research Methods and the Interpretive Turn.” Armonk, NY: M.E. Sharpe.
It seems to me that, rather than trying to answer questions when you don’t have the necessary data, perhaps you should ask different questions. Certainly, we all do the best with the data we can get, but it is never acceptable to draw conclusions that your data don’t support, and Regnerus’ data simply cannot answer the question he set out to ask. And when research questions the legitimacy of people’s families (my family among them), I demand higher research standards.
In Sociology Lens, a blogger known only as Amanda offers an insightful critique of a new study by sociologist Mark Regnerus, who has published a paper arguing that children of heterosexual parents are better off than children raised by lesbian and gay parents. I recently posted that academic research shows this is not true: studies find that children of LGBTQI families are slightly better off than children of heterosexual families with respect to aspiring to more progressive gender roles, and otherwise similar once class differences are taken into account.
Amanda notes that Regnerus’ research on gay and lesbian families has produced contradictory findings because of the study’s poorly conceived methodology. Simply put, Regnerus’ data-collection methods do not match his research questions, which makes them invalid. Regnerus classifies as homosexual anyone who has had a same-sex experience, without taking into account their subjective identities or family circumstances. He also fails to control for the fact that some children of gay and lesbian parents are being raised in single-parent households, which generally puts any child at an economic disadvantage compared with dual-parent households. Amanda argues that Regnerus’ findings are tinged with homophobia, possibly influenced by his ties to the Christian site that hosts his blog.
Sociologists are not above having their politics influence their research interests – that includes you, me and everyone else. We do not have to agree with one another; however, we are trained to make our assumptions explicit and to match our methods to our research questions. I know many sociologists whose studies go against my political and personal beliefs, and yet I can engage them in useful, challenging discussion because their data and methods warrant attention. Crappy science also warrants attention, but for all the wrong reasons. What a shame that Regnerus’ lax methodology will only fuel public fear and misunderstanding rather than contribute to empirically informed debate.
Last year, I read about anthropologist Jeremy Narby’s participant observation field research with the Ashaninca, an indigenous group living in the Peruvian Amazon. His research is detailed in the book The Cosmic Serpent: DNA and the Origins of Knowledge, as well as the follow-up, Intelligence in Nature. I’ve thought a lot about this research since. Narby’s research focuses on the way Western science constructs medical knowledge so as to exclude mystical experiences from Other cultures. Western medicine has come to adopt the Ashaninca’s knowledge of rare plants, as they have been proven to positively affect health. Nevertheless, Western scientists refuse to take into consideration how the Ashaninca gain this knowledge, because it is derived through drug-induced hallucinations. This is in spite of the fact that these hallucinations come from the same plant ecosystem that Western science is eager to plunder. How do we reconcile this knowledge divide? Narby argues that the Ashaninca’s understanding of plants and ‘alternative medicine’ must be understood in concert with their pathways to this knowledge. This includes the hallucinations, which are used to commune with nature.