A Quora thread recently caught my eye. Titled “How do we restore trust in science?”, it once again conflated trust in science with the idea that all science is politically and economically motivated by “big pharma” companies and by politicians. I reproduce my answers to the original question, and my response to a follow-up comment, further below. I start by pulling apart the interconnected ideas of trust, funding, belief in science and political influences on science. The public should hold scientists, politicians and private industry accountable for Research and Development. This is an important discussion, but it often happens in a vacuum. Researchers address these demands in closed journals, and research ethics is part of our training. The reality of these issues, however, is not really as the public imagines it.
Question of trust
I wrote about trust in science a little while ago, noting that lack of trust in science is connected to the conflation of beliefs, values and attitudes. I summarise this further below, but essentially, the general public trusts science that resonates with their personal world view, especially when it validates their social status. Conversely, people are more likely to distrust scientific studies when the evidence undermines their social position and their sense of belonging.
The question on Quora was this:
How do we know who to believe anymore? Medical students are given lunch by pharmaceutical companies. Scientists are bought by industry. I rue the day that politics and factionalism invaded the objective analysis and conclusions which Madame Marie Curie, Jonas Salk and their ilk provided in their unquenchable desire to learn more and do more.
This question was inspired by the political brouhaha in another thread: “Have just read a stat that says only 35% of Iowa Republicans believe in science — either evolution or climate change. How do we restore the belief in science among Americans?”
So here we see a common misconception: the public framing of science disbelief against the backdrop of (undefined) politics. A common logic is employed: scientists’ unethical conduct is stated as fact, but without any evidence. “Medical students are given lunch by pharmaceutical companies.” Really? How many “medical students,” where and how often? Which companies do this? What proportion of each group does this relate to? What proportion of scientists do “medical students” represent? “Scientists are bought by industry.” Again – evidence is lacking. What does this phrase mean? It evokes an image of corruption, as if private companies buy and sell individuals, or fund their training or their research. I’ll soon show that this is a problematic assumption.
The statement also harks back to some imagined golden era of science: “Madame Marie Curie, Jonas Salk and their ilk provided in their unquenchable desire to learn more and do more.” By calling on famous, revered names, there is a romantic appeal to an uncorrupted, pure state of science at a time before funding corrupted it all. In fact, science is expensive, time-consuming and resource intensive. Training the average scientist involves decades of education, which has never been free and equally accessible to all. Further education accumulates various expenses that need to be managed. Conducting studies costs time and money, from experiments to building prototypes to designing models to carrying out surveys to literature reviews to ethnographies. Our equipment is expensive, whether it’s in a lab or even software used in private practice. Scientists need to eat. Even the famed Curie had to feed herself and her family; she needed to have her time compensated for somehow. Publishing and presenting our findings is another expense, even in the electronic age. Science has always needed benefactors, whether they come in the guise of public funding or philanthropy. The image of the righteous scientist toiling away in a lab, presumably in a white coat, pristinely cocooning themselves from the evil influences of the world is an anachronism.
Moving to the next premise in the Quora post, this person has taken one piece of research and transposed its meaning: “only 35% of Iowa Republicans believe in science.” This statistic actually equates the political ideology of one group (Republicans) in one location (Iowa, USA) with lack of belief in specific science issues. Then they extrapolate this to a lack of belief amongst all Americans. The inference is that, because the author – this one particular individual – distrusts science, everyone must distrust science. That’s not true. As I’ve previously shown, trust and distrust vary according to socio-economic status, especially correlating with education and affluence.
Actually, the statistic (“35% of Republicans”), as it is quoted, is worrisome and needs to be addressed through stronger public education. There’s no question about this. But it also tells us that the majority of Iowa Republicans – 65% – do not share this view. This is selective reporting. It takes a minority statistic, out of context, and presents it as evidence of a broad phenomenon. It ignores the evidence that this is a minority view that, yes, must be addressed, but which is not dominant amongst this sub-group.
The question of “belief in science” is also twisted. Two measures, belief in evolution and belief in climate change, are taken as a proxy for belief in science. That’s not the whole picture. In fact, climate change sceptics and creationists often appeal to science in other fields – but they too twist the evidence. They cherry-pick evidence to suit their political aims. This is a tactic of politicians, and unfortunately, it is the same thing we see amongst sub-sections of the public who support conspiracy theories about science. This Quora thread is an example of the selective use of science to suit a personal belief and political ideology.
The motivations are different. Politicians have institutional power to abuse science evidence to serve their personal and political views. Where political parties have dominant power, they can use their beliefs about science to change legislation. They can attempt to undermine legal protection for the reproductive rights of women. They can twist data on the educational outcomes of Indigenous youth in the guise of “protection,” a paternalistic approach that perpetuates colonialist practices. Heck, they can even excise entire islands from Australia’s geographic region to negate protection of asylum seekers.
Individuals do not have this legal power, but they do have power in numbers. When lots of individuals twist science to feed their politics, this perpetuates a poor understanding of science. The misuse and abuse of science evidence does not speak to the way in which scientists conduct their research or how Government funding influences the interests of scientists.
I also note that not all politicians abuse science to deny people access to education and human rights. Some politicians lead the defence of science funding. Politics is obviously about serving specific party agendas, but I’ve shown previously there’s a difference between doing this for the civic rights of all, and doing this to the detriment of social progress.
My previously cited article shows that scientists are sometimes influenced by their personal beliefs when it comes to specific science topics. In another example, I’ve discussed how a neurobiologist decided he could use his blog on the prestigious Scientific American to refute a social science study on the social causes of diabetes. As I showed, personal opinion alone is not enough to argue against scientific evidence.
The issues need to be teased apart: the operations of research funding, the questions scientists ask, whether or not we are using valid and reliable methods to measure our key concepts, and how we report our findings – these are all separate issues.
Funding does shape research, to the extent that it affects which projects get off the ground. The closer a project is to national research priorities, the more likely it is to be funded. Seniority and prestige, such as the reputation of the lead investigators and their institutions, also affect research funding. One study found that in 2012, Australian scientists spent the equivalent of 550 years writing grant applications. Only one fifth of those projects were funded. What this shows is that it takes a lot of effort for a study to be funded, and the majority of this effort is unsuccessful.
This presents many problems for scientists, especially in an era when research funding is being squeezed. In Australia, our current Government has an anti-science agenda, having demolished the Climate Change Commission and the position of Science Minister. Research funding is under threat in Australia, as it is elsewhere.
Again – let’s separate the politics and the funding process from the practice of science, before we examine their relationship.
I’ve previously shown that a significant minority of Australians and Americans have a poor understanding of basic science concepts. Yet I’ve also noted we need to be careful how we measure belief in science, trust in scientists, practical knowledge of science and public engagement with science. These are all separate, albeit often interconnected issues. The relationship of politics and private industry funding of science is another issue again.
Below are my answers to the Quora thread.
Misunderstanding trust
The idea that scientists should be inherently mistrusted because they’re “bought” by industry or swayed by big pharma is misguided. Research shows that while the public now trusts scientists less than they did in the 1960s, scientists are still more trusted than politicians, journalists and other public institutions. I recently wrote about this topic on the LSE Impact Blog, where I showed that the evidence on waning trust in science points to people’s personal beliefs, values and attitudes.
Research shows that people who mistrust science will spend a lot of time and energy refuting science findings they don’t like, especially when this threatens or unsettles their personal world-view. People will not argue against science that they think supports their place in the world. Neither view is especially helpful. Science isn’t about proving or disproving subjective ideals. It’s about answering questions that will improve the world we live in.
Rather than giving in to personal biases, we need to assess research findings through critical thinking. This means weighing up the evidence, being aware of our personal biases, and being flexible enough to change our position if the evidence is valid and reliable. For example, we need to ask:
- Do the research questions and methods match the aims and hypotheses?
- Does this make sense in light of other studies? What are the limitations of the sample and methodology?
- Are my own biases on this topic affecting my reading of the analysis?
Without proper science training, it’s hard to answer these questions, but what we can do is to weigh up what other experts say about new research.
Much of scientific writing is hard for the public to access because it’s hidden behind a pay-to-read system, and it’s written in specialist language (jargon) that the lay public can’t read. This is why we need more scientists writing directly for the public, using plain language, rather than relying on the media to pick up press releases.
Social media is extending the outreach of scientists to the public. If you see a science story being reported by the media, try reaching out to scientists directly! There are various robust science communities out there, including Science on Google+ which I help to moderate. You’d be surprised how approachable scientists can be once you connect with them!
Unravelling myth
Another person replied to my above comment, again drawing on the conspiracy theory that big pharma and politicians tell scientists what to say. The comment is:
Just because scientists are more trusted than big pharm or politicians does not make the fact that scientists are mistrusted because of big pharm and politics any less valid.
I do agree however that science communication is a realm that needs to be developed. And scientific communicators are on the rise, albeit slowly. Unfortunately most scientists are lacking in public communication, which causes a need for communicators.
Here is my response:
Thanks for your comment. The idea that big pharma and politics are running research is not borne out by evidence. It’s an urban myth that needs to stop. Different science fields have different levels of investment by private industry – clinical trials and manufacturing are the key areas where private industry invests the most funding, but in terms of research output, this does not represent the majority of science and research activity.
The majority of research that scientists work on is “pure research”: developing new ideas, testing theories and conducting empirical research to grow new fields of study (see this USA overview). “Applied research” funding covers the work done to develop patents, machinery, medicines and other technologies. This funding is more costly because products are being developed and produced. Still, we need more “pure” research to be funded. Only 20% of research projects get funded.
Most research is funded by Government. This report from Australia describes the system. Scientists declare funding and conflicts of interest on all publications and in their annual reports, which are publicly available. So the cloak-and-dagger idea that some people have about scientists bending to industry or political interests simply reflects a lack of understanding of how funding works and how research is conducted.
In fact, research funding from both Government and industry has declined. This is especially the case in the UK.
As for improving scientific communication – absolutely, this needs to be part of all science training, especially at the postgraduate level! It’s hard for scientists at the moment because there are few incentives to do public outreach. Outreach takes time away from other work, and there is no recognition for it. But as more scientists engage, hopefully this will change soon!
Moving forward
What are we to do about the conflation of trust in scientists, politics, funding structures, belief in science and the perceived motives of scientists? It’s obvious that we’re talking in circles. Sections of the public uncritically buy into conspiracy theories that all scientists are being wined and dined by private industry, and that industry collaboration is always a bad thing. Some scientific disciplines see themselves as above such rebukes. Sociology is a stand-out here. As you can see in the comments on my LSE article, and in my responses, social scientists are quick to denounce the natural sciences for being untrustworthy. We do this as if our own practices were above such critiques.
Much of sociology is set up as a critique of the abuse of scientific methods, such as the unethical treatment of minorities and the mentally ill, and the subjugation of women. We are also profoundly influenced by the philosophy of science. We start our training by grappling with questions such as: What is the nature of knowledge? How does culture shape what we know and how we learn it? How does history constrain how we come to understand what is “true”?
Our contribution to this field is important, because we call attention to the link between historical practices of science and how they reverberate in the present day. We also question the role of medical practitioners in inadvertently perpetuating harm against vulnerable populations, such as refugees, ethnic minorities and disadvantaged groups. I plan to publish a series exploring this later this year.
Nevertheless, despite our contribution, the social sciences are not a monolith. There is diversity within each of the social sciences, as there is in all other disciplines. Postcolonialism, Queer Studies and Intersectionality Theory all arose as critiques of mainstream social science, which is dominated by – and overwhelmingly benefits – White academics. Jessie Daniels, a White American feminist, has been exploring the damage caused by White feminism, which has historically been, and continues to be, wilfully ignorant of the problems and activism of women of colour.
Here lies the problem. We want to reduce a complex set of ideas – belief, practices and motivations of science – into a neat package. We want to be able to denounce or defend “science” with respect to our personal position (sceptic, scientist, sociologist, feminist, Greens supporter, and so on). This is a political positioning in and of itself. In order to move forward, we first need a stronger understanding of how personal and political beliefs shape our ideas of what “science” is and what constitutes good/bad/immoral/ethical scientific practice.
Second, we also need to separate myth from fact – how does scientific funding actually work? What is the role of Government? How does industry funding actually work? This needs to be better communicated to the public.
Third, we need to have a productive conversation about ethical issues in science: how do scientists understand their ethical obligations? How are we trained in different disciplines? Does our training adequately reflect the real-life demands of doing science? As I mentioned, I will take this up in a later post, but essentially, scientists are professionals for a reason. We commit to a long period of education and training. We back up our research with evidence. This evidence can – and should – be critically examined by different audiences, but such critique is only warranted where it comes from an informed perspective. Statements about scientists being “in the pockets” of big pharma couldn’t be further from the reality of how most science happens.
I started this analysis by saying the public has a reasonable expectation to discuss scientific funding and trust, but that this requires proper dialogue. Too often these discussions happen in isolation, or they descend into mud-slinging, which we see often in Science on Google+. Our Community has around 252,000 members and, in addition, our page is followed by 279,000 people. We attract our fair share of conspiracy theorists who insist that all science is under the thumb of companies, but who are unwilling to listen to the evidence. They argue from a place of emotion and personal belief, but they do not turn that critical gaze on their own beliefs. Their mistrust is misplaced.
So how do we move forward? I often write about the need for stronger public outreach by scientists. But the question remains, what is the best forum for this public engagement? Publishing in scientific journals is tantamount to simply talking to one another. This results in limited cross-disciplinary dialogue, so it’s hard for scientists to work collaboratively. Publishing in mainstream media has a long history in science, but the media still act as gatekeepers. Some collective efforts, such as publishing in The Conversation, have found a strong audience, but their readers are highly educated and relatively wealthy. Academics and researchers write in blogs but these are usually for specialist audiences. My blog is an example: I use it to write about sociology that matters to me and most of my followers are people interested in sociology.
Collective, interdisciplinary efforts are key. Communities like Science on Google+ are also limited to interest groups (people already interested in science), but because they cater to a broad range of disciplines, they create a bigger platform for people to learn about different scientific perspectives. Yet as I’ve mentioned, there are certain hot topics that are almost guaranteed to descend into chaos. Anything related to research funding is one of those topics. So how do we do this outreach better?
I see that we need to go back to basics. We can’t make non-experts into scientists. Despite the enthusiasm for citizen science, which I support in principle, we cannot turn the lay public into science experts. Being a science expert requires formal training and qualifications; nothing can substitute for in-depth, close supervision by experts. But we can increase scientific literacy.
I worked in an interdisciplinary team. We initially faced many teething problems, all of which were related to the basics of research. We could not turn a physicist into a sociologist, and we couldn’t turn a psychologist into a computer modeller. But we could increase awareness about the basic thinking that informs our practice. The process was slow and painful. It meant carefully explaining key concepts, regular discussion of our theories and methods, and most of all creating a safe space to ask questions and debate ideas. That was the hardest. All of us, without exception, wanted to retreat into our own disciplines. It was easier for the social scientists to collaborate (sociologists with psychologists for example) but it was really hard for the social scientists to work with the natural scientists. We had to work at it constantly. It made me a stronger sociologist as a result.
Getting 20 scientists to collaborate is hard enough; getting a 242K Community to have one productive discussion on science funding is harder still, let alone engaging a broader section of society to better understand how trust in science is influenced by social factors. The challenge we face today is nothing new. Philosophers have long debated the nature of truth and the methods for knowing it. Intellectuals in different countries have grappled with the best ways to improve scientific engagement. The state of public knowledge has long been the subject of critique and analysis.
The best way to address public trust in science is for scientists to lead and continue participating in the discussion. The goal is not to make us experts in all the sciences, or even in one discipline. The goal is to learn to ask better questions, and to learn enough of the basics of scientific methods to have useful discussions.
An earlier version of this post was first published on Quora.