Welcome back to Universe World. This month we travel to Switzerland, where we meet Mike Schäfer, professor of science communication at the University of Zurich. With research expertise ranging from media discourse on the human genome to the public perception of climate change, he recently started to investigate artificial intelligence (AI) and its role in science communication, especially in the case of generative AI, the (in)famous technology that can be prompted to generate text, images and a variety of other media.
As we always do with our guests, let’s start from your path: what’s your current role and what led you there?
I’m a professor of science communication at the University of Zurich, which means I do research and teach on science communication, especially on public science communication in legacy media and social media. Who communicates there, what role scientists and scientific institutions play, how public debates look and develop, where people get information about science from, and what effects it has on knowledge, attitudes, trust and behaviour – these are typical questions in my work. By training I am a communications scholar and sociologist. And I’ve been working on communication about biotechnology, astrophysics, climate change and recently AI for about 20 years now.
Having researched multiple arenas where science and the public meet, what would you say are the most pressing issues in science communication nowadays?
That’s a big question with many answers, depending on how you look at things. A pressing issue in my field is certainly the economic crisis of science journalism.
Journalism is in an economic crisis in general, with audiences shrinking, subscription numbers going down and large chunks of advertising revenue going to the big tech companies. And for specialised desks in media houses, like science desks – which did not even exist in many media houses to begin with – this often means fewer resources, less time for research, more precarious working conditions, even layoffs and the closure of entire science desks.
At a time when science is really important, when a lot of information about science is pushed into the public realm by many different stakeholders, not all reliable, and when we don’t really have other intermediaries that curate information at scale according to established quality criteria, that’s a problem.
But there are other pressing issues, of course: the lack of training in many institutions for scientists who want to communicate. A lack of social, psychological and legal support for those who do communicate and face negative backlash. A lack of evaluations. Sometimes also a lack of knowledge about research on science communication, which could provide an evidence base for practical activities.
You recently published an essay in the Journal of Science Communication titled “The Notorious GPT: science communication in the age of artificial intelligence” in which you reflect on the responses to generative AI in the science communication community. Since the rise of tools such as ChatGPT, the generative AI technology optimised for conversation with humans, there has been a wide range of reactions, from the rather optimistic to the very pessimistic. Should we, as science communicators, be excited or wary about the onset of these new tools?
First of all, we all have to accept that the technology is here and not going away, proposed moratoriums and planned regulations notwithstanding. I’m pretty sure this technology will stay with us. So we have to get used to it being around, and we should try it out and play around with it. Not only with ChatGPT, either, but also with other forms of generative AI, such as ChatGPT’s competitors – Google’s Bard or Anthropic’s Claude – or AI-based image creation tools like DALL·E, Midjourney or Stable Diffusion. And in doing so, we should try to get a realistic idea of the potentials and pitfalls, individually and also as a research community.
Often, when technologies develop or are rolled out in public, the amplitude of positive and negative takes is very large, often too large. Highly optimistic views are then contrasted with dystopian fears. Getting a realistic picture is important, because right now, the mid- and long-term implications for science communication are tough to predict.
Would you call it a gamechanger?
I would. I think that generative AI – which includes but goes beyond ChatGPT – is a gamechanger for science communication, both for its practice and for research on it.
On the one hand, this has to do with its sheer technological potential: it produces human-like responses to user queries in real time, at scale and, despite its undeniable problems, in impressive quality.
But social scientific research has shown over and over that such a pronounced technological potential is a necessary but not sufficient precondition for large societal impact. Technologies also have to be taken up and used by large parts of the public and across fields. And we’ve seen that already with ChatGPT. Within weeks, we had a hundred million people using it for research, in teaching, in marketing, for writing speeches, art and poetry, programming etc.
Accuracy – or the lack thereof – is one of the main concerns about the use of generative AI in science communication. In astronomy, there was a notable example just a few months ago with the debut of Google Bard, one of ChatGPT’s competitors. When prompted about the telescope that made the first direct image of an exoplanet, it answered that it was the James Webb Space Telescope, launched in 2021. That is wrong: the result dates back to 2004 and was obtained with the European Southern Observatory’s Very Large Telescope in Chile. Do you think we’re destined to, quoting words from your essay, “drown in a sea of approximated, mediocre information”?
I think this is a concern, yes. But I don’t think we are destined to drown. Again: the trajectory of technologies is always co-constructed by social and societal factors. Public concerns and complaints can drive the providers of these models to get better, and we’ve already seen considerable improvements from GPT-3 to the GPT-3.5 and GPT-4 models that Microsoft’s Bing search engine is using. So the models will get better. And scholars have developed tools like Perplexity.AI that combine GPT with Google Scholar to provide replies that are better grounded in relevant scholarship.
That said, there are real concerns here: one is that we don’t know much about the training data that fuels these commercial, proprietary models. They are largely black boxes. We also don’t know which biases the training data, and thus the models, might have, but we can be pretty sure there are considerable ones. At the same time, it will be difficult, if not impossible, for the scholarly community to produce open-source models that can really compete with the commercial ones – models that would be better for researchers and more transparent overall.
And a second concern is that while ChatGPT and similar tools are usually trained not to produce problematic content, clever users can already prompt them to do so. The models can even be hacked to remove the implemented inhibitions – “jailbreaking” is the technical term – and then used to produce “wrongness at scale”, as a colleague called it.
The fact that generative AI, as your colleague Eric Ulken puts it, “can get more wrong, faster — and with greater apparent certitude and less transparency — than any innovation in recent memory” is indeed concerning. What about other concerns, for example its impact on the job market?
Obviously, it is difficult to assess the mid- and long-term impact properly yet, but there are studies suggesting that generative AI could do between 15 and 50 percent of worker tasks faster at the same quality in the future. Particularly ones that have to do with the creation of more standardised content, such as writing, visualisation, illustration, but maybe also programming and other tasks. And when we look at the financial situation of scientific and higher education institutions in some countries or the economic crisis of journalism and science journalism in many countries, this is a reason for concern.
Given the concerns, is there, in your opinion, a way to embrace generative AI in science communication while tackling its challenges?
Oh, absolutely. There is undeniably a lot of positive potential here as well. Generative AI can be used creatively, although this creativity is, of course, a stochastic approximation of patterns in pre-existing training data. We’ve used it in a workshop, for example, to generate a project name, an acronym and a project logo.
It can also provide other content. It has a huge translational ability for summarising research publications and findings that can be used for texts, social media posts, posters, Wikipedia entries. Savvy users can already use generative AI tools to blend different modalities like text, imagery and sound together. Some have produced simple games without having programming skills.
Generative AI has other advantages, too. Users can communicate with it interactively and iteratively, asking again and again, or demanding more detail, a simpler language or illustrations and examples if they don’t understand things. Until they get the kind of answer they are satisfied with. At least in principle, generative AI enables dialogical science communication at scale, which has so far often been limited to small groups.
Besides AI, what research projects are you currently working on?
I am already working on AI, looking at future visions about AI in China, Germany and the US and analysing how these visions impact regulation. And I will deepen that focus in the future.
Apart from that, we are doing the Science Barometer Switzerland, a regular population survey about Swiss people’s attitudes towards science and research and their sources of information about it. We are doing research on institutional science communication by universities. And I am very interested in contrarian attitudes towards science, such as science-related populism and conspiracy theories.
And I try to reach out to society with my research as well, through media appearances, events, brochures, recommendations for stakeholders and social media.
What major differences – if any – do you see in the field of science communication across Europe?
The science communication landscapes actually differ quite a bit across European countries. This has to do with the respective understanding of science communication in each context, with differences in the position, size or funding of scientific institutions, with the strengths or weaknesses of the news media and of public service broadcasting, with national traditions of science communication and the resulting path dependencies, and with other factors.
How does a small country like Switzerland fit in the picture?
Switzerland itself is a rather special case. It has a number of excellent universities, large research centres like CERN, a well-resourced funding system that also supports science communication efforts, many foundations, and a relatively diverse media system with a fairly strong public broadcaster. This makes for a strong and diverse science communication ecosystem overall, with additional, pronounced differences between the three linguistic regions. I can’t really claim causality here, but I can at least say that this corresponds to the relatively positive attitudes of the Swiss population towards science, even though science-related populism and conspiracy theories exist in Switzerland as well.
What are the most exciting and most difficult parts in your job?
As I get older – and I am over 45 now! (smiles) – I find that the things I get excited about change. Nowadays, it is not so much the next paper or the new project, even though they can still be exciting and I am really excited to dig into research on generative AI. But what excites me more and more is seeing junior colleagues from my team go on and excel in the scientific community.
What’s most difficult? Clearly, saying no again and again to requests and offers from colleagues. And choosing the right ones to say no to. Figuring out where to say no, and where not to, will remain a lifelong challenge and learning process, I suspect.
Who are the scientists, thinkers and authors that inspired you the most along this journey? And finally, any favourite books you would like to recommend to our readers?
A lot of social theorists influenced my work – people like Pierre Bourdieu, Jürgen Gerhards, Jürgen Habermas, Karin Knorr-Cetina and others. Ulrike Felt, a professor of Science and Technology Studies (STS), got me interested in analysing science from a social science perspective when I studied in Vienna 20 years ago. But as a reading recommendation I would probably point away from science, at least a bit, and towards science fiction: “The Three-Body Problem” by Cixin Liu is awesome and blends STEM and social science in a really clever and engaging way. It was a real page-turner for me.