
Expert judgment in climate science: the elephant in the room

by Mason Majszak and Julie Jebeile, University of Bern, Switzerland

As a climate scientist, you are occasionally asked to give expert judgments: when you contribute to IPCC reports, when you provide input to impact researchers or local decision makers, or when you are interviewed by a journalist to comment on climate issues. If you sometimes feel uncomfortable because you are aware that your judgments are subjective and for that reason diverge from others', we have a message for you: subjectivity in expert judgment is not only necessary but desirable, as it gives the expert an original scientific perspective. With this in mind, we also have a message for decision makers: striving for consensus as a means of attaining some sense of objectivity may not be crucial, while having a diversity of judgments, motivated by distinct perspectives, can be more useful. Here, we explore the ingredients of an expert judgment and show how each of these ingredients can either promote consensus or a diversity of views, as summarized in Figure 1. We focus on the production of individual judgments and will not discuss the production of collective judgments (attained, for example, through deliberation or aggregation methods such as voting).

Figure 1. The ingredients of expert judgment mold the scientific perspective. They are ordered
here from the most foundational scientific ingredient to the most subjective and individual one;
thus, the graph does not represent a timeline. Each ingredient either reduces the range of possible
perspectives or expands the diversity of views. The blue color refers to perspectives within the
same epistemic community and the black refers to perspectives belonging to different communities
(e.g. oceanography and atmospheric physics); in the latter case, theoretical and tacit knowledge
can differ and create divergent perspectives.

Imagine that the public (composed of citizens, stakeholders, and policy makers) wants to know whether the 2021 summer floods in Europe occurred due to climate change. If Expert A is interviewed, they might claim that the frequency of these specific events, calculated from the available ensemble of general circulation models, was high or at least non-zero. However, if Expert B is interviewed, they might instead point to a causal relationship, based on thermodynamics, and link social vulnerabilities with weather events that happened in the past. Certainly, two prominent experts. Two subjectivities. Two different (albeit not necessarily contradictory) expert judgments. More broadly, one can see that a range of topics related to climate change, from tipping points to geoengineering, is subject to contrasting judgments (see Hulme ed. 2019).

The reason why different judgments are equally valuable is that they follow distinct scientific perspectives. On the one hand, Expert A's scientific perspective promotes the construction of more complex, dynamical models with higher resolution (see e.g. Palmer 2014). On the other hand, Expert B's perspective is based on a storyline approach, which focuses on causal relationships and on intelligibility for efficient public communication (see e.g. Shepherd et al. 2018). Here, the scientific perspective taken by an individual orients the research program the expert pursues, while also driving the expert's judgments, as it carries a certain vision of how their scientific domain can and should be (see Majszak and Jebeile 2023). But how can two scientists, sharing the same (level of) education and being ingrained in the culture of their common discipline, have two different perspectives on a subject matter, and how can or should the public choose one perspective over another (and thereby one judgment over another)? To answer this question, let us examine the ingredients used to produce expert judgment and explain how they mold the scientific perspective through which an expert gains such valuable insights.

The common basis of expert judgment is theoretical knowledge which encapsulates the scientific domain at hand. It is generally gained by scientists through their formal education and is materialized in textbooks. A preeminent example would be the Navier-Stokes equations, used to describe fluid dynamics in the oceans and atmosphere. Theoretical knowledge is considered by the corresponding epistemic community (i.e. researchers working on similar topics within the same field) as the most solid and probably the most objective knowledge available – although biases can affect theoretical knowledge as documented in feminist history and philosophy of science (see Anderson 2020 for an overview of the field). Here, in cases where the same set of theoretical knowledge is used to produce the perspective, the resulting perspectives converge towards consensus. However, when experts belong to distinct epistemic communities, where individuals may not share the same theoretical knowledge, a divergence of perspectives can occur.

In order to contribute to fundamental research and produce expertise, tacit knowledge is indispensable. This is the know-how required to run a computer simulation, design a parameterization, or draw projections on a visual map. It is attained by scientists after immersion in, and continued work within, their specific epistemic community (Collins and Evans 2007). Like fluency in a natural language, this knowledge cannot be fully articulated and is attainable only through practice. It gives an impression of subjectivity, as it is part of the scientific culture at play but is not dictated by theory. As tacit knowledge is generally shared by all the members of an epistemic community, it tends to align scientific perspectives among those members. Similar to theoretical knowledge, when individuals come from distinct epistemic communities their resulting tacit knowledge will differ, in turn causing a divergence in scientific perspectives between these individuals.

Additionally, intuition allows scientists to overcome empirical uncertainty regarding the actual state of the world or situation. Intuition, being epistemically ampliative, helps scientists arrive at an idea they could not completely justify through written arguments. It is an internal process, occurring in the mind of an expert, and in that sense is highly subjective, as one cannot lay out a "logical method of having new ideas" (Popper 1959). Because it belongs to the individual and is incommunicable to others, intuition expands the range of scientific perspectives adopted by experts and results in potential divergence of views among peers.

Finally, values usually play a role in the production of expert judgment. They influence the preferences, interests and priorities of experts, and consequently the scientific perspective they choose to adopt (as is well documented in standpoint theory, see Harding 2015). For example, this might be the conviction that science always benefits from technological advancement, encouraging one to promote kilometer-scale modelling, or it might be a preference for intelligibility and communicability to the public, making one opt for causal narratives of climate events. Arguably these values guide distinct perspectives, one privileging the development of higher-resolution climate models, the other promoting storylines. Theoretical knowledge, tacit knowledge and intuition are grounded in scientific theories, know-how and experience, and are thereby generally seen as the most unproblematic ingredients of expert judgment, although each of them comes with some degree of subjectivity. By contrast, values are strongly subjective; they belong to the private sphere. Yet, as soon as they are made transparent, they can be managed and should not be seen as fundamentally problematic (Pulkkinen et al. 2022). Since values are typically not uniformly held across an epistemic community, they act toward further divergence in scientific perspectives.

As a result of these subjective ingredients there is a diversity of judgments. Either these judgments converge with each other (a situation not represented in Figure 1) or they differ. If they converge, we generally speak of consensus. Consensus among experts is often considered an indicator of trustworthiness, but it can be reached for bad reasons such as common biases (Miller 2013). Yet, when obtained following independent trajectories, convergence can be a sign of genuine and trustworthy consensus. If the judgments diverge, this should not systematically be interpreted as disagreement or conflict among experts. The judgments simply follow different perspectives, which are themselves motivated by the ingredients highlighted above. Thus, an event can be explained in terms of causes following the storyline approach, and this explanation can be complementary to an estimated probability of this event occurring based on the multi-model ensemble approach. In this way, a plurality of judgments can be particularly useful, as they provide different qualitative views on the same subject matter. Therefore, instead of demanding consensus on all topics, policymakers should learn to take advantage of the opportunity that a plurality of perspectives offers, and they should know how to make decisions based on this plurality.

There are limits, of course. Some social and psychological mechanisms may constrain attempts at producing novel insights in a domain. For instance, scientists might adjust their judgments to those of their colleagues and peers; this is called social conformity (Weatherall and O'Connor 2021). Furthermore, an epistemic community can be rather socially homogeneous, such that values may not diverge that much among its members. Therefore, promoting independence, social diversity and transparency is important for science itself and for the services it can offer to society.


Anderson, E. (2020). "Feminist Epistemology and Philosophy of Science". In E. N. Zalta (Ed.),
The Stanford Encyclopedia of Philosophy.

Collins, H., & Evans, R. (2007). Rethinking Expertise. The University of Chicago Press.

Harding, S. (2015). Objectivity and diversity: another logic of scientific research. The University
of Chicago Press.

Hulme, M. (Ed.). (2019). Contemporary Climate Change Debates. Routledge.

Majszak, M. M., & Jebeile, J. (2023). Expert judgment in climate science: How it is used and
how it can be justified. Studies in History and Philosophy of Science, 100, 32–38.

Miller, B. (2013). When is consensus knowledge based? Distinguishing shared knowledge from
mere agreement. Synthese, 190(7), 1293–1316.

Palmer, T. (2014). Climate forecasting: Build high-resolution global climate models. Nature,
515(7527), 338–339.

Popper, K. (1959). The Logic of Scientific Discovery. Hutchinson & Co.

Pulkkinen, K., Undorf, S., Bender, F., Wikman-Svahn, P., Doblas-Reyes, F., Flynn, C., Hegerl,
G. C., Jönsson, A., Leung, G.-K., Roussos, J., Shepherd, T. G., & Thompson, E. (2022).
The value of values in climate science. Nature Climate Change, 12, 4–6.

Shepherd, T. G., Boyd, E., Calel, R. A., Chapman, S. C., Dessai, S., Dima-West, I. M., Fowler,
H. J., James, R., Maraun, D., Martius, O., Senior, C. A., Sobel, A. H., Stainforth, D. A.,
Tett, S. F. B., Trenberth, K. E., van den Hurk, B. J. J. M., Watkins, N. W., Wilby, R. L.,
& Zenghelis, D. A. (2018). Storylines: an alternative approach to representing uncertainty
in physical aspects of climate change. Climatic Change, 151(3-4), 555–571.

Weatherall, J. O., & O’Connor, C. (2021). Conformity in scientific networks. Synthese, 198.
