Volume 7, No. 1, Art. 22 – January 2006

Learning to Use and Assess Advice about Risk

Matt Twyman, Clare Harries & Nigel Harvey

Abstract: People often learn about the levels of risk associated with different activities through advice, and their use and assessment of such advice may depend on factors such as the identity of the advisor and the perceived quality of the advice. EARLE and CVETKOVICH (1999) demonstrated that explicit verbal estimates of trust in advisors correlate with perceived shared values between advisor and advisee. Here we apply that finding to a risk communication paradigm. EARLE and CVETKOVICH's findings were replicated in two experiments in which participants were given advice about a range of risky activities. However, declared trust in advice sources did not correlate with how much those sources were used in making risk judgments. Relative measures of the use and assessment of advisors were also found to bear different relationships to the accuracy of advice: use of advisors was not reflected in explicit verbal estimates of trust in those advisors.

Key words: risk communication, metacognition, implicit trust, advice

Table of Contents

1. The Basis of Trust in Advice

2. Implicit and Explicit Trust

3. Learning to Use and Assess Advice About Risk

Acknowledgement

References

Authors

Citation

 

1. The Basis of Trust in Advice

In order for an individual to learn about the risks associated with any particular activity, advisors must communicate information about those risks effectively, and trust in those advisors is essential for effective risk communication (e.g. BELLA 1987; SLOVIC 1993; CVETKOVICH & LÖFSTEDT 1999). Any advisor (such as a governmental agency or consumer advice organisation) must maintain trust in order to maintain its audience. Without trust, risk information can no longer be effectively communicated. For example, during the Bovine Spongiform Encephalopathy crisis the UK government gave poor advice that lost the public's trust: after the government had given assurances that it was safe to eat British beef, the beef was linked to human cases of variant Creutzfeldt-Jakob disease. The apparent lack of honesty or competence on the part of the government led many people subsequently to ignore its advice in other areas of importance, such as vaccinations for young children. [1]

When someone distrusts advice that they have been given, they may distrust the advisor's motivation, competence, or both. Social psychologists working on social dilemmas (e.g. DAWES 1980; BOHNET, FREY & HUCK 2001), often adopting a social constructionist perspective on trust, have investigated the circumstances under which we trust a person or organisation not to act deliberately against our interests. Judgment and decision-making researchers such as SLOVIC (1993, 1997) have also studied the factors that determine whether trust will be placed in advice, although such research places less emphasis on the advisor's motives. An advisor may be trusted or distrusted depending on factors other than motivation, such as the advisor's values or capacity to give accurate advice. [2]

EARLE and CVETKOVICH (1999) have argued that trust is assigned on the basis of the relationship between the values of the advisor (which can be an individual or organisation) and those of the receiver of the advice. An advisor who shares one's values will be trusted more than an advisor with dissimilar values. People's subjective assessments of value similarity are said by EARLE and CVETKOVICH to be based on "value-bearing narratives" produced by the advisor. "People tend to trust other people and institutions that 'tell stories' that interpret the world in the same way they do" (EARLE & CVETKOVICH 1999, pp.9-10). [3]

EARLE and CVETKOVICH carried out an experiment in which people read a simulated newspaper story about nuclear waste management by a US federal agency. Participants then answered a short questionnaire designed to measure how similar their values were to those of the agency, and rated the extent to which they would trust the agency. The correlation between the similarity-of-values index obtained from the questionnaire and the trust rating was 0.66; in a second, similar experiment it was 0.68. Together with various other studies (e.g. ARAD & CARNEVALE 1994; CLARY, SNYDER, RIDGE, MIENE & HAUGEN 1994), this work shows that people say that they trust others more when they judge those others' values and motives to be similar to their own. However, it is important to emphasise that it does not show that people actually trust advisors more when the advisors' values are similar to their own. In other words, EARLE and CVETKOVICH's findings only show what people say when faced with advisors who share their values, not what they actually do. In order to assess the possible discrepancy between people's stated and actual trust in advisors, we turn to research on metacognition and advice taking. [4]

2. Implicit and Explicit Trust

Comparison between what people say and what they actually do is a prevalent theme in metacognition research. Metacognition is thought about thought, often characterised in terms of the phenomenon of "self insight". Much of the debate regarding metacognition has centred on the question of whether humans have any more insight into their own mental states than they have into other people's mental states (e.g. DENNETT 1991; GOPNIK 1993; HARVEY, TWYMAN & HARRIES in preparation). [5]

Judgment and decision-making research concerned with metacognitive processes has generally focussed on the extent to which people are aware of the strategic "policies" that guide their decision-making behaviour. For example, HARRIES, EVANS, DENNIS and DEAN (1996) found that doctors believe their treatment decisions are affected by various factors that they do not, in fact, take into account when making those decisions. As a result, they say that they need information that they do not actually use (HARRIES, EVANS & DENNIS 2000). Such questions of self-insight are also central to areas of metacognitive research such as implicit learning and higher order thought theory. These provide a framework for thinking about the processes that could underlie people saying they trust an advisor when in fact they do not (or saying they have no trust in an advisor when their behaviour says otherwise). [6]

Higher order thought (HOT) theory (e.g. ARMSTRONG 1968; CARRUTHERS 1992, 1993, 1996; ROSENTHAL 1986, 1993) proposes that in order for a person to be consciously aware of the contents of a thought, they must have a second, non-conscious, higher order thought about that thought. In other words, the simple perceptual/representational state of seeing a red apple would not be enough to create a conscious visual experience of the apple; for that, one requires a further meta-representation along the lines of "I am seeing a red apple". The reason that one's subjective experience does not seem to be cluttered with a proliferation of thoughts-about-thoughts is that the second order thought is itself not conscious, and that third order thoughts (which would make it so) are relatively rare. [7]

In order to study situations in which people often say one thing and do another, researchers in the field of implicit learning have taken up the conceptual tools of HOT theory (e.g. DIENES & PERNER 1996, 1999, 2001). Implicit knowledge or processes are said to exist without being associated with concomitant explicit (conscious, or verbalisable) mental states. For example, most people are unable to state explicitly the rules by which they can tell a grammatical sentence in their native language from a non-grammatical one, yet they can demonstrate that knowledge by making the distinction in practice (it was the example of natural language that prompted REBER, 1967, to create the artificial grammar learning paradigm of implicit learning). [8]

The kinds of experimental tools that the field of implicit learning has developed to study the differences between what people say and what they do are related to measures of confidence calibration employed in the judgment and decision-making literature. For example, a person's confidence in their ability to perform some task might be measured on the same scale as their actual performance (such as a percentage scale, where 50% denotes chance performance when choosing between two options). That person's confidence is said to be well calibrated if, when scoring (say) 65% correct, they believe that their performance is at approximately 65% (or that any given answer is approximately 65% likely to be correct). [9]
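
As a concrete illustration of how such a calibration score might be computed, the short sketch below compares mean stated confidence with observed accuracy; the data and variable names are our own invention, not those of any particular study.

```python
# A minimal sketch of confidence calibration on a two-alternative task.
# Data and variable names are illustrative, not drawn from any cited study.

correct = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]                # 1 = correct choice, 0 = incorrect
confidence = [70, 50, 60, 80, 55, 65, 75, 50, 60, 70]   # stated confidence on a 50-100% scale

accuracy = 100 * sum(correct) / len(correct)             # observed percentage correct (70.0)
mean_confidence = sum(confidence) / len(confidence)      # mean stated confidence (63.5)

# A well-calibrated judge's mean confidence is close to their actual accuracy;
# a positive difference indicates overconfidence, a negative one underconfidence.
over_underconfidence = mean_confidence - accuracy

print(accuracy, mean_confidence, over_underconfidence)
```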

From such confidence and performance scores it is possible to construct measures of the implicit and explicit knowledge underlying task performance, according to specific criteria. For example, DIENES and BERRY (1997) have defined knowledge as implicit in cases where a person claims to have no task-relevant knowledge despite having demonstrated such knowledge through performance, or where there is no relationship between a person's confidence and accuracy. In such cases a person can be said to have no conscious insight into the knowledge underlying their performance (in other words, they lack metaknowledge about their own performance-relevant knowledge). [10]
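
The two criteria described by DIENES and BERRY (1997) can be illustrated with a few lines of analysis code. The sketch below, using invented data, checks whether performance exceeds chance on trials the person labels as guesses, and whether confidence and accuracy are related across trials; the particular correlation test is our own choice for illustration.

```python
# Sketch of the "guessing" and "zero-correlation" criteria for implicit knowledge,
# in the spirit of DIENES and BERRY (1997). The data here are invented.
import numpy as np
from scipy import stats

correct = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1])                 # 1 = correct response
confidence = np.array([50, 50, 50, 60, 70, 50, 80, 65, 50, 55, 75, 50])  # 50 = "pure guess"

# Guessing criterion: above-chance accuracy on trials the person labels as guesses
# suggests knowledge the person does not know they have.
guess_accuracy = correct[confidence == 50].mean()   # compare with the chance level of 0.5

# Zero-correlation criterion: no confidence-accuracy relationship suggests a lack of
# metaknowledge about which of one's own answers are correct.
r, p = stats.pointbiserialr(correct, confidence)

print(guess_accuracy, r, p)
```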

Using tools similar to those employed in the implicit learning literature, it is possible to dissociate "implicit" and "explicit" trust in an advisor who is attempting to communicate risk information. Any disparity between observed patterns of implicit and explicit trust (such as saying that you do not trust an advisor on the basis of their values or motives, but then following their advice because it is accurate and unbiased) would provide further understanding of the basis upon which advisors can most effectively influence people to actually use the information communicated (rather than simply saying they will). It would also open the way for the application of conceptual tools from other areas of investigation to the field of risk communication. [11]

3. Learning to Use and Assess Advice About Risk

In advice-taking experiments carried out by cognitive psychologists (e.g. ASHTON 1986; BUDESCU & RANTILLA 2000; HARVEY & FISCHER 1997; HARVEY, HARRIES & FISCHER 2000; HARRIES & HARVEY 2000; YANIV 1997), participants are provided with a range of estimates of some numerical value (e.g. sales forecasts, or the risk of death associated with various activities) by different advisors. After the participant has used these pieces of advice to make their own estimate, the true value is revealed to them and the process is repeated. People tend to use the median of their advisors' opinions as an initial estimate. Later, as they gain experience of their advisors, participants come to recognise that some advisors are more accurate than others and learn to take a weighted average of their advisors' opinions. However, although people's judgments improve with practice, they do not improve by as much as "rational" Bayesian statistical norms suggest they should. [12]
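
The two aggregation strategies described above can be pictured with a brief sketch. The advisors, their estimates, and the inverse-error weighting rule below are illustrative assumptions rather than a description of any particular participant's behaviour.

```python
# Illustrative sketch of two ways of combining advisors' estimates of a risk value:
# the median early in learning, and an accuracy-weighted average once advisors' track
# records are known. Numbers and the weighting rule are our own assumptions.
import statistics

advice = {"advisor_A": 120.0, "advisor_B": 150.0, "advisor_C": 400.0}   # e.g. deaths per million

# Early in learning: use the median of the advisors' opinions.
initial_estimate = statistics.median(advice.values())

# With experience: weight each advisor by past accuracy, here inversely by mean absolute error.
past_error = {"advisor_A": 20.0, "advisor_B": 60.0, "advisor_C": 200.0}
weights = {name: 1.0 / err for name, err in past_error.items()}
total_weight = sum(weights.values())
weighted_estimate = sum(weights[name] * advice[name] for name in advice) / total_weight

print(initial_estimate, weighted_estimate)
```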

Although placement of trust in advisors may not be a wholly rational process, the advice-taking research shows that it is empirically based: it rests on evidence of advisors' competence in communicating accurate and unbiased information. Such an empirical basis for the development of trust is not inconsistent with the findings of EARLE and CVETKOVICH (1999), who showed only that people claim to trust advisors whose values and motives are compatible with their own. The two conclusions can be reconciled by recognising that people may say one thing and do something else entirely. Such an apparent inconsistency is indeed expected by implicit learning theorists, who would interpret it in terms of a lack of metacognitive insight into the mental states that drive one's own decision-making behaviour. O'NEILL (2002) emphasised this distinction between stated (explicit) and actual (implicit) trust by drawing attention to people's tendency to claim that they no longer trust the vendors of certain products (such as supermarkets selling genetically modified foods) while continuing to buy goods from those vendors. [13]

TWYMAN, HARVEY and HARRIES (in preparation) conducted two advice-taking experiments in which participants were given advice about the risk of death associated with a range of activities, from two advice sources for each of four behavioural domains (occupation, transport, recreation, and drug taking). One of the two advice sources was a government agency (a different agency specific to each domain), and the other source was a consumer advice organisation. In each condition, one advisor gave more accurate advice about the risk of death than the other, but both advisors were unbiased in that their inaccuracies did not tend in any particular direction (i.e. they did not systematically under- or over-estimate the risk of death). As in the standard advice-taking paradigm described earlier, participants made their own estimates of the risk of death associated with each activity after seeing the estimates of both advisors. During the first (learning) phase of each experiment, participants were shown the actual risk value after making their own estimates; no such feedback was shown during the second (testing) phase. At the end of both experiments, all participants completed the similarity-of-values scales devised by EARLE and CVETKOVICH (1999). [14]

In the first experiment, the government advisor was either the "good" or the "bad" advisor. The advice from each source was derived by statistical perturbation of historical risk data, and the sample from which the poor advisor's information was drawn had greater variance than that of the better advisor. Relative measures of explicit and implicit trust in the government advisor were created, with explicit trust based on participants' statements of trust in the advisors, and implicit trust based on the relationship between the participants' judgments and the advice given by both sources. [15]
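
TWYMAN et al.'s exact procedure is not reproduced here, but the logic of such a relative implicit trust measure can be sketched as follows: advice from the two sources is simulated by perturbing true risk values with noise of different variances, and the weight each source receives in the judge's estimates is recovered by regression. All names and parameter values in the sketch are illustrative assumptions.

```python
# Illustrative sketch only: this is NOT the analysis of TWYMAN et al. Advice from a
# "good" and a "poor" source is simulated by perturbing true risk values with low- and
# high-variance noise, and the weight each source receives in a judge's estimates is
# recovered by regression as an index of implicit trust. All parameter values are invented.
import numpy as np

rng = np.random.default_rng(0)
true_risk = rng.uniform(1, 500, size=40)                 # hypothetical risks of death

good_advice = true_risk + rng.normal(0, 10, size=40)     # low-variance, unbiased perturbation
poor_advice = true_risk + rng.normal(0, 80, size=40)     # high-variance, unbiased perturbation

# Suppose the judge relies mostly, but not exclusively, on the better source.
judgments = 0.7 * good_advice + 0.3 * poor_advice + rng.normal(0, 5, size=40)

# Regress the judgments on the two advice streams; the relative size of the two
# regression weights indexes how much each source is actually used ("implicit" trust).
X = np.column_stack([good_advice, poor_advice, np.ones(40)])
coefs, *_ = np.linalg.lstsq(X, judgments, rcond=None)
relative_implicit_trust = coefs[0] / (coefs[0] + coefs[1])

print(coefs[:2], relative_implicit_trust)
```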

In both experiments, participants learned to use advice on the basis of an advisor's accuracy, whereas their stated trust in that advisor was based upon the perceived similarity of the advisor's values to their own. In all cases, explicit and implicit trust in advice sources were affected differently by changes in the quality of the advice given. EARLE and CVETKOVICH's (1999) findings were therefore replicated in the two experiments described by TWYMAN et al., but EARLE and CVETKOVICH's account of trust placement does not tell the whole story. As in the implicit learning research described previously, there is a difference between what people say and what they do, which suggests that they do not have direct access to the mental states underpinning their decision-making behaviour. [16]

Moreover, one's behaviour and one's metacognitive beliefs appear to respond differently to differences in the accuracy characteristics of advice, which in turn implies that they do not arise from the same wellspring. In other words, in some cases our metacognitive beliefs about our own mental states may be as inferential as the beliefs we have about other people's mental states, providing no "privileged access" to the causes of our behaviour. One important implication for effective risk communication is that if an advisor wants to know whether it has the trust of its audience, it should look at what people do, in addition to what they say. [17]

Acknowledgement

This work was supported by Economic and Social Research Council Grant R000230114.

References

Arad, Sharon & Carnevale, Peter J. (1994). Partisanship effects in judgments of fairness and trust in third parties in the Palestinian-Israeli conflict. Journal of Conflict Resolution, 38, 423-452.

Armstrong, David M. (1968). A materialist theory of mind. London: Routledge.

Ashton, Robert H. (1986). Combining the judgments of advisors: How many and which ones? Organizational Behavior and Human Decision Processes, 38, 405-414.

Bella, David A. (1987). Engineering and erosion of trust. Journal of Professional Issues in Engineering, 113, 117-129.

Bohnet, Iris, Frey, Bruno S. & Huck, Steffen (2001). More order with less law: On contract enforcement, trust, and crowding. American Political Science Review, 95, 131-144.

Budescu, David V. & Rantilla, Adrian K. (2000). Confidence in aggregation of expert opinions. Acta Psychologica, 104, 371-398.

Carruthers, Peter (1992). Consciousness and concepts. Proceedings of the Aristotelian Society, 67, 41-59.

Carruthers, Peter (1993). Language, thought, and consciousness. Unpublished manuscript, Department of Philosophy, University of Sheffield.

Carruthers, Peter (1996). Language, thought, and consciousness. Cambridge: Cambridge University Press.

Clary, E. Gil, Snyder, Mark, Ridge, Robert D., Miene, Peter K. & Haugen, Julie A. (1994). Matching messages to motives in persuasion: A functional approach to promoting volunteerism. Journal of Applied Social Psychology, 24, 129-149.

Cvetkovich, George, & Löfstedt, Ragnar (Eds.) (1999). Social trust and the management of risk. London: Earthscan.

Dawes, Robyn (1980). Social dilemmas. Annual Review of Psychology, 31, 169-193.

Dennett, Daniel (1991). Consciousness explained. Boston: Little, Brown, & Company.

Dienes, Zoltan & Berry, Diane (1997). Implicit learning: Below the subjective threshold. Psychonomic Bulletin and Review, 4(1), 3-23.

Dienes, Zoltan & Perner, Joseph (1996). Implicit knowledge in people and connectionist networks. In Geoffrey Underwood (Ed.), Implicit cognition (pp.227-255). Oxford: Oxford University Press.

Dienes, Zoltan & Perner, Joseph (1999). A theory of implicit and explicit knowledge. Behavioral and Brain Sciences, 22, 735-755.

Dienes, Zoltan & Perner, Joseph (2001). When knowledge is unconscious because of conscious knowledge and vice versa. Proceedings of the Twenty-third Annual Conference of the Cognitive Science Society, 1-4 August, Edinburgh, Scotland.

Earle, Tim C. & Cvetkovich, George (1999). Social trust and culture in risk management. In George Cvetkovich & Ragnar Löfstedt (Eds.), Social trust and the management of risk (pp.9-21). London: Earthscan.

Gopnik, Alison (1993). How we know our minds: The illusion of first-person knowledge of intentionality. Behavioral and Brain Sciences, 16, 1-14.

Harries, Clare & Harvey, Nigel (2000). Taking advice, using information and knowing what you are doing. Acta Psychologica, 104, 399-416.

Harries, Clare, Evans, Jonathan St. B.T. & Dennis, Ian (2000). Measuring doctors' self-insight into their treatment decisions. Applied Cognitive Psychology, 14, 455-477.

Harries, Clare, Evans, Jonathan St. B.T., Dennis, Ian & Dean, John (1996). A clinical judgment analysis of prescribing decisions in general practice. Le Travail Humain, 59, 87-111.

Harvey, Nigel & Fischer, Ilan (1997). Taking advice: Accepting help, improving judgment, and sharing responsibility. Organizational Behavior and Human Decision Processes, 70, 117-133.

Harvey, Nigel & Harries, Clare (1999). Using advice and assessing its usefulness. In Jay F. Nunamaker (Ed.), Collaboration Systems and Technology (CD-ROM). Piscataway, NJ: IEEE Publications.

Harvey, Nigel, Harries, Clare & Fischer, Ilan (2000). Using advice and assessing its quality. Organizational Behavior and Human Decision Processes, 85, 252-273.

Harvey, Nigel, Twyman, Matt & Harries, Clare (in preparation). Judging risk acceptability for self and others.

O'Neill, Onora (2002). A question of trust. Cambridge: Cambridge University Press.

Reber, Arthur S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal Behavior, 6, 855-863.

Rosenthal, David M. (1986). Two concepts of consciousness. Philosophical Studies, 49, 329-359.

Rosenthal, David M. (1993). Thinking that one thinks. In Martin Davies & Glyn W. Humphreys (Eds.), Consciousness: Psychological and philosophical essays (pp.197-223). Oxford: Blackwell.

Slovic, Paul (1993). Perceived risk, trust, and democracy. Risk Analysis, 13, 675-682.

Slovic, Paul (1997). Trust, emotion, sex, politics and science: Surveying the risk assessment battlefield. In Max Bazerman, David M. Messick, Ann E. Tenbrunsel & Kimberly A. Wade-Benzoni (Eds.), Environment, ethics and behaviour (pp.277-313). San Francisco: New Lexington Press.

Twyman, Matt, Harvey, Nigel & Harries, Clare (in preparation). A question of trust: Do we place it where we say we do?

Yaniv, Ilan (1997). Weighting and trimming: Heuristics for aggregating judgments under uncertainty. Organizational Behavior and Human Decision Processes, 69, 237-249.

Authors

Matt TWYMAN is a Research Fellow in the Department of Psychology at University College London. His research interests are in the application of theories of consciousness and metacognition to learning, judgment, and decision making paradigms (e.g. self-insight into trust placement during advice-based decision making). He is a co-organiser of the London Judgment and Decision Making group's seminar series.

Contact:

Matt Twyman

Department of Psychology
University College London
Gower Street
London WC1E 6BT
UK

E-mail: m.twyman@ucl.ac.uk

 

Clare HARRIES is a lecturer in the Department of Psychology at University College London and a Research Fellow of the ESRC Centre for Economic Learning and Social Evolution. She teaches applied decision making and risk communication. Her research interests are in the applied and theoretical aspects of judgment and decision making (medical decision making, metacognition and self-insight, judgmental forecasting and advice-based decision making). She is an active member of the London Judgment and Decision Making group's seminar series.

Contact:

Clare Harries

Department of Psychology
University College London
Gower Street
London WC1E 6BT
UK

E-mail: clare.harries@ucl.ac.uk

 

Nigel HARVEY is Professor of Judgment and Decision Research at University College London. He is a Research Fellow of the ESRC Centre for Economic Learning and Social Evolution. He is a past president of the European Association for Decision Making. With Derek KOEHLER, he co-edited the 2004 Blackwell Handbook of Judgment and Decision Making. His current research on trust and on judgment in the processing of evidence is funded by the ESRC and the Leverhulme Trust.

Contact:

Nigel Harvey

Department of Psychology
University College London
Gower Street
London WC1E 6BT
UK

E-mail: n.harvey@ucl.ac.uk

Citation

Twyman, Matt, Harries, Clare & Harvey, Nigel (2006). Learning to Use and Assess Advice about Risk [17 paragraphs]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 7(1), Art. 22, http://nbn-resolving.de/urn:nbn:de:0114-fqs0601220.

Forum Qualitative Sozialforschung / Forum: Qualitative Social Research (FQS)

ISSN 1438-5627

Creative Commons Attribution 4.0 International License