In a lot of my research, both professionally and personally, I come across the use and abuse of the word “trust.” I’ll lay my cards on the table for those of you reading this and wondering what prompted this post, and to be clear, these are my personal opinions. Mostly, I’ve been mulling over a research question for years around human-machine teaming (HMT) and animal analogies. But that is putting the cart before the horse. Before I can fully answer this research question, or even attempt to formulate an answer to it, I tend to fall into many rabbit holes, learning as much as I can about various phenomena. In the HMT literature, one of the oft-cited solutions to getting human beings to work with machines (as teammates and not tools) is “trust.”
I’m no different in this respect. David Danks and I wrote about this back in 2017-2018. We described trust as a concept that comes in different forms (thick and thin). Our use of these terms isn’t new; it follows trends in sociology, political science, and even economics, all of which are undergirded in some ways by psychology. But even that paper, now almost 10 years old, isn’t satisfying for questions about trust. In fact, the older I get and the more I see “trust” offered as the solution to problems around not just HMT, but the use of autonomous systems (as tools), the use or abuse of artificial intelligence (AI) systems, AI-enabled autonomous systems, and any other potential technological artifact, the more I distrust the use of the word trust. I would like to maybe kick my younger self for using the word to begin with. **Also, nota bene: there are so many citations for some of this stuff that I am just not going to include them here, but contact me should you want some direction on citations.**
I’ve spoken publicly about my frustrations with the word, on different podcasts and such. And it isn’t just me. Many folks in my neck of the woods also tend to move away from “trust” and now speak about “justified confidence.” This move is better. It is better for a couple of reasons. One, it level-sets that we are talking about what reasons we would need (i.e., to be justified) to have some form or level of confidence in the behavior of something. Two, confidence tends to track with how we statistically measure things… patterns and such… through confidence intervals and testing regimes. But I digress.
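(Since I brought up confidence intervals, though, here is a minimal sketch of what I mean by the statistical sense of “confidence.” The system being tested, the numbers, and the 95% level are all made-up assumptions for illustration, not anything from a real study.)

```python
# A minimal sketch of "confidence" in the statistician's sense: given some
# observed test outcomes for a system, how wide is the interval around its
# observed success rate? All numbers here are hypothetical.
import math

successes, trials = 187, 200   # hypothetical test outcomes
p_hat = successes / trials     # observed success rate
z = 1.96                       # z-score for a ~95% confidence level
margin = z * math.sqrt(p_hat * (1 - p_hat) / trials)  # normal-approximation margin of error

print(f"observed rate: {p_hat:.3f}")
print(f"~95% confidence interval: ({p_hat - margin:.3f}, {p_hat + margin:.3f})")
```

The point being: confidence in this sense comes with a level and an interval attached, which is already more specific than what “trust” usually gets.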
Today, I wanted to talk about why “trust” is a suitcase term, and why suitcases, while good for packing your clothes and shoes and toiletries, are bad for formulating specific, useful, or nuanced approaches to everything else (policy, law, engineering, you name it). In fact, in much of my research, I find that most researchers don’t even understand the way in which they use the word (or concept) trust. Sure, they may provide a definition, but they don’t really understand it. I’m not going to throw anyone in particular under the bus here. Instead, I’ll put forward a couple of things that have been bothering me… as I attempt to take this diverse set of stuff (or, less gracefully, “a mess”) and make sense of it so that we can pursue better research, science, development, policy formation, legal regulation, and normative prescriptions. Big ask… I know.
But let’s just take a couple of items from the pile. When we use the word “trust,” it often functions as one of the following:
- property
- class
- subjective feeling/attitude
- object
- phenomenon (cultural or cognitive construct)
- action
- relationship
Likewise, academic research on the subject of “trust” also simultaneously treats trust as both the independent variable and the dependent variable in models of explanation. For example, trust is sometimes called a “latent variable” doing all the work… though I don’t think many people use that term correctly. On one side, “trust” as the “latent variable” is some somewhat stable set of beliefs, attitudes, cognitive states, character traits, etc. that then “causes” individuals (it’s almost always individuals, folks) to act in particular ways. On the other side, those actions take on a higher-order abstractness, akin more to qualities, and it is the presence or absence of these qualities that then causes an individual to “trust.” This isn’t a death knell. Figuring out causal direction is hard. But when folks do not seem to know that they are not measuring what they think they are measuring, or have completely misunderstood the causal arrows in their own thinking, that’s not so great.
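To make those two causal stories concrete, here is a toy sketch (my own illustration, not any published trust model; the variable names and coefficients are invented):

```python
# A toy illustration (not any published model) of the two causal stories about
# "trust" as a latent variable. Variable names and weights are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Story 1: trust as the latent *cause*. An unobserved disposition drives the
# behaviors we actually measure.
trust_disposition = rng.normal(size=n)
relies_on_advice = 0.8 * trust_disposition + rng.normal(scale=0.5, size=n)
shares_information = 0.6 * trust_disposition + rng.normal(scale=0.5, size=n)

# Story 2: trust as the *effect*. Observed qualities of the trusted party
# combine to produce a reported level of trust.
competence = rng.normal(size=n)
transparency = rng.normal(size=n)
reported_trust = 0.7 * competence + 0.5 * transparency + rng.normal(scale=0.5, size=n)

# Either story produces nicely correlated data, so correlations alone cannot
# tell you which arrow you have actually committed to.
print(np.corrcoef(relies_on_advice, shares_information)[0, 1])
print(np.corrcoef(competence, reported_trust)[0, 1])
```

Both stories generate respectable-looking correlations; the statistics alone won’t settle which way the arrow points, which is exactly why knowing what you think you are measuring matters.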
Then there are the usual caveats about the unit of analysis. Who is doing the trusting? Who (or what) is being trusted? There are of course many different units of analysis. We could be talking about human beings qua humans. In short, *every* single human carries these capabilities or capacities (barring the usual caveats on physical, psychological, or emotional “abnormalities.” Those are scare quotes for a reason). That approach would say that it is something more biological and physiological, something about our neurological structures, etc., that makes us this way. (Like, we are “hard-wired for cooperation.”) Or there are the individual-level arguments that say it isn’t about the biology and neuroscience but about the ways in which social ties, community, belief structures, lived experiences, etc. shape our beliefs about trust and trusting. Here we move from the “objective” side of science to the “subjective” side of feeling, belief, and emotion. Of course, there is also the most likely candidate: that it is both, and that this is a false dichotomy (neither nature nor nurture alone, but that’s messy for someone seeking parsimony).
Then we move from the unit of the individual to that of more than one. Oh no! We can have dyads… we can have triads… we can have small groups or big groups… we can have associations… we can have organizations… we can have societies… we can have states… and the list goes on. A lot of work on trust lives in this area. Organizational behavior is a big one, but so too are sociology, political science, economics, etc. We can even look inside the organization, back down to the individuals, and characterize the relationships between those individuals within the organization, say, in a hierarchy. Again, much ink has been spilt on superior and subordinate relationships and the way “trust” (or “distrust”) rears its head here. Here too we see literature on “teams.” Usually, however, the teams in question are all humans.
Which leads me to the next question in the unit of analysis, and one that is not without its own arguments and commitments: the metaphysical assumptions about agency. When I use this term, I mean the capacity to be held responsible for one’s (in)actions. This can be legal, where we have legal agency and legal responsibility or liability, or moral, where we have moral agency and moral responsibility. Adult humans are (often) the legal and moral agents we talk about. But there are other types of entities that aren’t human and have agency: corporations, states, etc. Less debatably, we say corporations can have agency (it’s why you can sue them, for example), and more debatably we say that there is a thing called “corporate moral agency.” I wrote about this a *long* time ago in my book on humanitarian intervention and the philosophy of Immanuel Kant. The debate is really over whether you think a collective is something different from the sum of its parts (like, the board of a company is still made up of people… humans… so does the moral agency still reside with them, or not?).
This set of questions isn’t usually at issue when it comes to “teaming.” For the most part, when we talk about a “team,” we infer that it is a group of humans doing something together, often voluntarily, in pursuit of some common goal or good. But… the notion of HMT, where machines are not “tools” (that is, objects) but now “teammates,” changes that metaphysical assumption. I’m not going to open that can of worms here, now. All we have to keep in mind is that the “team” is the unit of analysis, and we cannot ignore the question of agency within the team.
Ok, so we have thus far thought about:
- what trust is… (property, class, action, belief, etc.)
- whether trust is the cause or the effect of (insert your hobby horse here)
- who or what is doing/receiving the thing called trust, that is, the unit of analysis
- what (tangentially) the metaphysical or legal status of that entity is
Now, if you can visualize this… we have a bit of a mess, and it is almost impossible to tease out or test adequately. And yet, we continue to just throw up the word trust as the solution to some problem.
But what is the problem? That is a different question. For example, if my problem is that society no longer “trusts” its government, that seems pretty bad. Could lead to political violence, civil war, etc. Measuring social trust (well) is tricky though.
Or maybe my problem is that the performance of my organization is tanking, and I need to study the relationships between the constituent members of my organization. I look at “leadership” and “management” and then the ones who are managed or led. I can talk about culture, communication, transparency, etc. I can look at power relationships in hierarchies, as well as what role “respect” or “fairness” may play. Lots of things here where “trust” plays a role.
Or maybe I want to talk about whether I’ll use something. This is sort of where HMT comes into play, though, weirdly, the “use” argument seems out of place if we are talking about machines as “teammates” that are not “tools.” We are perfectly morally permitted to use tools, but using non-tools… i.e., not-things (like people)… is morally questionable at best. So we have an argument that we should use something, but also that it isn’t a thing to be used… it is very conceptually convoluted. Rather, it seems to be an argument about why I should form a relationship (working or otherwise) with this agent.
In any case, I digress again. Let’s just say it’s not HMT, and it is an argument about the use of things. OK, well, then we are back in the trust saddle… but why would I invoke trust to use a thing? Back into those tricky questions about the thingness of trust (property, class, belief, etc.) and the unit of analysis just to get me going, and then again, I need to figure out my causal arrows… then I need some variables to explain… oy vey.
What about reputation? That is usually a good one. Company A makes widget W. I can “trust” Company A based on a wide variety of factors. Some may say it is competence… they are good at what they do. Their stuff works. But others can point to silly things like “they are cool,” and so I want to be “in.” I want to be seen using things from Company A because it helps me fit in with folks I want to fit in with, or it signals to others that I am a particular way…
Or maybe Company A is not just competent but, when they mess something up, they are really good at taking responsibility for it. Their “reliability” ratings are backed with guarantees that they will make things right for you, should something go wrong. They fix their mistakes, come clean, etc. That ticks some boxes for me. But those behaviors are not about their ability to make good stuff… those are about other processes, rules, requirements, even laws.
Or maybe they also do really good stuff besides make the thing they want me to use. Maybe they cure diseases or donate all their profit. That is maybe important too. Or maybe as part of their competency they are really transparent about how they build and test their stuff. And other companies and institutions follow them or give them accolades (or recognition). I may not have the technical expertise to know how good they are (competent or able) but a whole bunch of other people who are competent say that it’s all good.
Yet what is doing the work here? Is it my biology, my social values, my early childhood experiences? Is it my belonging to a group of folks like me? Is it power? Recognition? Belonging? And why, oh why, should it all be just wrapped up in one word? Trust?
I could go on and on here because there are libraries full of this stuff. There are different disciplines that look at different aspects. But what seems to me to be the clearest thing about discussions of trust is that we should be highly skeptical of the term itself. I would wager that a well-scoped research question, clearly defined, operationalized with the most appropriate measurements we can take, and then appropriately caveated for all sorts of things… may yield some intriguing findings. But those findings aren’t going to be parsimonious enough for a bumper sticker. And remember, even justified confidence faces challenges… ’cause “con man” comes from “confidence.”