Is trust 'good'?
Intuitively a lot of folks might think, "yes". Today I'll share why this is wrong, or at least oversimplified, and provide a more nuanced picture for you to consider and act on.
On many occasions, the media, politicians and business leaders have suggested that “trust is at an all time low” or that we “need more trust”. The somewhat implicit claim here seems to be that more trust is normative (basically ‘morally good’; or, more accurately, normativity being “the phenomenon in human societies of designating some actions or outcomes as good, desirable, or permissible, and others as bad, undesirable, or impermissible.” So trust is good / desirable).
I’ve already described what trust is, its basic mechanics and some of its limitations, relating to the relationship between cognition, automaticity, limits to knowledge, the (lack of) transferability from interpersonal trust to person-to-organisation trust, etc. (my last post on ‘trusting AI’ is a useful read).
In essence (a basic working definition that is useful, but does have limits):
Trust is the belief that Party A has in the trustworthiness of Party B, within some specific context.
I’ve then described what it means to be trustworthy. Or rather, I’ve shared a model that attempts to describe the characteristics that most prominently feature in the literature and seem to most directly impact the belief one party has in another’s trustworthiness.
We have also covered the spectrum of trust states.
With the basics out of the way, we can now ask: Is trust, the way we describe it here, with all of its known limitations, an intrinsically good thing? And therefore, are the claims these folks seem to be making valid?
To this we have to - based on the best available balance of evidence, including the literature that focuses on the downsides of misplaced or excessive trust - answer “no” or “it depends”.
A flat “no” likely falls short of useful. So let’s focus on a tad of nuance.
Trust tends to open people up to cooperation (it is sometimes thought of as the sociological ‘cause’ of cooperation). This cooperation often occurs in environments where there is plenty of uncertainty (and thus ‘risk’, given that risk really just represents the spectrum of uncertainty one faces) and often a level of vulnerability. Many scholars argue that without vulnerability, there is no need for trust; so for something to phenomenologically classify as trust, certain ‘conditions’ need to be met.
This belief (trust) that encourages cooperation can lead to anything from great benefits through to terrible harms, depending on many different factors.
What really matters here is whether the trust is intelligently placed / merited.
So it’s not more trust that we want. It’s a social, political and economic context in which trustworthiness is rewarded, whilst a lack of trustworthiness is penalised.
There are various ways in which this can play out interpersonally. But my work focuses on what organisations can do to be more worthy of trust, and through that, deliver net benefits to society.
So, if you’re reading this with your work hat on, there are two simple things to do (Mohahahahah! Dr. Evil laugh given the simplicity mostly lies in the suggestion. The actual work can be tough and take a while, particularly given system constraints):
1. Do the intentional work to be trustworthy.
2. Give the market - your customers, partners, investors etc. - good evidence (at the right time and in the right context) that you are worthy of said trust.
Now, this doesn’t necessarily overcome all of the epistemological limitations of trust in a person-to-organisation setting. But, at least in my experience, it can go a long way.
Organisations that do this create relational contexts within which it’s easier for others to gain access to good quality information, assess whether or not you (as an organisation) are worthy of trust in a given context, and act on that belief.
If this is done well, it’s likely that trust will be well placed more often than not. If this is done poorly, or if there are attempts to ‘trust hack’, it’s likely that trust will be poorly placed (because people see the signals, don’t have the time or capacity to fully verify, rely on said signals, establish a trust state that represents their belief, then act in ways informed by that belief). This opens people up to myriad downside risks.
So, is there a moral angle to trust? Probably not (although there is again some nuance, specifically in the context of trusting first, which has been studied in the literature and seemingly has a positive impact on relational dynamics. One could therefore argue, from various orientations, that a high propensity to trust can actually influence the trustworthiness of another party, and that this is therefore ‘good’. More on this another time). But there is certainly a moral angle to trustworthiness. Regardless of your normative stance - in this case I refer to normative theories such as deontology, virtue ethics and consequentialism - acting in alignment with the 7 qualities of trustworthiness I consistently highlight is likely a morally good thing.
This is something we can continue to explore over time. For now, let’s move away from the claim that more trust = good. It’s not that simple. Instead we need to focus on designing systems that incentivise trustworthiness, whilst also disincentivising a lack of trustworthiness.
Happy trustworthiness by design, folks.