Should OpenAI have social license to operate?
Probably not, and this debacle is just the latest in a series of unfortunate events
We all know the story. Altman gets fired. Shortly after, it’s announced he may return.
If you want a detailed overview of the timeline, you won’t find that reporting here. Instead, I’d like to discuss some of the bigger issues, namely whether or not OpenAI should be ‘afforded’ social license to operate (SLO).
Let’s quickly define terms using a basic framework I work with:
Ethics (occurs largely within the organisation): The deliberative process of reflecting on our first-order (moral) beliefs as we attempt to align our decisions, actions and their likely consequences with what is ‘good’ and ‘right’ (often described in relation to our purpose, values and principles).
Trustworthiness (occurs largely within the organisation): The qualities (sometimes referred to as trust antecedents in the literature) a given party exhibits, specifically benevolence (the intent to act in the public’s best interest), integrity (acting in alignment with promises, values and principles, whether the spotlight is on or not) and competence (consistently delivering on value promises in line with relevant expectations).
Trust (occurs largely outside of the organisation): The willingness of one party (a customer, let’s say) to be relationally vulnerable (to an organisation or business) based on positive expectations (the customer’s belief in the organisation’s trustworthiness, which is also shaped by the customer’s personality, disposition, cultural context, expectation framing and many other factors).
Reputation (occurs largely outside of the organisation): The opinion that people in general have about the organisation and its effect on the world.
Social license to operate (occurs largely outside of the organisation): The ongoing acceptance of a company or industry's standard business practices and operating procedures by its employees, stakeholders, and the general public.
I’m writing a much more detailed article for the RSA on all of the above, so stay tuned for that.
All boards care about SLO. One could even argue that it’s the key outcome boards are concerned with. It’s something like the end result of ‘good’ (not even gonna get into the nuance of this today) corporate governance (the process through which a typically small and selective group sets an organisation’s strategy, identifies and describes possible mitigations for various corporate risks, etc.).
I’m also not going to get into the ‘inherently’ problematic nature of the modern corporation in any useful detail today. I’ve done that before, and will keep doing it. Skipping it here is merely pragmatic.
Most boards, however, focus on the wrong things when they do this: trust and reputation, rather than ethics and trustworthiness. They cannot come close to controlling or directly influencing the former, but they can directly influence the latter.
I’ll argue, after more than a decade focused on this stuff, that this 180 is one of the key things boards can do to contribute to genuinely positive organisational transformation (something the world sorely needs given what we will simply refer to today as the metacrisis). It forces them to look hard in the mirror, consider where they are today, map this to some type of genuinely preferable future, then work together to describe how they might close the gap between the two states.
This process starts with the organisation’s value system, which is ‘operationalised’ through the living process of doing ethics. Doing ethics well enhances the organisation’s trustworthiness. The ways in which people interact with these organisational features of trustworthiness are very likely to positively influence trust states. Trust states are tightly correlated with reputation. Trust and reputation together are effectively the conditions for SLO.
Now, we know that OpenAI has had an odd(ish) history. They’ve jumped around all over the place, between different structures, different explicit value systems, etc.
I think the organisation’s benevolence is absolutely in question right now. Their integrity, well, it’s hard to even know where to start with that. And their competence… To their credit, they have delivered some significant technical contributions. But how this value delivery maps to value expectations, and how it relates to benevolence and integrity, is a different question (these qualities of trustworthiness exist in something like an n-dimensional hyperspace; it’s not some linear, dot-point list type thing).
I’ve written this in about 11 minutes whilst waiting for some rice and beluga lentils to cook. My hope is that it steers some of the discourse triggered by recent events in a more purposeful direction.
We should be questioning OpenAI’s benevolence. We should be questioning their integrity. And we should be holding them both responsible and accountable for their value promises.
Oh, and for all the boards out there, stop focusing on what might be acceptable. Let’s be real, this basically equates to what you can get away with. This is not what the world needs. The world needs courageous leadership. The world needs real connection, open collaboration and effective coordination. The world needs institutions that are explicitly orienting their every action towards human and planetary flourishing. The world needs social preferability, not social acceptance.
One last thing before I sign off: for all the folks thinking AI will solve ‘all of our problems’, let me introduce Nate Hagens’ latest. Enjoy!
If you smell what I’m cooking…