“Oh yeah, well, we used to do some of that (ethics). But now we’re just focused on this* (our approach to AI risk management and project governance).”
*Note, as I’ve harped on about for so long, that this orientation assumes an ‘ought’.
This comment is from a conversation I participated in recently about ‘Trustworthy AI’. I was asking about the ‘doing’ of ethics, and where that fit into the overall approach.
In this case it didn’t.
The reality is it usually doesn’t fit anywhere.
Ethicophobia is alive and very (un)well.
Overcoming the fear of doing ethics
Earlier in the year, I was invited to speak at a webinar for Academy Xi. This is the recording.
And this is not unique to the organisation I was speaking to. This is systemic. It’s the norm, not the exception.
Now, you may rebut: the process of doing ethics (an ongoing process, to be sure) is part of how we ‘do’ governance*. And that may well be true in certain instances. Typically, however, governance is driven by documented policies, procedures and (largely deterministic) processes that lack flexibility and are rarely exapted (all of which is, of course, bound up in a certain historicity, a series of ‘affordances’, and various macro ‘system failures’, such as the infamous multi-polar trap). The process of doing ethics, really doing ethics (remember, not simply referencing principles or completing a checklist), is uncertain, challenging, messy, imperfect and ‘uncontrollable’ in important ways.
*Governance can be rather diverse. If the way you live governance differs from how I’m describing it, thank you. We very likely need more of this. So please keep doing your thing and move on from my critical commentary.
Such a process requires us to actually ask big questions. It requires us to reflect on what truly matters. It requires us to consider multiple perspectives, hold said perspectives in productive tension, and justify thoughtful (working) tradeoffs with a dynamic balance of courage and humility, ideally working through the spectrum of wide-boundary consequences as thoroughly as is practically possible throughout the process.
We must then continue this process over time, because it is never simply done.
Now, here in Australia, the Government has taken a specific stance on AI Ethics (one which, to their credit, is based on Ethically Aligned Design, the huge body of work by hundreds of collaborators at IEEE; the principles, if taken seriously, are solid). As a result, a lot of orgs have (I’m sorry to put this so bluntly, but it needs to be said) all but stopped thinking critically.
It’s as if these principles (and various principles-informed workflows or practices) are the end. But they are not. They are the beginning.
You may conceive of a sociotechnical development process with something like Australia’s AI Ethics principles as your grounding reference (because you ‘have to’, or because, as the result of thoughtful deliberation, you agree that it’s a genuinely good starting point). But you then have to really do the work:
What type of organisation are we today?
Where have we come from?
What type of organisation do we aspire to be?
Who do we (actually) serve?
What unique value can we offer them?
Is this the type of value the world really needs?
Does the type of value we aspire to offer—in the context of our most important (real) constraints—require us to use AI?
What is our (meta)perspective on AI? How can we best factor that into how we move forward strategically?
What variables are we not considering (because we’re disincentivised from considering them, or because we don’t know how to consider them, or because certain perspectives don’t have a ‘seat at the table’, etc.) as we proceed through this process?
… I really could go on here for a while. I think you get the gist ;)
All of this before you really even get into what would usually be considered Applied AI Ethics.
This sounds like work…
Deep work. Considered work. Potentially messy, uncertain and incomplete work.
But guess what? That is the process of being alive and human and largely aware of (a perspectivally bound take on) the whole process.
We really need to step back and stop doing AI simply because we can (I honestly don’t even feel called to qualify the claim at this point, but… if you want me to, let’s get into it in the comments below), and start considering what should be done and why, which is where the philosophical organisation really comes to life.
Through this, we can navigate nuance, play with polarity (a little joie de vivre, anyone?), and actually put analytic intelligences to use in their most appropriate contexts, realising the depths and uniqueness of our humanity in the process.
If you know ANYONE interested in such work, please let me know.
With love as always.
P.S. Many of these short musings are intended to encourage reflection, critical thought and then invite dialogue. I am not writing to simply answer questions you may have. That is a process we must live together.
Also, if this isn’t obvious, the title is meant to be a tad evocative ;)
Nate, we've been talking. Fortunately, where I am, the company is very focused on our culture within the company, and on how we project ourselves outside. It is a very positive projection. Also, we wrote our Ethics with the customer and our People in mind, deliberately inserting challenging statements which, to my amazement, were accepted, and even challenged by our board as perhaps not being strong enough.
We also wrote them before LLMs suddenly caused this upsurge in attention to AI, which is a good place to be.
We are now re-writing those ethics in the tonality we wish to project to the wider world, but are also taking the opportunity to look at them afresh. Can we do more? Can we challenge ourselves more?
Ethics do not stand still. Once you ask yourself one question, it leads to more.
Whilst the company I work for is far from perfect, I hope that we can become a good example to the rest of our industry, at least.
Only yesterday, our Chief Data & AI Officer was speaking at an AI conference, and Trust (which is why we wrote our Ethics) is an important part of that presentation. How can you be trustworthy if you are not ethical?
The question about governance is interesting. For us, governance and ethics are peers: our ethics certainly influence governance, but are not part of it. The question of "is your product ethical?" is separate from "is your product well governed?". Either can be true, hopefully both... along with secure, respecting privacy, delivering value and providing agency over that digital relationship.
I'm rambling.
You are welcome.