Ethics for the people, by the people
Can a more distributed ethics meaningfully complement or augment the technocracy? Might this help us better tackle the big problems of today? I strongly believe that the answer is yes.
Let me start with a simple statement that skips over a lot of historical context and nuance: ethics has traditionally been highly technocratic.
By this I mean that qualified ‘experts’ have pondered, explored, theorised and passed judgements about good and bad, right and wrong.
Whether or not this phenomenon is in itself good/bad or right/wrong is not the topic of this post. Today I am concerned with the ways in which an evolved orientation to ethics, one that is more representative, embodied and ‘real-life’, might complement the work of professionals across values theory, moral philosophy and myriad related and complementary areas.
To be clear, before you fall off your seat, I am not suggesting that there isn’t deep value in the professional ecosystem of participants, procedures and institutions that deals with these challenging subjects. Nor am I suggesting that thousands of years of work hasn’t massively, and in many cases positively, impacted humanity. What I am suggesting is that constraining ethics to this ecosystem alone may limit the richness of its potential to meet the present and emerging needs of a species amidst a meta/polycrisis.
What do I mean by ethics?
It’s important to clarify this. When I say ethics today, I refer more to the active process we execute to align our decisions to our purpose, values and principles.
There’s a ‘doing’ orientation.
By framing ethics this way, you might realise that this is something each of us does every single day (provided we are in a position to make ‘choices’; I recognise there are embedded assumptions here, and I will not unpack all of them today).
To some extent this already opens up the ivory tower to additional possibilities.
What am I proposing?
To answer this, let me first describe how a lot of this stuff works today.
Let’s say we find ourselves within a specific organisation. It could be public or private, across any vertical.
We are embarking on a new project, venture or initiative of some kind.
The initiative is driven by some motive (creating a competitive moat, roadblocking the competition, attempting to improve certain metrics that seem to increase shareholder confidence and thus shareholder value, actually creating value for citizens, pursuing a political agenda, etc. There is no shortage of motivational forces). Again, the details don’t matter right now.
We engage in a process to assess what we’d like to do, with whom, in what timeframe, within a given budget etc. There are various different ways in which this process might be defined, designed and operationalised.
At some point throughout the process (and let me be clear, often this doesn’t happen at all) there’s some type of consideration for what is good and right. Now, given that few people are trained in moral analysis, the work of defining values, weighting values, exploring tensions, forecasting potential intended and unintended consequences, etc. in ways that deliver recommendations about what to do (or not do) and how to consider doing it (often framed as risks of some description) is led by an expert, a committee or a board of some kind (key point: not embedded in the project team from the outset).
The process is outsourced (ethics is someone else’s job). Now, there are various ways in which this can be done very well (although there are seemingly far more examples of late where it is done quite poorly). There are a number of reasons why such an organisational function ought to sit ‘outside’ the operating and incentive structure of the project or initiative itself. These are issues for another time.
The recommendations are then brought back into the project. Some of them are acted upon. Others are not. Some are watered down or ‘interpreted’ (enough said).
The project keeps moving forward and makes its way out into the world. Sometimes this works well (in terms of consequences). Other times this doesn’t.
If this seems overly ambiguous, it may be because you haven’t been involved in many processes like this. If, even at this detail-free level, it makes sense, it’s likely you’ve been down in the trenches more than once. You know how this stuff works. You’re either rolling your eyes or giggling. I do a bit of both.
This whole process is guided by some (often implicit) assumptions, such as:
Doing ethics is a burden. It slows us down and constrains innovation
I’m not qualified to do ethics. Someone else will need to do it
Of course we’re ethical. We do the right thing as a basic result of who we are and what we stand for
We can always make changes later (Reality check! This is BS and almost never happens. Ethical Debt, like technical debt, is a HUGE issue within orgs today)
Etc.
Imagine a world for a moment where we altered some of the assumptions to something more like:
We have a deep ethical responsibility to the ecosystem within which we operate (I’d suggest that, given where we are today, this ought to be biospheric in scope; a topic I’ve written about before and will continue to explore)
To some extent, ethics is actually everyone's job
Doing ethics can make us better at what we do. It can help us responsibly innovate
The challenging ethical decisions we are trying to make can’t be made by us and us alone. We are not the arbiters of moral truth
The most grounded and considered ethical decisions are made when the people impacted by said decisions are involved in the process
Etc.
With a paradigm shift (shared mental model) like this, how might our project operate differently?
For starters, consequence scanning, defining our explicit and implicit values, weighting these values, and formally featuring these value weightings in our prioritisation and tradeoff exercises would become part of BAU (you can, and likely should, get down to really low-level detail here, where something like an expression of certain values or characteristics features in formal acceptance criteria).
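To make this a little more concrete, here’s a minimal sketch (in Python) of what formally featuring value weightings in a prioritisation or tradeoff exercise could look like. The value names, weights and per-option scores are illustrative assumptions, not a prescribed framework.

```python
# Illustrative only: the value names, weights and scores are assumptions.
VALUE_WEIGHTS = {
    "privacy": 0.4,
    "accessibility": 0.3,
    "environmental_impact": 0.3,
}

def weighted_value_score(option_scores: dict) -> float:
    """Combine per-value scores (e.g. 0-5 from a consequence-scanning workshop)
    into a single number that can sit alongside effort/impact in a tradeoff matrix."""
    return sum(VALUE_WEIGHTS[v] * option_scores.get(v, 0) for v in VALUE_WEIGHTS)

# Two competing backlog items scored against the same declared values
feature_a = {"privacy": 4, "accessibility": 5, "environmental_impact": 2}
feature_b = {"privacy": 2, "accessibility": 3, "environmental_impact": 5}

print(weighted_value_score(feature_a))  # 3.7
print(weighted_value_score(feature_b))  # 3.2
```

The point isn’t the arithmetic; it’s that the values the team claims to hold show up explicitly, and traceably, in the same artefacts used to decide what gets built.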
This could be done via a dual-track agile approach, with this type of deliberative work featuring in the Discovery workstream (just like UX research often does) that then informs a more confident delivery cycle (i.e. we have good reason to believe that what we are building is not just defensible, but preferable. It’s something that people fundamentally see as good and right).
We could build upon some of the work we do as a core, cross-functional team by designing a mixed-method research program that meaningfully engages a representative sample of potentially impacted parties in a process of experiencing (via a goal-oriented, unimpeded interaction with a prototype) what we are proposing to build. We add to that a process of contextual inquiry where we explore the ways in which these people interact with the values, the tensions etc. (I’ve published about Social Preferability Research in the past and will continue to do much more of this over the coming months. Stay tuned!).
We seek to come out of the process with some (proxy) behavioural data and rich attitudinal data. We use a specific Likert-scale Q&A approach to get a sense of how supportive these people are of the intentions behind our project and its likely consequences (this basically gives us a Social Preferability score).
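As a rough illustration, here’s how a set of Likert responses might be rolled up into a single Social Preferability score. The 1–5 scale, the averaging and the 0–100 normalisation are assumptions made for the sake of the sketch; the real instrument would be designed per study.

```python
from statistics import mean

def social_preferability_score(responses, scale_max=5):
    """Map 1..scale_max Likert responses (e.g. agreement with 'I support the
    intent behind this project and its likely consequences') onto 0-100."""
    if not responses:
        raise ValueError("no responses collected")
    return round((mean(responses) - 1) / (scale_max - 1) * 100, 1)

# Responses from a representative sample of potentially impacted people
print(social_preferability_score([5, 4, 4, 3, 5, 2, 4]))  # ~71.4
```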
We try to make sense of all of this and use the data, information and insights to inform how we approach moving forward.
All of this adds to the body of evidence that informs our ethical decision-making process.
Ideally, as I’ve published about before, all of this occurs within a broader system for ethical decision-making: one that features an ethics issues backlog, a key decision log and an ethics knowledgebase, along with various tools, tactics and processes (including Social Preferability research, which seeks to make ethical decision-making more inclusive and representative) that help inform the process of making ethical decisions and responsibly innovating.
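For a sense of what those artefacts might track, here’s a minimal sketch of the records behind an ethics issues backlog, a key decision log and a knowledgebase. The field names are assumptions about what each entry could capture, not a specification.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class EthicsIssue:
    title: str
    values_in_tension: list          # e.g. ["privacy", "personalisation"]
    potential_consequences: list
    status: str = "open"             # open / in-review / resolved

@dataclass
class KeyDecision:
    summary: str
    decided_on: date
    evidence: list                   # e.g. Social Preferability findings
    rationale: str
    revisit_by: Optional[date] = None  # a guard against 'ethical debt'

@dataclass
class KnowledgebaseEntry:
    topic: str
    lessons_learned: str
    related_decisions: list = field(default_factory=list)
```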
Here’s a video that highlights how this can work.
Now, this is super organisation focused. That’s because most of my work is about helping design these systems within large and complex organisations.
But there’s an argument that you could massively extend this so that the format for Social Preferability is driven by the sociotechnical infrastructure of a Living Lab. This type of infrastructure could help manifest something a little more like ‘Swarm Ethics’.
The process of moving towards this would augment the typically technocratic nature of ethics, combining professional moral analysis with a longitudinal, collective ethical intelligence. Together, I argue, something emerges that is greater than the sum of its parts.
I’m not the first, nor will I be the last, to suggest this. But it could well be applied to many of the AI ethics challenges we face today.
How will we move from where we are today to something that might be likened to what I’ve briefly proposed? I’m working on it. It’s an active process.
I’ll document what I’m doing and learning as I go.
For now, let me know what you think. If you’re keen to dive deeper into the nuance of compassionate and considered discourse, I’m ready.
Also, just another quick point from my experience: 90% of practical ethics is just recognising the dilemmas. Solving them can be much easier.
Yes, and added to all this, the idea that distributing ethics out into the world of those encountering dilemmas removes the middleman (the appeal to a centralised authority) and so speeds the whole process up.