Here's what Responsible AI really needs
A brief musing and light proposal for your consideration
What follows is the first draft of a very early proposal for a Generative AI Ethics Lab. I trust it will encourage deep, as well as practically useful, thought.
Oh, and if you want to fund such an endeavour, hit me up!
According to the latest Australian Responsible AI Index, there remains a significant gap between the intentions and realities of Responsible AI programs, and this gap has stayed fairly consistent since the first version of the study was commissioned. What this demonstrates, and it is echoed globally, is that there is reasonable agreement on the ‘ethical principles’ that ought to ‘guide’ Responsible AI development and use. It also shows that organisations, for many reasons, are struggling to effectively interpret, implement, and operationalise those principles across the entire Responsible AI development and use lifecycle.
Many responses to this problem, often framed as the ethical intent-to-action gap or ‘values gap’, have been proposed. These include greater board awareness, AI literacy work, standards development, and framework implementations, to name a few. All of this and more seems both valuable and necessary if society is to benefit most from Responsible AI. Yet what is often missing from proposed responses to the intent-to-action gap is deeper consideration of actually ‘doing ethics’.
Merely referencing principles, or attempting to encode a principle into a model, is not ethics. Ethics is the deliberative process of reflecting on our first-order moral beliefs (ethical principles, for the sake of simplicity) in an attempt to do what is most good and right in a given situation. Arguably it is a cognitive process, where cognition is embodied, embedded, enacted, extended, emotional and exapted (6E CogSci). In this way, ‘doing ethics’ requires teams, organisations and society at large to explore what truly matters, define explicit and implicit values, examine how these values relate to and come into tension with one another, engage in diverse and inclusive dialogue, exploration and experimentation to work productively with tensions and tradeoffs, and then use this collective body of work to inform how a given initiative proceeds (or doesn’t).
My industry experience leads me to believe that such work is largely missing from many Responsible AI programs within organisations (and from organisational practice in general). In short, there is a significant risk that the real, challenging and confronting work of ‘doing ethics’ is skipped in favour of simpler approaches that lead to avoidable unintended consequences and further systemic ecological overshoot.
In response to this challenge, I propose the Generative AI Ethics Lab, a place for genuine transdisciplinary collaboration, for slowing down, for sitting with the most challenging of questions, for ‘showing the work’, and for collective learning that enhances our ethical intelligence and effectively translates into more robust, considered and normative Responsible AI development and use that benefits people and planet.
In short, I propose a safe place for diverse groups to explore not just how something should be done, but whether it should in fact be done in the first place.
The AI Ethics Lab will be a place for people to learn how to do ethics by actually doing ethics. By participating in the lab, people will:
Evolve their ability to do practical ethics: This is not just a process of learning about normative ethics. It is learning how to identify real issues, how a broad landscape of moral theories can help us relate to those issues (by positioning how different theories express what is good and right), how to hold competing truths and work with value tensions, and how to express the output of the process in such a way that an ethical decision can be communicated, critiqued, observed out in the real world, and then learned from through a self-reflective meta-process.
Learn critical dialogic skills: Ethics is a participatory process that suffers when participants attempt to debate, or ‘beat down’, one another. In the AI Ethics Lab, participants will instead learn how to listen to, sit with, respond to and explore value tensions through multi-directional exchanges that create a flow of meaning. Through this process, ethical decisions will benefit from diverse inputs, lived experience, cultural nuance and a broad landscape of moral theories that includes non-Western, post-modern and Indigenous approaches.
Participate in transdisciplinary collaboration: As in any discipline or practice, those with expertise have a unique role to play and are often best placed to contribute to a given situation. This is no different for ethicists and the process of doing ethics. That said, ethics in the real world is very different from textbooks and thought experiments. The AI Ethics Lab will reflect this by deliberately bringing together transdisciplinary groups to explore any given issue. In this way, the process of doing ethics will benefit from the skills, knowledge and experience of those who uniquely understand different aspects of the AI development and use lifecycle.
Mitigate the risk of real-world errors by first working within the AI Ethics Sandbox: As a way of testing the effects of the other collaborative activities, participants will bring their skills, experience and recent learnings into a safe yet ‘real-world feeling’ AI Ethics Sandbox. The sandbox will feature genuine examples of projects or initiatives with both distinct and subtle ethical tensions. Participants will then collaboratively self-govern through the process of surfacing issues, exploring tensions and tradeoffs, making decisions, attempting to implement those decisions, and monitoring the real-world effects of the process, its output and its outcomes (using computational models and simulations).
Taken together, the AI Ethics Lab could be likened to a Living Lab (“Living labs are relational infrastructure that enable people to work together to explore challenges, experiment, prototype and test concepts in real-world contexts. Living labs bring together researchers, industry partners, civil society and the general public to focus on societal challenges through co-creation and open innovation approaches”). Given the risks inherent in developing and using AI systems, this Living Lab feels like a critical piece of Australia’s public digital infrastructure that is currently missing. By funding this lab, there is an opportunity to develop real capacity, enhance public trust (because the process through which we develop such systems is ‘verifiably trustworthy’), demonstrate normative leadership, and benefit more handsomely from the innovation opportunities that AI might (recognising this remains an assumption, y’all) afford.
The AI Ethics Lab, as a Living Lab, consists of a philosophy, an ecology of practices, and technological infrastructure.
Philosophy: There are different kinds of knowing: propositional, procedural, perspectival and participatory. Together, these ways of knowing can help us more ‘optimally grip’ reality, supporting better decision making and more coordinated, normative action. Achieving this requires us to work together in safe and genuinely inclusive environments, so that we might explore our deepest assumptions, express our best ideas, and explore not just what is possible, but what is preferable. The Lab itself can support all of this so that we might test assumptions, observe effects, and learn through the process how AI systems might better serve humanity and the biospheric systems (and other living beings) we rely on for life.
Ecology of practices: For this philosophy to have practical utility, participants in the lab need to engage in diverse practices that support deep questioning, dialogue, sense-making, idea generation, hypothesis framing, and experiment design, amongst other things. This ecology of practices will constantly evolve, making it a ‘living system’ of sorts.
Technology: For all of this to work, the lab needs robust sociotechnical infrastructure that enables ideas and hypotheses to be explored as safely and effectively as possible. The technology supporting the lab must therefore enable robust, complex simulations of possibility, so that real-world impacts can be forecasted, learned from, and used to inform AI system development and use outside of the lab.
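To make ‘simulations of possibility’ slightly more concrete, here is a minimal, purely illustrative sketch in Python. This is not the Lab’s actual infrastructure: the scenario, the simulate_outcome function, and every parameter (error rate, oversight strength, the harm weighting) are hypothetical assumptions of mine. It simply shows the basic shape of the idea, forecasting a decision’s effects across many simulated futures before acting in the real world.

```python
# A toy Monte Carlo forecast of a proposed AI deployment under two hypothetical
# governance options. Every number here is made up, for illustration only.
import random

def simulate_outcome(error_rate, oversight_strength, affected_people, trials=5_000):
    """Return the share of simulated futures in which harms outweigh benefits."""
    harmful_futures = 0
    for _ in range(trials):
        # Each affected person is either served well or subject to an erroneous decision.
        errors = sum(random.random() < error_rate for _ in range(affected_people))
        # Stronger human oversight catches a larger share of errors before they cause harm.
        uncaught = errors * (1 - oversight_strength)
        benefits = affected_people - errors
        # Weight each uncaught harm 20x a routine benefit (a contestable value judgement).
        if uncaught * 20 > benefits:
            harmful_futures += 1
    return harmful_futures / trials

if __name__ == "__main__":
    for label, oversight in [("light-touch review", 0.2), ("strong human oversight", 0.8)]:
        risk = simulate_outcome(error_rate=0.05, oversight_strength=oversight,
                                affected_people=200)
        print(f"{label}: {risk:.1%} of simulated futures are net-harmful")
```

Even a toy like this does something useful: it forces a group to make its value judgements (such as how heavily a harm should weigh against a benefit) explicit and contestable, which is precisely the kind of ‘showing the work’ the Lab exists for. The Lab’s real models would be far richer, participatory, and continually revised.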
…
Alright, there’s my little pitch. Consider this an hors d'oeuvre. I hope you’re getting hungry!
With love as always.