These are my answers to some interview questions about AI at work for an upcoming whitepaper.
What do you think of the answers? How might you have answered?
How do you see AI impacting your role/industry in the next 3–5 years? I work across industries as an ethicist, which means I'm exploring the ways in which we can better design AI systems to align with diverse human values, deliver services that truly meet people's needs, and do all of this in a genuinely ecologically responsible way. As the systems develop, and as we deepen our relationship with them, this has a direct effect on the types of questions we ask, the evidence we can refer to, and the experiential lessons we can draw on. I expect, or rather, I hope, that we will develop a more nuanced relationship with both the strengths and weaknesses of these systems, so that we can more consistently deliver AI systems that improve our world in meaningful ways. Unfortunately, the last few years have been very noisy and quite messy.
What AI tools or technologies are you currently using (if any), and how confident are you in using them? Compared to many of the colleagues I work with, I use language models directly quite infrequently at the moment. This is a choice I've made based on various values and a series of working hypotheses. With that said, I am currently engaged in a body of work where my colleagues and I are using a language model to help with our dialogical reasoning. Importantly, however, we never rely on any output from the system without verification. We engage with humility, explore curiously, verify outputs and use the tool as part of a broader process.
Do you feel adequately supported to upskill in AI or digital tools? Why or why not? Yes and no. It really depends on what types of skills we are referring to. For basic usage, or a decent understanding of system limitations, there are a number of great resources out there. But there are also a lot of resources that are far less worthy of our time and attention. Very thankfully, I have a number of colleagues I can turn to, ask questions of and seek help from. As a result, when I feel I want or need to learn something new, I have a very favourable starting point.
What are the biggest challenges you’ve experienced (or anticipate) in adopting or adapting to AI? Our information ecologies are increasingly fractured, problematic, and in some cases outright dangerous. The current wave of AI systems might well settle into a rather healthy rhythm. But there's a chance it goes the other way. As a result, our capacity for measured, respectful, curious and ongoing discourse is critical. There is no reliable evidence that these systems are going to 'wake up' and 'take over the world'. Instead, we need to figure out how to design and use them in such a way that they actually help humanity solve important problems. We have to figure out how these tools might help us learn to cooperate better, while respecting our differences. The big challenges are ecological, social, political, economic and cultural far more than they are technical.
What skills do you think are most critical for success in an AI-enabled workplace? Dialogical and critical reasoning, moral analysis, ecological analysis, a technical understanding of a system's strengths and weaknesses (how to determine more or less valid use cases), a clear understanding of usage policies, respect for privacy, and robust security measures.
Have you (or your organisation) undertaken any AI-related training? What worked well or didn’t? I have not personally. With that said, in addition to practical work and collaborative social learning, I believe these types of experiences can be very helpful for people (https://thebullshitmachines.com/).
How do we balance technical skill development with ethical, governance, and human-centered design principles? I don't believe it's a balance of skill development versus these other areas. We need to develop more skill in these areas. Ethics is not something we know how to do innately. It's something we must practice over time. The same goes for governance, and the same goes for applying human- or ecology-centred design principles to the design, development, monitoring and refinement of AI systems. Overall, however, I would argue we need to balance our technological power with an orientation towards wisdom. We need to work to develop wise sociotechnical systems (https://trustworthy.substack.com/p/we-need-wise-sociotechnical-systems). In my opinion, this is the direction AI development should be heading.
With love as always.
I loved your answers. I would love to know your views on a question I had on AI and ecology.
I inboxed you the question so we can discuss it.