Nate, we've been talking. Fortunately, where I am, the company is very focused on our culture, both within the company and in how we project ourselves outside. It is a very positive projection. Also, we wrote our Ethics with the customer and our People in mind, deliberately inserting challenging statements which, to my amazement, were accepted and even challenged by our board as perhaps not being strong enough.
We also wrote them before LLMs suddenly caused this upsurge in attention to AI, which is a good place to be.
We are now re-writing those ethics in the tonality we wish to project to the wider world, but are also taking the opportunity to look at them afresh. Can we do more? Can we challenge ourselves more?
Ethics do not stand still. Once you ask yourself one question, it leads to more.
Whilst the company I work for is far from perfect, I hope that we can become a good example to the rest of our industry, at least.
Only yesterday, our Chief Data & AI Officer was speaking at an AI conference, and Trust (the reason we wrote our Ethics in the first place) was an important part of that presentation. How can you be trustworthy if you are not ethical?
The question about governance is interesting. For us, governance and ethics are peers. They certainly influence governance but are not part of it. The question of "is your product ethical?" is separate from "is your product well governed?". Either can be true, hopefully both... along with being secure, respecting privacy, delivering value and providing agency over that digital relationship.
I'm rambling.
Ramble away :)
And yes, I've seen what you're attesting to and was, as we discussed on many occasions, VERY pleasantly surprised.
Here's to hoping we see more of this multi-level / systemic (org wide) commitment to the process of doing ethics (again and again and again and again!).
You are welcome
I loved the challenge you posed of asking "should we?" before "can we?"
Scaling this mindset shift will require resisting systems that prioritize speed over deliberation, which is no small feat, but essential work.
Overall, a very enlightening article; thank you for sharing.
My pleasure. Thank you for your continued support and dialogue.
These are great questions, Nate, and the same types of questions I often recommend organisations do the work to answer before they seek to partner with other organisations. I find that the ones who have laid this kind of groundwork are able to come to the table more effectively in partnerships and collaborations. I guess it's no surprise that you need to do this too when considering partnering with AI.
Nice! Do you find the 'market' for orgs truly receptive to this depth of thinking to be rather small? How are you finding your way into these relationships?
In my experience, that's not an easy process.
I’m not sure anyone loves holding a mirror up to themselves and asking hard questions - haha! - but I find that when people are looking to embark on a strategy design process, they’re more receptive to this type of work.
I'm not sure the challenges associated with this work are necessarily related to market receptiveness. I think there is a severe lack of good, well-documented evidence that it leads to better outcomes, which makes it hard to build a business case to invest in it.
As you know, measurement frameworks are rarely set up at the foundational stages, so we often forget to consider the value of the work we do in those stages. Perhaps that's one place to start.
On that, this may be of interest :) https://trustworthy.substack.com/p/episode-1-whats-the-value-of-philosophy