Very useful distinction between the conditions for trust in a human relationship and the kind of relationship we have with an AI system. I agree with you that "Trustworthy AI" is a shorthand contraction which conflates two different things. However, I wonder if the easiest path forward is to adopt a negative definition of trustworthy AI, one that avoids some of the ontological knots you point out. In that case, trustworthy AI would be *a use of AI that does not damage the trustworthiness of the organisation deploying it*. This keeps trust located within a relationship between people who can hold responsibilities to each other, while still recognizing that delegating activities to an AI system can indeed damage that trust. So, while such systems may well be utterly indifferent to our responsibilities to each other, they are not inert with respect to them - and so must be assessed and managed with those responsibilities in mind.
Mate, I partially agree with you here. Meaningful onus has to sit with model developers at some level (recognising that any such system has something more like 'an ecology of roles and responsibilities'). Something I'd challenge, however, is the negative frame (I recognise such a frame is common). What we should be doing is focusing on how we can design more verifiably trustworthy organisations, more verifiably trustworthy services and systems, and so on. The process itself really ought to be aspirational (with practical execution grounded in that aspiration). This is not to say that the negative frame isn't important, because it clearly is. I just strongly believe we are doing ourselves, each other, and our future possibilities a terrible disservice when it's the only frame.