Philosophy Eats AI
But only if that philosophy is pro AI and sweeps the deeper questions under the rug (lol!)
Many of you will remember the proclamation that "software is eating the world" (note this is from a16z, the very same VC firm behind the Techno Optimist Manifesto that I and so many others have critiqued). This then became "AI is eating software". And now, thanks to the folks at MIT Sloan, we have "philosophy eats AI".
Calling Tech Titans on their BS
Leading up to my Triple R radio interview on this very topic, I wrote a Q&A in an attempt to explore what might happen during the interview. The interview itself didn’t perfectly align with my preparatory writing, but the activity was helpful nonetheless.
I became familiar with this specific frame thanks to my friend. Here’s a video where the authors of the piece in question engage in a little back and forth (my favourite comment in the thread, largely because of its alarming irony: “I'm overwhelmed by the vagueness of these guys.”).
If you take the time to watch it, you may pick up on the same themes I did. You may pick up on different ones. And although I’m generally grateful that this is even a discussion, there’s a bunch here that raises the alarm:
Schrage and Kiron (referred to as ‘they’ going forward) talk about teleology a few times, but never question the teleology of the organisations building AI, nor the broader paradigm of corporatism (something we covered last night in one of our Philosophy & Organisations recordings. Stay tuned for that one) and the whole history that got us to where we are (this is not to say corporations are simply bad, evil or unhelpful. It may well be true that if corporations ceased to exist as of this very moment, hundreds of millions of people or more would die, fairly quickly, because of the myriad services said corporations offer, many of which significantly enhance certain aspects of our lives. This is about a deeper commitment to cultivating wisdom—“the process of knowing, deeply caring for and living in close relation to what truly matters”—through which we are likely far better placed to meaningfully evolve and authentically progress as a species)
They talk about ‘over indexing’ on ethics and under indexing on other areas of philosophy, but provide close to no justification for this claim (nor do they get into the inescapable entanglement of ethics and other areas of philosophy. One cannot explore the moral grounds of a given situation without also considering theories of value, theories of knowledge and theories of reality, at the very least. This is often referred to as ethico-onto-epistem-ology, thanks to Karen Barad). Note that part of me might agree with what they are suggesting, at least at some level, but their vagueness makes it impossible for me to assess the validity of the claim
They fail to clarify their own working assumptions (at any level, or using any formal-ish logic or framework). And, reading between the lines, they don’t seem willing to actually get at any of the deeper philosophical questions: what is intelligence anyway? What actually justifies good reason to build and use these systems? To what extent does any decision-making process about what and what not to build need to include thoughtful, wide-boundary analysis? What is the historical context within which these systems have been developed, why does that really matter, and how is it affecting our relation to the ‘thing’ itself? What does it mean to be ‘human’, and what is this whole process doing to that? Plus SOOOOOOO much more. This is where a lot of the value of philosophy is most likely to be
They seem to sweep anything that’s actually controversial to the business community under the rug, holding up philosophy (whatever they actually mean by that) as some kind of underlying process of superior logic that can drive enhanced decision making in service of narrow corporate goals that internalise ‘profit’ and externalise ‘cost’ (I’m reminded of the Joseph Campbell quote, “The cave you fear to enter holds the treasure you seek”). This is terribly misaligned with the ‘spirit’ of philosophy I called attention to in my last post (referenced below).
Note: I am yet to read the paper Schrage and Kiron have written. It likely contains more nuance, specificity and so on. If I get round to reading it in full, I may evolve my take (and will update with appropriate commentary).
Much more could be said.
Now, this might seem pretty scathing. It’s not meant to be scathing. It’s meant to be critical, because the process of philosophy-ing requires a willingness to be critical. It requires us to be open and deeply curious. It requires us to confront what we hold near and dear, and be willing to throw it away (or evolve it in various ways) if it simply doesn’t stack up. It requires of us that we consistently act with courage.
Anything short of this is a non-starter (I realise this sounds very binary. I’m gonna stick with this frame for now in the hope it hits home hard and perhaps encourages an unveiling of some deeper stuff we may dialogically explore).
So, as per my article yesterday, we do need more philosophy.
To (re)story or not?
Over the last few years, I’ve read or interacted with a sizeable body of content about the stories we tell. Often, this is situated within the context of claims that we need new stories to live by; stories that ground our bei…
But… I don’t think I’m talking about the same kind of philosophy as Schrage and Kiron. And that qualitative distinction matters, a lot.
With love as always.
This is another (unedited) image from our latest shoot. When reviewing the 1,500-odd images we took, this was one that immediately spoke to me. It ‘said something’… In the cave, grounded, confident, at home, yet deeply aware of the inherent uncertainty of being, which includes danger. And within this was an aliveness, a vibrancy, an infectious relation to the tragic beauty of life-ing.
I’ll refer back to Campbell on this one and say that the person in this image, Esther, so often goes courageously into the cave. This is something I am constantly inspired by, in awe of, and always learning from ❤️
A little more on this (video) here: https://www.linkedin.com/posts/nathankinch_philosophy-ai-criticalthinking-activity-7334036584817639424---D3?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAihH2cBi_SD7lvGBwmu1CPUvQdlo1hoAH0