While the presentation touches lightly on how artificial intelligence can be used for altruistic purposes in the present, it is ultimately about the same topic that all of my most important talks are about:
How the transition beyond humanity will take place.
Those who’ve followed TechEmergence since the early days are aware of the broader moral vision behind the company: “To proliferate the conversation about determining and moving towards the most beneficial transition beyond humanity.”
While I have never identified as a transhumanist, I see the transition beyond humanity as literally inevitable, and I believe we should guide this transition rather than be taken for a ride inadvertently. *1
Because the TEDx format is so short, I’m never given the kind of time I’d like to fully flesh out my ideas, or to reference the sources and people I have drawn from in putting the ideas together. In this article I’ll break down the ideas presented in this talk – and their sources – and strike at the ultimate point behind the presentation itself.
Strong AI and Utilitarianism
The talk begins with a basic idea: that “doing good” means proliferating the happiness or pleasure of conscious creatures and eliminating their pain. This is straight-up utilitarianism – by no means a perfect moral theory, but about as good as we’ve got.
I mention how hard it is to project the long-term consequences of a “good” action. For example: how much suffering and pleasure was created by helping to build this library… or by volunteering to run a kids’ soccer camp in the summer? The butterfly effects are impossible to track, and it’s easy to deceive ourselves into justifying any of our actions based on a “utilitarian” belief that is in fact false.
However, it’s probably somewhat better than having no moral compass at all… or one bent on something other than utilitarian good. Think about a perfectly “virtuous” society (good luck with whatever that means) that was also miserable all the time. Think about a society in which everyone believes in the “right” God (good luck with whatever that means) but that was also miserable all the time. *2
Note: Normally this is the kind of article I’d compose on my personal blog at DanFaggella.com, where I write exclusively about the ethical considerations of post-human intelligence. However, since the talk ties directly to TechEmergence’s broader mission, I’m publishing the breakdown here.
Levels at Which Artificial Intelligence Might “Do Good”
The structure of the article roughly covers what we might consider to be the “gradients” of artificial intelligence’s influence on the moral good, from most near-term (and smallest) to most long-term (and greatest):
a) AI as a Tool for Doing Good:
We’ve done a good deal of coverage of the “altruistic” applications of AI (see our article on “AI for Good”). It should be noted that by no means do I think nonprofit AI is the only “good” AI. There might be companies that generate massive profits from optimizing farming with AI or diagnosing cancer with AI – and by golly, they may well “do” plenty of “good” in the process. I move past this topic quickly, as it’s not what the talk is ultimately about.
b) AI as a Gauge of Moral Goodness Itself:
If maximizing pleasure and eliminating pain is what we’re ultimately after, that’s good to know – but nearly impossible to measure. I can guess that by being a mailman instead of a heroin dealer I’ll have a greater net positive impact on the world. If I donate to feed children in Africa instead of buying the latest iPhone, then maybe – again – I can guess that I’m “doing good.” But it’s all guesses, and they’re all bad guesses.
If an AI system could in some way measure sentient pain and sentient pleasure, and correlate those factors to actions, behaviors, public policy, etc… all with the ability to project those impacts into the future with better predictive ability than any team of human scientists – then indeed that might be the most morally meaningful invention of all time.
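To make the shape of that idea concrete, here’s a deliberately toy sketch in Python. Everything in it is a hypothetical assumption – the made-up “welfare” numbers, the action names, the discounting – and the genuinely hard parts (measuring sentient pleasure and pain, and predicting them) are assumed away entirely. All it shows is the final, trivial step such a system would take: ranking candidate actions by projected net welfare over time.

```python
# Purely hypothetical sketch: every name, number, and structure here is
# an illustrative assumption, not a real system, model, or dataset.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    # Assumed per-year net welfare deltas (arbitrary "hedons" units),
    # as some hypothetical predictive model might output them.
    projected_welfare_by_year: list[float]

def net_welfare(action: Action, discount: float = 0.97) -> float:
    """Sum projected welfare deltas, discounting distant years where
    any model's projections would be least reliable."""
    return sum(
        delta * (discount ** year)
        for year, delta in enumerate(action.projected_welfare_by_year)
    )

candidates = [
    Action("fund library", [2.0, 3.0, 3.5, 3.5]),
    Action("run kids' soccer camp", [4.0, 1.0, 0.5, 0.2]),
]

# Rank actions by projected long-run net welfare; producing those
# projections at all is, of course, the entire unsolved problem.
for action in sorted(candidates, key=net_welfare, reverse=True):
    print(f"{action.name}: {net_welfare(action):.2f}")
```

The discount factor is one way to encode the butterfly-effect problem raised earlier: the further out a projection goes, the less weight any honest system should give it.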