I’m sick with a cold and on my phone so bear with me
The reason internet-trained AI devolves into Nazi rhetoric is the very reason fascist propaganda works on people. A nascent, still-learning AI cannot detect the many logical fallacies required to make fascist arguments. So, much like a naive child, it cannot distinguish bad-faith arguments from legitimate, fact-based discourse.
TRIGGER WARNING!!! Political examples will be used. This is not a prompt to continue discussion on these examples, nor do they necessarily reflect my personal beliefs.
So say an AI comes upon an argument about “Defund the Police”. It’s a hot topic that a ton of people have posted about across all social media, so you’d think it would be a ripe place to learn about human interaction. Nope. And a resounding yes… unfortunately.
1) Social media and the various “engagement algorithms” have served to create echo chambers and essentially create “tribes”. Binary classifications are easy for computers, so there is a bias toward them even when the data overwhelmingly shows a subject requires nuance, or a deeper understanding than fits on a bumper sticker.
Moreover, because of the sheer volume of raw engagement, the frequency and language patterns of these posts bias the AI further, leading it to give them additional weight.
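To make that weighting effect concrete, here’s a minimal, purely illustrative Python sketch (the post names no actual algorithm; the corpus and engagement numbers are invented) of how sampling training data in proportion to engagement skews a corpus toward polarized content, compared with sampling every post equally:

```python
import random

# Hypothetical corpus of (post, engagement_count) pairs. Polarizing
# one-liners tend to rack up far more raw engagement than nuanced analysis.
corpus = [
    ("nuanced policy analysis", 10),
    ("nuanced policy analysis", 10),
    ("polarizing one-liner", 500),
    ("polarizing one-liner", 500),
]

def sample_training_data(corpus, n, weighted=True):
    """Draw n posts; weighted=True mimics engagement-driven selection."""
    posts = [post for post, _ in corpus]
    weights = [engagement for _, engagement in corpus] if weighted else None
    return random.choices(posts, weights=weights, k=n)

random.seed(0)
weighted = sample_training_data(corpus, 1000, weighted=True)
uniform = sample_training_data(corpus, 1000, weighted=False)

# Engagement weighting makes the polarizing posts dominate the sample,
# even though half the corpus is nuanced analysis.
print("weighted:", weighted.count("polarizing one-liner") / 1000)
print("uniform:", uniform.count("polarizing one-liner") / 1000)
```

With these made-up numbers the engagement-weighted sample is almost entirely the polarizing content, while uniform sampling stays near fifty-fifty. A model trained on the weighted sample never sees enough of the nuanced half to learn it.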
2) Defund the Police at its core is embodied in the STAR program in Denver where mental health professionals are sent on mental health calls. Seems a pretty linear solution to a linear problem. This focuses the police mission and removes tasks from the police that they aren’t trained to handle and have historically handled very poorly.
However, the politicization of this policy, with the bajillion fallacious arguments flying all over the place, has all but buried the core argument for such a program… don’t send a cop to be a psychologist.
How this affects the AI is that the online engagement is completely divorced from reality and data. Since these AIs (not just the MS chatbot) learn from the internet and don’t have critical analysis built in, they have no tools to parse bad arguments, whether those arguments come from misinformation or bad faith.
That’s why they’ve all either ended up as literal Nazis repeating fascist rhetoric, or, maybe worse, gone full nihilist and concluded that nothing matters. That’s huge. It doesn’t lead to Marvin the Paranoid Android, but to an AI that sees all suffering as inevitable and no point in humans existing, and that could end up not helping, or even harming, humans because there’s no point.
That’s why they keep having to pull the plug on these AIs. Systems like Watson have been VERY carefully taught, with tons of deliberation and limits on what they’re asked to do… for these reasons, there has been no successful AI developed solely from machine learning via internet exposure.
It’s a fascinating window not just into the pitfalls of poorly developed AI, but also into the pitfalls of education devoid of critical thinking, data-driven analysis, compassion, and the nuance needed to address the inherent complications of the human condition.
Human existence is both a numbers game and not, and current AI development spends little to no time on the “not” part…
The results going forward might be exceptionally good, but that would require a substantial dedication of time and resources to closing that development gap, and as of right now, it’s just not there.
Sorry this got long… it’s not a short answer topic