We are assigning more societal decision-making power to systems that we don’t fully understand and can’t always audit, and that lawmakers don’t know nearly well enough to effectively regulate.
As impressive as modern artificial intelligence can seem, right now those AI systems are, in a sense, “stupid.” They tend to have very narrow scope and limited computing power. To the extent they can cause harm, they mostly do so either by replicating the harms in the data sets used to train them or through deliberate misuse by bad actors.
But AI won’t stay stupid forever, because lots of people are working diligently to make it as smart as possible.
Part of what makes current AI systems limited in the dangers they pose is that they don’t have a good model of the world. Yet teams are working to train models that do have a good understanding of the world. The other reason current systems are limited is that they aren’t integrated with the levers of power in our world — but other teams are trying very hard to build AI-powered drones, bombs, factories, and precision manufacturing tools.
That dynamic — where we’re pushing ahead to make AI systems smarter and smarter, without really understanding their goals or having a good way to audit or monitor them — sets us up for disaster. —Vox