We are assigning more societal decision-making power to systems that we don’t fully understand and can’t always audit, and that lawmakers don’t know nearly well enough to effectively regulate.
As impressive as modern artificial intelligence can seem, right now those AI systems are, in a sense, “stupid.” They tend to have very narrow scope and limited computing power. To the extent they can cause harm, they mostly do so either by replicating the harms in the data sets used to train them or through deliberate misuse by bad actors.
But AI won’t stay stupid forever, because lots of people are working diligently to make it as smart as possible.
Part of what makes current AI systems limited in the dangers they pose is that they don’t have a good model of the world. Yet teams are working to train models that do have a good understanding of the world. The other reason current systems are limited is that they aren’t integrated with the levers of power in our world — but other teams are trying very hard to build AI-powered drones, bombs, factories, and precision manufacturing tools.
That dynamic — where we’re pushing ahead to make AI systems smarter and smarter, without really understanding their goals or having a good way to audit or monitor them — sets us up for disaster. —Vox
There are two factions working to prevent AI dangers. Here’s why they’re deeply divided.