We are assigning more societal decision-making power to systems that we don’t fully understand and can’t always audit, and that lawmakers don’t know nearly well enough to effectively regulate.
As impressive as modern artificial intelligence can seem, right now those AI systems are, in a sense, “stupid.” They tend to have very narrow scope and limited computing power. To the extent they can cause harm, they mostly do so either by replicating the harms in the data sets used to train them or through deliberate misuse by bad actors.
But AI won’t stay stupid forever, because lots of people are working diligently to make it as smart as possible.
Part of what makes current AI systems limited in the dangers they pose is that they don’t have a good model of the world. Yet teams are working to train models that do have a good understanding of the world. The other reason current systems are limited is that they aren’t integrated with the levers of power in our world — but other teams are trying very hard to build AI-powered drones, bombs, factories, and precision manufacturing tools.
That dynamic — where we’re pushing ahead to make AI systems smarter and smarter, without really understanding their goals or having a good way to audit or monitor them — sets us up for disaster. —Vox
There are two factions working to prevent AI dangers. Here’s why they’re deeply divided.