We are assigning more societal decision-making power to systems that we don’t fully understand and can’t always audit, and that lawmakers don’t know nearly well enough to effectively regulate.
As impressive as modern artificial intelligence can seem, right now those AI systems are, in a sense, “stupid.” They tend to have very narrow scope and limited computing power. To the extent they can cause harm, they mostly do so either by replicating the harms in the data sets used to train them or through deliberate misuse by bad actors.
But AI won’t stay stupid forever, because lots of people are working diligently to make it as smart as possible.
Part of what makes current AI systems limited in the dangers they pose is that they don’t have a good model of the world. Yet teams are working to train models that do have a good understanding of the world. The other reason current systems are limited is that they aren’t integrated with the levers of power in our world — but other teams are trying very hard to build AI-powered drones, bombs, factories, and precision manufacturing tools.
That dynamic — where we’re pushing ahead to make AI systems smarter and smarter, without really understanding their goals or having a good way to audit or monitor them — sets us up for disaster. —Vox