Someone had to say it: Scientists propose AI apocalypse kill switches

The Helper

Better visibility and performance caps would be good for regulation too.

In our quest to limit the destructive potential of artificial intelligence, a paper out of the University of Cambridge has suggested baking remote kill switches and lockouts, like those developed to stop the unauthorized launch of nuclear weapons, into the hardware that powers it.
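To make the nuclear analogy concrete, here is a minimal sketch of how a hardware lockout might gate a training run, assuming a hypothetical two-party authorization scheme. The keys, token format, and lifetime are illustrative inventions, not anything the paper specifies:

```python
import hmac
import hashlib
import time

# Hypothetical illustration only: two independently held keys must both
# sign a time-limited token before the accelerator enables training,
# echoing the two-person rule used for nuclear launch authorization.
REGULATOR_KEY = b"regulator-secret"   # held by the licensing authority
OPERATOR_KEY = b"operator-secret"     # held by the data-center operator
TOKEN_LIFETIME = 3600                 # seconds a co-signed token stays valid


def sign(key: bytes, message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()


def authorize(job_id: str, issued_at: int, reg_sig: str, op_sig: str) -> bool:
    """Return True only if both parties signed and the token is fresh."""
    message = f"{job_id}:{issued_at}".encode()
    fresh = time.time() - issued_at < TOKEN_LIFETIME
    reg_ok = hmac.compare_digest(sign(REGULATOR_KEY, message), reg_sig)
    op_ok = hmac.compare_digest(sign(OPERATOR_KEY, message), op_sig)
    return fresh and reg_ok and op_ok


# A training job presents its co-signed token; firmware would refuse to
# power the accelerators if this check fails (the "lockout").
now = int(time.time())
msg = f"train-run-01:{now}".encode()
print(authorize("train-run-01", now, sign(REGULATOR_KEY, msg), sign(OPERATOR_KEY, msg)))  # True
print(authorize("train-run-01", now, sign(REGULATOR_KEY, msg), "forged"))                 # False
```

The point of the two-key check is that neither the regulator nor the operator can enable compute alone, which is the property the nuclear-style schemes are after.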

The paper [PDF], which includes contributions from numerous academic institutions and several authors from OpenAI, makes the case that regulating the hardware these models rely on may be the best way to prevent their misuse.

"AI-relevant compute is a particularly effective point of intervention: It is detectable, excludable, and quantifiable, and is produced via an extremely concentrated supply chain," the researchers argue.

Training the largest models, believed to exceed a trillion parameters, requires immense physical infrastructure: tens of thousands of GPUs or accelerators and weeks or even months of processing time. This, the researchers say, makes the existence and relative performance of these resources difficult to hide.
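That scale claim is easy to sanity-check with the widely used ~6 × parameters × tokens approximation for training FLOPs. The token count, per-accelerator throughput, and utilization below are assumed values for illustration, not figures from the paper:

```python
# Back-of-envelope check of the article's scale claim.
params = 1e12          # a trillion-parameter model
tokens = 10e12         # assumed training tokens
total_flops = 6 * params * tokens          # ~6e25 FLOPs

peak_flops_per_gpu = 1e15                  # ~1 PFLOP/s-class accelerator
utilization = 0.4                          # assumed sustained fraction of peak
effective = peak_flops_per_gpu * utilization

gpus = 20_000
seconds = total_flops / (effective * gpus)
print(f"{total_flops:.1e} FLOPs -> {seconds / 86_400:.0f} days on {gpus:,} GPUs")
# ~87 days: a compute footprint this size is hard to conceal, which is
# the researchers' point about compute being detectable and quantifiable.
```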

 