Many potent technologies, including nuclear power and genetic engineering, are “dual-use”: They have the potential for considerable benefit, but they could also cause substantial harm, whether through malicious intent or by accident.
Most people agree it’s prudent to regulate these technologies, as pure capitalism can lead to rapid expansion at the expense of safety.
Artificial general intelligence is the most extreme dual-use technology. It’s defined as intelligence at the human level or better on all cognitive tasks — and once we obtain it, it’s likely to surpass human intelligence quickly. If such “superintelligence” goes well, it could have numerous positive applications, but if it goes wrong, many believe it could be an existential catastrophe for humanity.
Currently, AGI development remains largely unregulated, leaving major corporations to regulate themselves.
The weekend’s OpenAI fiasco illustrates the complexities of governing dual-use technologies: The ChatGPT company’s unusual corporate structure empowered its board of directors to remove CEO Sam Altman, even as investors pushed for his return.
Until now, AGI corporations have done a fairly good job of regulating themselves. The three major players in the AGI space — OpenAI, DeepMind, and Anthropic — were all founded with AI safety as a primary concern. Nevertheless, it’s becoming increasingly necessary to bring in regulatory bodies to ensure that unchecked profit-seeking does not drive AGI development.