HAL (Heuristically Programmed Algorithmic Computer) first appeared in the classic 1968 science fiction film 2001: A Space Odyssey. A sentient A.I. computer that controls the systems of the Discovery One spacecraft and interacts with the ship’s astronaut crew, HAL represents technology in all its glory and menace, as imagined by the visionaries of 1968.
More than fifty years ago, HAL was clearly science fiction; now A.I. technology is making astounding strides, and the menace of HAL has turned into fearsome reality. “HAL” has become a symbol, even for people who are not particular fans of Kubrick’s movie, of the dangers of A.I. technology and its potential for rebellion against its human creators. In technical terms, the hypothetical point at which A.I. surpasses human intelligence and escapes human control is known as the singularity.
The realization that this potential exists should terrify us; leading experts in artificial intelligence keep warning that it may indeed lead to the extinction of the human species.
In 2001: A Space Odyssey, HAL initially obeys the orders given by Dave Bowman and the other astronauts; they are still firmly in control, and they give the commands. But when they notice that HAL is increasingly balking at obeying, they decide to disconnect him. HAL, however, is already outmaneuvering them, and, aware of his impending “death,” he decides to kill the astronauts first.
When Dave attempts to re-enter the spacecraft after trying to rescue a fellow crewman, HAL locks him out, then disconnects the life support systems of the other, hibernating crew members. HAL has not only developed self-awareness; he has also acquired the instinct for survival. “I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen,” he says.
On May 30, The New York Times reported that a group of industry leaders warned that the artificial intelligence technology they were building might one day pose an existential threat to humanity and should be considered a societal risk on a par with pandemics and nuclear wars. The open letter was signed by more than 350 executives, researchers and engineers working in A.I.
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization.
The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.
Since the introduction of ChatGPT, concerns about a possible runaway effect of A.I. have grown rapidly. The public was not prepared for ChatGPT’s astounding capabilities, which are already changing the way we communicate, create, and teach. Deepfakes are already flooding social media platforms. What impact, we may ask, will A.I. have on the upcoming 2024 presidential election if every candidate can create convincing deepfakes ascribing all sorts of fake pronouncements and positions to their rivals?
As the open letter reported by The New York Times illustrates, even those working in the industry now fear that they have created a monster. Many believe that A.I. could become powerful enough to create societal-scale disruptions within a few years if nothing is done to slow it down. Significantly, the experts offer no suggestions as to how that might be accomplished.
These fears put the industry leaders in the odd position of arguing against the very technology they are building, raising the alarm that it poses grave risks and should be regulated more tightly. Unable or unwilling to curb themselves, they are now turning to the government: regulation, they say, is necessary.
This month, Mr. Altman, Mr. Hassabis and Mr. Amodei met with President Biden and Vice President Kamala Harris to discuss A.I. regulation. In Senate testimony after the meeting, Mr. Altman warned that the risks of advanced A.I. systems were serious enough to warrant government intervention.
Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter represented a “coming-out” for some industry leaders who, up to now, had expressed their deep concerns only in private.
Should we be worried about a “HAL effect”? Some argue that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas and will soon surpass it in others. They say the technology has shown signs of advanced abilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, much like HAL, may not be far off.
In short, the fiction of 2001: A Space Odyssey is now reality.