As competition in artificial intelligence continues to heat up, the once-widespread "AI doomsday" predictions have taken a turn. According to AIbase, Daniel Kokotajlo, a former OpenAI employee and well-known AI researcher, has recently revised his earlier prediction of when superintelligence might destroy humanity, stating that progress toward artificial general intelligence (AGI) appears to be "a bit slower" than he initially expected.


Kokotajlo's earlier "AI2027" forecast caused a considerable stir. It outlined an extreme scenario: AI would achieve fully autonomous programming by 2027 and rapidly evolve into an uncontrollable superintelligence, ultimately destroying humanity by the mid-2030s. The scenario was cited by U.S. political figures but was also sharply criticized by scholars such as neuroscientist Gary Marcus, who dismissed it as "science fiction."

Real-world feedback, however, has made him more cautious. According to AIbase's latest observations, Kokotajlo's updated forecast pushes the timeline for fully autonomous AI programming back to the early 2030s and places the window for the emergence of superintelligence around 2034. He acknowledged that current AI systems still perform unevenly in complex real-world environments.

Leading tech companies, meanwhile, have not slowed down. OpenAI CEO Sam Altman has said the company's internal goal is to build an automated AI researcher by 2028. Even though the "doomsday clock" has been pushed back, experts caution that the complexity of the real world far exceeds any science fiction scenario, and the timing of AGI's true arrival remains uncertain.