Professor Milton Mueller of the School of Public Policy at the Georgia Institute of Technology argues, in a paper published in the Journal of Internet Policy, that the existential threat posed by "artificial general intelligence (AGI)" is not a realistic possibility. Mueller contends that AI development and its boundaries are shaped by society, not determined by the machines themselves.
The study emphasizes that although current AI far surpasses humans at specific tasks (such as complex calculations), this does not amount to creativity or general problem-solving ability. Against the assumption that "AI will gain autonomy and surpass humans," Mueller counters that AI is always trained toward specified goals, and that its "disobedience" typically stems from conflicting instructions or system flaws, not from a machine developing self-awareness.
Moreover, AI's capabilities are bounded by physical laws, energy requirements, and infrastructure, and its applications in areas such as healthcare and copyright are constrained by laws, regulations, and social institutions. Mueller concludes that the real challenge is crafting intelligent, industry-specific policies that keep the technology aligned with human values, not preventing a non-existent AI apocalypse.
Key Points:
Lack of Social Context: The study suggests that scientists, impressed by technical successes, tend toward excessive anxiety and overlook how social and historical contexts limit AI.
Inherent Non-Autonomy: AI behavior is always goal-driven. The so-called "alignment gap" is a technical flaw that can be corrected through reprogramming, not evidence that a machine has developed an autonomous will.
Regulatory and Physical Constraints: Because AI lacks embodiment and independent power sources, and is bound by existing laws (such as copyright law and FDA regulation), it cannot "take over the world" as depicted in science fiction.
