New York (Business Emerge), September 5: Ilya Sutskever, the former chief scientist at OpenAI, has unveiled his latest venture, Safe Superintelligence (SSI), which aims to create advanced artificial intelligence systems designed to be significantly safer and more capable than current technologies. This announcement marks a bold step in the quest to ensure that AI evolves responsibly and ethically.
Sutskever, a prominent figure in AI who studied under Geoffrey Hinton, has been a key proponent of the scaling hypothesis: the idea that AI performance improves predictably as computational resources increase. This concept was instrumental in the development of generative AI models such as ChatGPT. However, SSI intends to take an approach to scaling that diverges from previous efforts.
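The scaling hypothesis is commonly formalized as a power law relating model loss to training compute. The sketch below is a minimal illustration of that idea, not anything from the article or from SSI; the functional form follows published neural scaling-law studies, and the constants `a` and `b` are arbitrary placeholders.

```python
def scaling_law_loss(compute: float, a: float = 10.0, b: float = 0.05) -> float:
    """Predicted model loss as a power law in training compute.

    Placeholder constants: `a` sets the scale, `b` the (small) exponent.
    """
    return a * compute ** (-b)

# Each tenfold increase in compute yields a lower (but diminishing) predicted loss.
for c in [1e18, 1e19, 1e20, 1e21]:
    print(f"compute={c:.0e}  predicted loss={scaling_law_loss(c):.3f}")
```

The key property the hypothesis asserts is the monotone trend: more compute, lower loss, with diminishing returns set by the exponent.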
Vision Behind SSI
Sutskever shared his vision for SSI, stating, “We’ve pinpointed a unique challenge that differs from my previous projects. Meeting this challenge will revolutionize our understanding of AI and lead to significant advancements in superintelligence safety.” He emphasized that SSI’s first product will be safe superintelligence itself.
Releasing AI Comparable to Human Intelligence
When asked about the prospect of releasing AI that matches human intelligence before achieving superintelligence, Sutskever responded, “The crucial question is whether it is safe and beneficial. The changes brought about by reaching this level of AI will be profound, making it difficult to predict our precise actions. The global perspective on AI will shift dramatically, leading to more intense discussions.”
Defining Safe AI at SSI
On the subject of determining what constitutes safe AI, Sutskever noted, “A significant portion of our approach will involve extensive research. As AI evolves, identifying the steps and tests necessary to ensure safety will be increasingly complex. While there are promising ideas being explored, definitive answers are still forthcoming. This is a critical area we will address.”
The Scaling Hypothesis and AI Safety
Reflecting on the scaling hypothesis, Sutskever pointed out, “The common discourse around the scaling hypothesis often overlooks what exactly is being scaled. The past decade’s breakthroughs in deep learning have provided a specific framework for scaling, but this is expected to evolve. As AI capabilities expand, so will the intensity of safety concerns, which will be a major focus for us.”