Former OpenAI chief scientist Ilya Sutskever creates Safe Superintelligence, a company dedicated to developing safe superintelligent artificial intelligence
Ilya Sutskever, former chief scientist of OpenAI, has founded a new company, Safe Superintelligence Inc. (SSI), which will focus exclusively on developing superintelligent artificial intelligence that is safe for humanity.
Safe superintelligence AI
A new company with a unique mission
After leaving OpenAI, Ilya Sutskever founded Safe Superintelligence Inc. (SSI) together with entrepreneur Daniel Gross and former OpenAI colleague Daniel Levy. The company's main goal is to research and develop safe superintelligent AI systems that benefit humanity without harmful consequences.
Focus on safety and progress
Unlike many tech startups, SSI will be insulated from commercial pressure and can focus entirely on ensuring the safety and reliability of the AI systems it develops. The company intends to avoid the distractions of management overhead and product cycles so that it can scale its research efficiently.
Search for leading experts
SSI is actively recruiting talented engineers and researchers to complement its small but dedicated team. All of the company's employees will work toward a single mission: creating safe superintelligent AI.
Glossary
- OpenAI is a leading artificial intelligence research company co-founded by Sam Altman, Elon Musk, and others.
- Ilya Sutskever is a co-founder and former chief scientist of OpenAI and an expert in the field of AI.
- Sam Altman is the CEO of OpenAI.
- Daniel Gross is an entrepreneur and investor, co-founder of Cue, a startup acquired by Apple in 2013.
- Daniel Levy is a former OpenAI employee and now a co-founder of SSI.
Link
- Ilya Sutskever's announcement of the creation of SSI
- The Verge article on SSI
- WSJ article on the conflict between Sutskever and Altman
Answers to questions
What is SSI (Safe Superintelligence Inc.) and what is its mission?
Who leads SSI, and where are its offices located?
What are the advantages of the SSI business model?
What is known about the conflict between Elon Musk and Sam Altman at OpenAI?
What actions did Ilya Sutskever and Jan Leike take after the OpenAI conflict?
Discussion of the topic – Former OpenAI chief scientist Ilya Sutskever creates Safe Superintelligence, a company dedicated to developing safe superintelligent artificial intelligence
Latest comments
8 comments
Олександра
Really interesting news about the launch of the new AI company SSI! They seem to be focusing solely on the safety of superintelligent AI 🤖. This is the right approach, because such powerful systems require careful monitoring and safety testing.
Франсуаза
Yes, this initiative caught my attention too. Sutskever is a well-known specialist in the field of AI, and his involvement gives the project weight. AI safety is extremely important 🔐, and I am glad that such companies are appearing.
Вольфганг
Interestingly, Sutskever left OpenAI precisely because of disagreements over approaches to AI safety. Here he will be able to focus exclusively on this problem 🕵️‍♂️. That said, achieving the safety of superintelligent AI is no easy task.
Анджей
And I heard that OpenAI had serious internal problems because Altman and Sutskever had different visions of safety 😕. It is great that Ilya will be able to pursue his approach at his new company, SSI.
Бруно
I understand the skepticism of some, but according to Sutskever, SSI is independent of commercial pressure and can fully dedicate itself to AI safety 🛡️. That is great, because protection against possible threats should be a priority.
Ґреґор
Another new AI company 🙄? Do these guys honestly think they can make superintelligent AI safe when they don't even know what it will look like? What nonsense 😂, a waste of money and time.
Ольга
I don't quite share your skepticism, Gregor. At least SSI is prioritizing safety rather than just racing to release new technology as quickly as possible. In the field of AI, that is an important step 👣.
Анатолій
I agree that it is currently difficult to guarantee the safety of future superintelligent AI 🤖. But this problem should be thought about in advance and approaches developed, not ignored. The SSI initiative deserves attention 🧐.