Artificial Intelligence: A Call for Risk Control from OpenAI and Google DeepMind Experts
"""
An open letter from experts calls for stronger safeguards and regulation of the development and deployment of AI technologies, aiming to prevent potential risks and abuses and to ensure transparency and the ethical use of artificial intelligence.
Concerns and Demands
Former and current employees of OpenAI and Google DeepMind have signed an open letter expressing concern about the threats posed by the uncontrolled development of artificial intelligence. The experts call for the creation of independent regulatory bodies and the establishment of strict standards to oversee the development and deployment of AI technologies. They emphasize the need for interdisciplinary collaboration to ensure the safe and ethical use of AI, as well as the importance of transparency and accountability in this area.
The signatories put forward the following demands to advanced AI companies:
- Do not discourage employees from criticizing potential risks or retaliate against them for such criticism.
- Provide a way for employees to report concerns anonymously to boards of directors, regulators, and other organizations.
- Cultivate an atmosphere of open dialogue and allow employees to speak publicly, provided trade secrets are protected.
- Do not sanction employees who publicly disclose confidential information about risks after attempts to raise their concerns through other channels have failed.
OpenAI's Response
The open letter has received widespread support in academic circles and among public organizations concerned with ethics and safety in the technology sector. OpenAI responded by saying it already has channels for reporting possible AI risks, including a hotline. The company also claims that it does not release new technologies without appropriate safety measures in place.
However, as Vox reported last week, OpenAI required departing employees to sign extremely restrictive non-disclosure and non-disparagement agreements; those who refused risked losing all their vested equity. OpenAI CEO Sam Altman apologized for this and promised to change the company's offboarding procedures.
Experts Concerned
Concerns about the pace and direction of AI development are shared by other experts in the field. Former OpenAI researcher Leopold Aschenbrenner has published a 165-page document in which he argues that AGI (human-level intelligence) will be reached by 2027 and will then rapidly self-improve into ASI (intelligence surpassing that of humans). According to Aschenbrenner, the main risk is that AI has become a kind of “second nuclear bomb”, and militaries are already seeking (and gaining) access to these technologies.
Glossary
- OpenAI is a leading artificial intelligence research company, co-founded by Sam Altman, Elon Musk, and other entrepreneurs.
- Google DeepMind is a British company specializing in the development of artificial intelligence technologies, a subsidiary of Alphabet Inc.
- AGI (Artificial General Intelligence) - artificial intelligence whose capabilities are equal to those of a human.
- ASI (Artificial Superintelligence) - artificial intelligence that surpasses human intelligence.
Discussion of the topic – Artificial Intelligence: A Call for Risk Control from OpenAI and Google DeepMind Experts
Nikolay
I believe that security measures in the development of AI are extremely important. Technologies are developing too quickly, and we must be sure that this will not lead to catastrophic consequences. 🤔 I have concerns that AI could be used to harm humanity.
Sophie
Nikolay, I completely agree with you. Independent regulators and strict standards are needed. We cannot allow companies to develop AI uncontrollably; the potential risks are really high. 😬 We need transparency and accountability in this area.
Hans
Doesn't OpenAI already have a hotline for reporting risks? That's a good step, but clearly not enough. Companies should discuss problems openly rather than intimidate their employees. 🙄 Otherwise, we risk repeating the mistakes of the past.
Mario
I agree that stronger safety measures are needed, but the benefits of AI should not be forgotten. 💡 These technologies can help solve many of humanity's problems, from fighting disease to protecting the environment. The key is to find the right balance.
Heinrich
What nonsense! 😡 All these newfangled technologies are a waste of time and money. People are always worrying about nonsense instead of focusing on real problems. AI does not pose any threat, it is just another trend that will soon be forgotten.
Anna
Heinrich, I cannot agree with you. AI is not just a trend, but the most important direction in technology development. 💻 Yes, there are risks, but they can be minimized with the right approach. We should not ignore the problem, but find reasonable solutions.
Pierre
This is a really serious problem. I work in the field of AI myself and see how quickly these technologies are developing. 🚀 But without proper control and ethical principles, we risk creating something dangerous. We need to join forces and develop clear rules of the game.
Yulia
I fully support the idea of an open letter. Companies must create a safe environment for their employees to feel free to voice concerns. 🗣️ This is the only way we can prevent possible abuses and ensure the responsible development of AI.