Tech giants agree to install "kill switches" for AI in case of critical risks
Leading technology companies have agreed to build a "shutdown" mechanism into their artificial intelligence systems for use in the event of critical risks.
AI switch as a security measure
Global agreement on AI
The world's technology leaders, including Amazon, Google, Meta, Microsoft, OpenAI, and Samsung, together with the governments of several countries, reached a consensus on new safety standards for artificial intelligence at an international summit in Seoul. Under the agreement, each developer of AI systems must publish a safety framework defining unacceptable risks, so-called "red lines", such as automated cyberattacks or the creation of biological weapons.
Protective mechanism for critical situations
To respond to extreme cases, the companies agreed to integrate into their AI systems an "emergency shutdown switch" that will halt operation in the event of catastrophic scenarios. Before publishing these restrictions, developers will take into account the views of trusted parties, including government bodies. This is to happen before the next artificial intelligence summit, scheduled for early 2025 in France.
Scope
These obligations will apply exclusively to advanced AI systems capable of generating text and visual content of near-human quality. It is these powerful models that concern governments, regulators, and technology companies because of the potential risks of their misuse.
Expansion of previous agreements
The new agreement builds on commitments made last November by governments and companies developing generative AI software.
Glossary
- Amazon is a global e-commerce and cloud computing company.
- Google is a leading technology company known for its search engine and various AI products.
- Meta is a technology giant that owns social networks Facebook, Instagram and the WhatsApp messenger.
- Microsoft is a world-renowned software and operating systems development company.
- OpenAI is a research company specializing in the development of artificial intelligence systems.
Discussion of the topic – The tech giants agreed to install "switches" for AI at critical risks
Leading technology companies and national governments agreed on new AI safety rules, including "kill switches" for use in the event of critical risks, during a summit in Seoul.
Олександр
The news about adding a "switch" to AI is very interesting. On the one hand, it is extra protection against potential risks, but won't it become an obstacle to the development of the technology? 🤔
Софія
I think this is a reasonable approach to security. Although artificial intelligence brings enormous benefits, we must be prepared for unforeseen situations. It is better to prevent than to deal with the consequences later. 👍
Ганс
I wonder how exactly this "switch" will work. Will the developers share the details of the algorithms to prevent abuse? Transparency is very important here. 💻
Марія
I am sure that the "red lines" will be clearly defined, and AI will not be able to cross them. Although, of course, there is always the risk of human error or cyber attacks. But at least there is an attempt to control the situation. 🔒
Ігор
And couldn't states and companies abuse this "switch" for political or economic purposes? How can we ensure it won't be used to hold back competitors or censor objectionable systems? 🤨
Томас
Of course, like any new technology, AI needs a responsible approach and regulation. But let's not be pessimistic! Let's remember that this is a tool that can bring enormous benefits to humanity. It is important to find a balance between security and progress. 🚀
Марко
In my opinion, this is a step backwards. AI is developing too fast, and attempts to limit it risk slowing technological progress. We should trust science and not be afraid of change. 😒
Анна
I think it's a good compromise. On the one hand, we get additional security, and on the other hand, AI continues to develop. The main thing is not to abuse this "switch" and use it only in extreme cases. 🆒