International Agreement for Developing Artificial Intelligence Systems Revealed by US and UK
Australia, Canada, Germany, Italy, and Japan are among the 16 countries featured in the latest AI agreement. Agencies in the United States and the United Kingdom have revealed a joint international agreement intended to protect artificial intelligence (AI) systems from malicious actors and to help developers make sound cybersecurity decisions. The 20-page document, published jointly by the Department of Homeland Security's (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom's National Cyber Security Centre (NCSC), outlines guidelines for tech companies to follow when developing AI products and services.
In October, U.S. President Joe Biden issued an executive order directing DHS to promote the adoption of AI safety standards globally. The guidelines are not legally binding and are currently only recommendations for tech companies developing AI. According to the agreement, these guidelines are crucial for harnessing the benefits of AI while addressing the potential harms.
Among the other 16 countries featured in the agreement are Australia, Canada, Chile, the Czech Republic, Estonia, Germany, Israel, Italy, Japan, Nigeria, Poland, and Singapore. There have been concerns about uncontrolled AI development, with Tesla CEO Elon Musk issuing numerous warnings about the potential dangers of AI, and U.S. Securities and Exchange Commission Chair Gary Gensler expressing concerns over financial crisis risks stemming from the widespread use of AI. CISA has also released its own roadmap for AI to enhance cybersecurity capabilities and protect AI systems from cyber-based threats.
The bipartisan bill “Artificial Intelligence Research, Innovation, and Accountability Act of 2023” aims to create greater transparency, accountability, and security in the development and operation of AI tools. Major social media companies, such as Meta and YouTube, have already taken steps to restrict AI use in areas like political advertising and content creation.
It remains uncertain which measures will ultimately be implemented to prevent the misuse of AI products and services.