To address mounting concerns over the rapid advancement of artificial intelligence, the Group of Seven (G7) industrialized nations is set to introduce a voluntary code of conduct for companies at the forefront of AI development, according to a G7 document reviewed by The New York Times.
This landmark initiative, which will be agreed upon next week, underscores the growing urgency among major nations to establish guidelines for AI, given the rising apprehensions about privacy and security.
This initiative, referred to as the “Hiroshima AI process,” began in May during a ministerial forum. The G7, comprising Canada, France, Germany, Italy, Japan, Britain, and the United States, along with the European Union, has been instrumental in its development.
The 11-point code seeks to “promote safe, secure, and trustworthy AI worldwide.” It offers voluntary guidance for organizations that are pioneering the most cutting-edge AI technologies, including advanced foundation models and generative AI systems. The overarching goal is to harness the potential of these technologies while addressing the inherent risks and challenges.
The code emphasizes the importance of companies taking proactive steps throughout the AI lifecycle. This includes identifying and mitigating potential risks and addressing any misuse once AI products are in the market. Furthermore, the code advocates for companies to release public reports detailing the capabilities and limitations of their AI systems and to invest in stringent security measures.
While the European Union has been proactive with its robust AI Act, other nations, including Japan and the United States, have opted for a more laissez-faire approach, prioritizing economic growth.
Vera Jourova, the European Commission’s digital chief, speaking at an internet governance forum in Kyoto, Japan, earlier this month, highlighted the significance of the code of conduct. She described it as a foundational step toward ensuring safety and a bridge to future regulation.