Artificial intelligence (AI) is rapidly transforming our societies, offering unprecedented opportunities alongside significant challenges. For the first time, a group of 100 experts from 33 countries, led by Quebec researcher Yoshua Bengio, has published an international report on the safety of advanced AI. This comprehensive document aims to give governments a shared, evidence-based understanding of the risks of this rapidly expanding technology and of the options for managing them.
The potential and risks of AI
“While it holds great potential for society, AI also presents significant risks that must be carefully managed by governments worldwide,” emphasizes Yoshua Bengio, stressing the urgency of international governance.
Rapid progress in general AI
The report highlights the rapid progress of general-purpose AI. Five years ago, language models struggled to produce coherent text. Today, systems such as GPT-4o, Gemini 1.5, or Claude 3.5 can write articles, code complex software, and even tackle advanced scientific problems. In one recent study, for example, GPT-4o generated a detailed scientific paper on quantum computing, a measure of how far the capabilities of modern AI systems have come.
Autonomous agents and the risk of losing control
A particularly worrying trend is the development of autonomous agents capable of making decisions without human supervision. According to the report, these systems, still in their early stages, could “perform tasks over long periods, plan and coordinate actions with other AIs,” increasing the risk of a loss of human control.
Major risk areas related to AI
The report identifies three major risk areas related to AI:
- Malicious Uses of AI
AI can be exploited by cybercriminals, hostile states, or groups with dubious intentions. AI is now capable of generating ultra-realistic content, making disinformation more effective than ever.
Example: In January 2024, an AI-generated robocall imitating U.S. President Joe Biden urged New Hampshire voters not to vote in the state's primary, illustrating how easily deepfakes can manipulate public opinion. The report also highlights that AI can enhance cyberattacks: models like LLaMA 3 can identify security flaws in minutes, increasing the risk to critical infrastructure; the sketch below shows how simple such automated code review has become. More worryingly, recent experiments have shown that some advanced AIs could provide detailed instructions for designing biological and chemical weapons.
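To make the dual-use point concrete, here is a minimal sketch of asking an openly available model to review code for security flaws. Everything in it is an assumption for illustration, not something from the report: it presumes a local Ollama server on its default port 11434 with the "llama3" model already pulled, and the vulnerable snippet is invented.

```python
# Illustrative sketch only: asking an open model (LLaMA 3 via a local
# Ollama server) to review a code snippet for security flaws.
# Assumes Ollama is running on its default port with "llama3" pulled.
import json
import urllib.request

# Invented example snippet containing a classic SQL injection flaw.
VULNERABLE_SNIPPET = '''
def get_user(conn, username):
    return conn.execute("SELECT * FROM users WHERE name = '" + username + "'")
'''

prompt = (
    "You are a security reviewer. List any vulnerabilities in this code "
    "and suggest fixes:\n" + VULNERABLE_SNIPPET
)

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({"model": "llama3", "prompt": prompt, "stream": False}).encode(),
    headers={"Content-Type": "application/json"},
)

# Ollama returns a JSON object whose "response" field holds the model's answer.
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```

The same one-prompt workflow that lets a defender triage flaws in minutes is equally available to an attacker scanning someone else's code, which is precisely the dual-use dynamic the report flags.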
- AI System Malfunctions
These can lead to biased or erroneous decisions. Already, algorithms used in the medical field have generated incorrect recommendations, endangering patients.
Example: In the U.S., an AI system used to assess recidivism risk among defendants, the COMPAS tool examined by ProPublica in 2016, was accused of systemic racial bias: among people who did not go on to reoffend, Black defendants were far more likely to have been labeled high risk (the audit sketch below shows how such a disparity is measured). AI can also amplify stereotypes, sway business or political decisions, and produce errors that are hard to detect given the opacity of these systems. The report further warns that humans could lose control of highly autonomous AIs; while this scenario remains hypothetical, some experts believe it could become reality within the coming decades.
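For readers who want to see what "systemic bias" means operationally, here is a minimal audit sketch in the spirit of ProPublica's 2016 COMPAS analysis. The records are fabricated for illustration; the metric, the false positive rate per group (how often people who did not reoffend were still labeled high risk), is the one at the heart of that controversy.

```python
# Minimal fairness-audit sketch with made-up data.
# Each record is (group, predicted_high_risk, actually_reoffended).
from collections import defaultdict

records = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

# False positive rate per group: share of people who did NOT reoffend
# but were still labeled high risk by the model.
counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, predicted_high_risk, reoffended in records:
    if not reoffended:
        counts[group]["negatives"] += 1
        if predicted_high_risk:
            counts[group]["fp"] += 1

for group, c in sorted(counts.items()):
    print(f"group {group}: false positive rate = {c['fp'] / c['negatives']:.2f}")
```

With this toy data, group A's false positive rate is twice group B's (0.67 versus 0.33), which is the shape of the disparity ProPublica reported.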
- Systemic Risks of AI
These could profoundly disrupt the economy and society. One major concern is the impact on the job market: according to an MIT study, AI could threaten up to 40% of current professions by 2035. The rapid automation of many tasks risks increasing unemployment, particularly in administrative and technical sectors.
The report also highlights the concentration of technological power in a few large companies, such as OpenAI, Google DeepMind, and Anthropic, which could lead to an oligopoly in which a handful of actors decide the future of AI. Another major issue is the colossal energy consumption of the most advanced models, like GPT-4: a single training run can reportedly require as much electricity as a city of 100,000 inhabitants uses in a year, raising serious environmental concerns (the back-of-envelope estimate below shows how figures of this kind are derived).
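Since training-energy claims like the one above are hard to evaluate at a glance, here is a back-of-envelope estimate showing how such figures are typically derived. Every number is an assumption chosen for illustration, not a figure from the report: GPU count, per-GPU draw, run length, and datacenter overhead all vary widely between labs.

```python
# Back-of-envelope training-energy estimate (all inputs are assumptions):
# energy ≈ number of GPUs × average power per GPU × training hours × PUE.
num_gpus = 25_000          # assumed accelerator count for a frontier run
gpu_power_kw = 0.4         # assumed average draw per GPU, in kilowatts
training_hours = 90 * 24   # assumed ~90-day training run
pue = 1.2                  # assumed datacenter power usage effectiveness

energy_gwh = num_gpus * gpu_power_kw * training_hours * pue / 1e6
print(f"Estimated training energy: {energy_gwh:.1f} GWh")  # ≈ 25.9 GWh
```

Even under these assumptions the total lands in the tens of gigawatt-hours, the scale at which comparisons to the annual consumption of entire cities begin to be made.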
Uncertainty and the need for regulation
One of the report's main conclusions is the deep uncertainty about how fast AI will evolve. Some researchers believe superintelligent AI systems could emerge within 5 to 10 years, while others think it will take several decades.
This uncertainty makes it difficult to implement appropriate regulations. “Policymakers will often have to make decisions with incomplete data, without waiting for irrefutable evidence of risks,” the document states.
Proposed measures
In response to these challenges, several measures are proposed:
- An international governance framework: Inspired by nuclear or biotechnology regulations, an international AI treaty could limit dangerous uses.
- Increased transparency of AI models: Requiring companies to publish security audits before launching new models.
- Strict regulation of open-weight models: Publicly accessible models, like those from Meta or Mistral, must include safeguards to prevent criminal misuse.
- Crisis management protocols: Creating rapid response teams to address AI-related incidents, similar to government cyber teams.
- Massive investment in secure AI research: Promoting more robust and controllable alternatives.
Conclusion
This report will serve as the basis for discussions at the AI Action Summit to be held in Paris on February 10 and 11, 2025, where heads of state and experts will attempt to establish a common roadmap. It marks a key step toward stricter AI governance and could shape future public policies. As Yoshua Bengio puts it: “We have a limited window of time to act. What is decided today will determine whether AI becomes an asset or a threat to humanity.” The future of AI may be uncertain, but one thing is clear: the choices made today will shape the world of tomorrow.
References:
https://www.gov.uk/government/publications/international-ai-safety-report-2025
https://yoshuabengio.org/2024/06/19/the-international-scientific-report-on-the-safety-of-advanced-ai/
Image generated with Designer
#yoshuabengio, #aireport, #parisaisummit, #aisafety, #futureofai