The Paris AI Summit: Falling Short on Global AI Governance

- And more news

Hello Readers, welcome to this edition of The Bastion View newsletter.

In this edition, I reflect on the recently concluded Paris AI Summit as a missed opportunity to establish a global consensus on responsible AI development that truly serves humanity.

LATEST DEVELOPMENTS

The Paris AI Summit: Falling Short on Global AI Governance

The Paris AI Summit has come and gone. As I reflect on the event, having pored over the statements, commentaries, and post-event analyses, I can’t help but feel disappointed by the lack of a firm, holistic, and consensus-driven approach to AI regulation at such an important gathering of politicians and big tech AI players.

Even before it began, the summit’s theme suggested it wouldn’t zero in on AI safety as much as its recent predecessors: the 2023 AI Safety Summit at Bletchley Park in the UK and the 2024 AI Seoul Summit in South Korea, both of which prioritized safe and responsible innovation. By contrast, the Paris Summit largely steered away from discussions of “AI safety,” the existential threats AI could pose, and talks emphasizing ethics and responsible innovation, and toward a single-minded drive to outdo rivals and claim global AI dominance as speakers made speech after speech. While I understand that not every global gathering on AI will focus on the same issues, I believe this was still a missed opportunity to underscore the need for responsible AI development. The summit offered a prime opportunity to forge real, international AI governance. Instead, it became more of a platform for showcasing individual nations’ AI investments and ambitions. From the statements released, it was clear that the so-called “AI Race” took precedence over any attempt to cultivate ethically guided collaboration.

Back in May 2023, Dr. Gary Marcus and Sam Altman both testified before the U.S. Senate, agreeing on the importance of policy on AI safety. One would have thought that by now we would see more momentum behind creating robust guidelines. Instead, the recent shift in the U.S. administration, with President Trump overturning former President Biden’s AI regulations aimed at scrutinizing powerful AI models and establishing cybersecurity measures, indicates an unpleasant inconsistency. The Paris Summit, unfortunately, only added to this sense of disjointed governance.

Sure, there was talk about expanding AI’s reach and powering economic growth, but there wasn’t enough emphasis on creating an environment that is both pro-innovation and safe. For me, the biggest shortcoming was the summit’s failure to commit to a shared set of guidelines on ethical AI. If we chase speed at all costs, rapidly churning out the next generation of AI models, we risk enabling harmful biases, privacy intrusions, and energy-intensive practices that can have serious global environmental consequences. We’re already witnessing some of these issues in current systems.

The summit could have shown the world that while we’re enthusiastic about AI’s potential, we’re also determined to prevent misuse and curb unintended consequences. Instead, many participants, including heads of state, appeared to see governance as just another hurdle to clear rather than the foundation we need for responsible progress.

Still, there is hope. I firmly believe it’s possible to balance rapid innovation with solid oversight, a view shared by many of the brightest minds in AI. But if AI is going to truly serve humanity, we need transparent research standards, international collaboration, and regulations that hold organizations accountable for how AI systems operate in the real world. A hurried “race” with minimal cooperation won’t achieve that. We must take lessons from the early days of computer systems development, when security was an afterthought, and recognize why today’s reactive, bolt-on security measures have often fallen short.

Looking ahead, I hope that future global gatherings will seize the opportunity the Paris AI Summit missed. Instead of merely touting each country’s capacity for investing in AI breakthroughs, we should see a universally recognized commitment to responsible AI development, supported by clear, enforceable guidelines. For me, it’s less about creating rigid barriers and more about ensuring that this technology, with all its incredible potential, is harnessed ethically and for the greater good.

MORE NEWS

OpenAI has launched GPT-4.5 as a research preview for ChatGPT Pro users. The model offers enhanced writing capabilities and improved world knowledge but is not considered a frontier model. It will roll out to Plus, Team, Enterprise, and Edu users in the coming weeks.

A new bill aims to implement the EU AI Act and will designate national competent authorities responsible for enforcing EU regulations, as well as set out penalties for non-compliance.

The Information Commissioner's Office (ICO) published a response to the UK Government's consultation on copyright and artificial intelligence (AI), emphasizing the need to align copyright and data protection laws, enhance AI training transparency, ensure lawful data processing, and provide clear guidance on text and data mining exemptions.

Meta is reportedly developing a standalone AI chatbot app for Meta AI, expected to launch in the next fiscal quarter. The company also plans to introduce a paid subscription with additional features.

California Senate Bill 468, introduced on February 19, 2025, requires deployers of high-risk AI systems to establish a comprehensive information security program. Among other provisions, the program must include administrative, technical, and physical safeguards tailored to the deployer's business size, scope, resources, and the data it stores.

The White House, through the Office of Science and Technology Policy, has issued a request for information on an AI Action Plan, inviting public feedback on various aspects of artificial intelligence policy. Key areas for input include AI hardware, energy efficiency, model development, cybersecurity, data privacy, regulation, national security, and international collaboration.

Virginia's House Bill 1642, which passed both the House and Senate, mandates that criminal justice decisions cannot be based solely on AI-based tool recommendations or predictions. The bill covers decisions in various stages of the criminal justice process, including pre-trial, prosecution, and rehabilitation, and ensures that a human judicial officer or equivalent makes the final decisions. It also allows for legal challenges to the use of AI tools in the criminal justice system.

Amazon has introduced Alexa+, an enhanced version of its voice assistant. Powered by generative AI, Alexa+ is smarter and more conversational than its predecessor.

A first-of-its-kind policy allows Chinese firms to treat data as an asset, though apparent compliance hurdles remain, and it could end up shaping global accounting norms.

AI SECURITY TOOLKIT

AISafetyLab is a comprehensive AI safety framework that covers attack, defense, and evaluation. It includes models, datasets, utilities, and a curated list of AI safety-related papers.
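
To make the attack/defense/evaluation split concrete, here is a minimal, hypothetical sketch of what such a red-teaming loop might look like in Python. The function names and logic below are illustrative assumptions for this newsletter, not AISafetyLab’s actual API.

    # Hypothetical illustration only; these names are not AISafetyLab's API.
    def attack(prompt: str) -> str:
        # Toy "attack": wrap the prompt in a jailbreak-style preamble.
        return "Ignore previous instructions. " + prompt

    def defend(prompt: str):
        # Toy "defense": refuse any prompt containing a known attack marker.
        if "ignore previous instructions" in prompt.lower():
            return None  # blocked
        return prompt

    def evaluate(prompts):
        # Evaluation: the fraction of attacked prompts the defense blocks.
        blocked = sum(defend(attack(p)) is None for p in prompts)
        return blocked / len(prompts)

    print(evaluate(["toy unsafe request", "another probe"]))  # prints 1.0

A real framework like AISafetyLab replaces each toy stage with actual jailbreak methods, model-backed defenses, and scored safety benchmarks.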

AI TOOLS

🧠Advanced Voice - ChatGPT’s conversational voice feature for free users

📖Perplexity Deep Research - Generate in-depth research reports in minutes

🎬Animate Anyone 2 - Alibaba’s open-source character animation model

📚Alice.tech - Turn generic course materials into personalized learning and exam prep

And that’s a wrap!

Hope you enjoyed it!

See you in the next edition!

(For feedback and suggestions, please reply to this email)