Welcome to our #1 edition of The Bastion View

🧠OpenAI unveils o1, LLM that “thinks and more

Hello Readers,

Welcome to the maiden edition of The Bastion View newsletter. We are excited you are joining us on this journey.

In this edition, we discuss OpenAI’s unveiling of a “thinking” LLM, Clearview AI’s trouble with regulators, and more:

  • 🧠OpenAI unveils o1, LLM that “thinks

  • 💼Dutch regulator fines Clearview AI £25.6 million over illegal facial recognition database

  • ⚕️AI in healthcare: Promise and peril?

  • 🏛️The White House AI infrastructure initiative

  • 💰Open AI raising more funds at a $150 billion valuation

  • 📊 Business roundup

  • 💼 AI and Security jobs

  • 🛡️ More AI and Security news

  • 🔐 AI Security toolkit

  • 📅 Upcoming events

Read time: 4 time

LATEST DEVELOPMENTS

Source Reuters

OpenAI has unveiled o1, a new large language model designed to "think" before responding. This release introduces o1 Preview and o1 Mini, immediately available in ChatGPT, claiming significant improvements over GPT-4 in reasoning tasks.

Context:

  • o1 reportedly outperforms GPT-4 on reasoning benchmarks

  • The model's "thinking time" enhances safety features and guardrails

  • OpenAI acknowledges new risks, rating o1's persuasion capability as "medium risk"

  • Training data and environmental impact details remain undisclosed

  • The release coincides with OpenAI's ongoing funding negotiations

Why it matters:

The unveiling of OpenAI's o1 model marks a significant milestone in AI development, showcasing advancements in problem-solving capabilities that mimic human cognition.

However, this progress comes with a responsibility to maintain scientific integrity and transparent communication. While OpenAI touts o1's "thinking" abilities, a marketing gimmick, experts have cautioned against anthropomorphizing AI systems. The timing of the release during funding negotiations and the absence of peer review underscores the need for cautious interpretation and thorough examination of AI breakthroughs.

Important to note also that this comes after OpenAI and Anthropic had agreed to let the US government access major new AI models before release to help improve their safety in a signed memorandums of understanding with the US AI Safety Institute to provide access to the models both before and after their public release.

💼Dutch regulator fines Clearview AI £25.6 million over illegal facial recognition database

Photo source: Digit News

The Dutch Data Protection Authority (DPA) has imposed a €30.5 million fine on U.S.-based facial recognition firm Clearview AI for multiple violations of the General Data Protection Regulation (GDPR).

Context:

  • Illegal data collection: Clearview AI amassed a database of over 30 billion photos scraped from the internet without consent, including images of Dutch citizens. The DPA declared this practice illegal under GDPR, noting that collecting and using such data is prohibited.

  • Privacy violations: The regulator highlighted Clearview AI's lack of transparency regarding the use of individuals' photos and biometric data. The company failed to inform people that their images were being used and did not comply with requests from individuals seeking access to their own data.

  • Potential additional penalties: The DPA has ordered Clearview AI to cease its privacy violations. If the company fails to comply, it could face an additional penalty of up to €5 million (£4.2 million).

  • Management liability: The Dutch watchdog is considering holding the company's management personally liable for directing these violations, which could result in personal fines.

  • Previous legal battles: This development comes nearly a year after Clearview AI successfully appealed a £7.5 million fine from the UK's Information Commissioner's Office (ICO), where it was ruled that the company did not process data related to monitoring individuals in the UK.

Why it matters:

This case underscores the complex challenges at the intersection of AI, cybersecurity, and privacy rights as well as the growing regulatory scrutiny on companies leveraging technological advancement in AI-driven facial recognition and the imperative to protect individual privacy rights under laws like the GDPR.

It also makes a strong case for regulations on AI systems that collect and process personal data. Organizations operating in the AI and biometric data sectors must prioritize compliance and transparency to avoid substantial penalties and uphold public trust.

⚕️AI in healthcare: Promise and peril?

Photo source: The Nuffield Council on Biotics

The integration of artificial intelligence into healthcare holds immense potential, particularly in the realm of AI-powered genomic health prediction (AIGHP). However, recent insights suggest that this technology isn't ready for widespread adoption.

Context:

A report by the Nuffield Council on Bioethics and the Ada Lovelace Institute cautions against the premature deployment of AIGHP. While the UK government is funding projects to integrate AIGHP into preventive medicine, the report highlights several challenges:

  • Accuracy and reliability: Current AIGHP systems lack consistent accuracy across different populations and disease types. Predominantly European genetic datasets result in poorer performance for non-European individuals.

  • Ethical and privacy concerns: The sensitive nature of genetic data raises risks of data breaches and misuse. There's a heightened potential for genomic discrimination, particularly in insurance practices.

  • Resource allocation risks: An overreliance on AIGHP could divert funding from traditional medical approaches that remain valuable.

The report recommends establishing minimum standards for accuracy, enhancing regulations on data privacy, and enacting laws against genomic discrimination.

Why it matters:

The integration of AI into healthcare must be handled with caution. Without rigorous testing and ethical safeguards, AIGHP and similar AI innovations in healthcare could inadvertently cause harm, exacerbate inequalities, and undermine public trust. Balancing innovation with responsibility ensures that we harness AI's potential to improve lives while navigating the associated risks effectively.

Source: The White House

As current electricity grids struggle to keep up with demand, the Biden-Harris Administration has launched a comprehensive strategy to strengthen U.S. AI Infrastructure and maintain U.S. leadership in artificial intelligence (AI) development and infrastructure. This initiative brings together key players from the tech industry, government, and energy sectors to address the critical challenges of AI infrastructure development.

Context:

High-Level Roundtable: Senior White House officials, including Chief of Staff Jeff Zients, National Economic Advisor Lael Brainard, and National Security Advisor Jake Sullivan, convened with leaders from major tech companies like Nvidia, OpenAI, Anthropic, Google, Microsoft, and Amazon. The meeting focused on strategies to meet clean energy, permitting, and workforce requirements for developing advanced AI datacenters in the U.S.

Task Force Objectives: Led by the National Economic Council, National Security Council, and the Deputy Chief of Staff's office, the Task Force will coordinate policies to advance datacenter development aligned with economic, national security, and environmental goals. It aims to streamline coordination across government agencies, identify opportunities, ensure adequate resourcing, and prioritize AI datacenter projects crucial to national interests.

Supporting actions:

  • Permitting processes: The Administration will enhance technical assistance to federal, state, and local authorities handling datacenter permitting. The Permitting Council will work with developers to set comprehensive timelines and accelerate evaluations for clean energy projects supporting datacenters.

  • Department of Energy initiatives: The DOE is creating an AI datacenter engagement team to leverage programs—including loans, grants, tax credits, and technical assistance—to support datacenter development. It will also host convenings with stakeholders to drive innovative solutions and share resources on repurposing closed coal sites for datacenters.

  • Industry commitments: Tech leaders reaffirmed their commitments to achieving net-zero carbon emissions and procuring clean energy for their operations. They also pledged to enhance cooperation with policymakers through ongoing dialogue and collaboration.

Why it matters:

This initiative marks a significant advancement in the U.S. strategy to maintain its edge in AI technology. By actively shaping the infrastructure needed for advanced AI operations, the government is ensuring that AI systems are developed and operated domestically, bolstering national security and economic interests.

BUSINESS ROUNDUP

In July, widespread reports indicated that OpenAI is on track to spend approximately $5 billion. It currently has no profits. It is therefore no surprise OpenAI is now raising cash. Bloomberg reports that the AI giant is discussing raising $6.5 billion in equity financing and in talks with banks to provide a $5 billion credit line. The new valuation, excluding the money currently being raised, significantly exceeds the company's earlier $86 billion valuation from its tender offer this year, cementing its position as one of the world's most valuable startups.

💰OpenAI's ChatGPT has reportedly achieved a significant financial milestone, with COO Brad Lightcap revealing that the AI platform has surpassed 11 million paying subscribers. This user base includes 1 million subscribers on higher-priced business plans, potentially generating over $2.7 billion in annual revenue.

Mastercard agreed to acquire AI-powered threat intelligence company Recorded Future for $2.65 billion, aiming to enhance its cybersecurity capabilities.

As part of its plans to accelerate the roadmap for Cisco Security Cloud, Cisco has announced plans to acquire Robust Intelligence, a security startup with a platform designed to protect AI models and data throughout the development-to-production lifecycle. The acquisition will enable it integrate the Robust Intelligence's AI security platform with Cisco Security Cloud to streamline threat protection for AI applications and models and increase visibility into AI traffic.

More business and M&A news here

AI AND SECURITY JOBS

Tiktok: Security Analytics Data Scientist

JPMorganChase: Principal Cybersecurity Architect - AI ML Security

EY: TC-CS-Cyber Architecture-OT and Engineering-Security Architect AI-Senior

Microsoft: Cambridge Residency Programme – Postdoctoral Researcher in AI Security and Privacy

MORE NEWS

OpenAI and Anthropic have agreed to let the US government access major new AI models before release to help improve their safety.

Google DeepMind has unveiled two advanced AI systems that significantly enhance robotic dexterity, enabling machines to perform complex tasks requiring precise and skillful movements—such as tying shoelaces and hanging shirts.

According to technological research and consulting firm Gartner, 40% of GenAI (generative artificial intelligence) solutions will be multimodal by 2027—a significant increase from 2023’s figure of 1%.

A new international agreement, which is the first legally-binding treaty governing the safe use of artificial intelligence (AI), has been signed in the UK. The new convention, agreed by the Council of Europe, commits parties to collective action on managing AI products and protecting the public from misuse and risks.

AI SECURITY TOOLKIT

📜Explore the OWASP Top 10 for LLM Applications here. It outlines the most critical security risks associated with AI-driven applications that utilize large language models. These risks include prompt injection attacks, where malicious inputs manipulate model outputs; data poisoning, where corrupted training data impacts the model's behavior; model theft, involving the extraction of intellectual property; and insufficient access controls that expose sensitive functionalities.

Additionally, there are concerns over inadequate privacy controls, which may result in leakage of sensitive data, and bias or fairness issues, which can propagate discrimination. The Top 10 list also highlights the importance of secure deployment practices, transparency, and ongoing monitoring to mitigate evolving security challenges in LLM-based systems.

🌍Information Security Magazine: Navigating the Global AI Regulatory Landscape: Essential Insights for CISOs

UPCOMING EVENTS

Gartner Security & Risk Management Summit: 23 – 25 September 2024 | London, U.K.

Explore brochure here

And that’s a wrap!

We would love to hear from you. Please, tell us what you think about the newsletter by replying to this email.

See you in the next edition.