🛡️OpenAI’s five key steps forward

PLUS: New study suggests LLMs like GPT-4 may mimic human memory and more

Hello readers!

Welcome to The Bastion View, where we dive into the latest trends and insights at the crossroads of AI, security, and privacy, helping you decode emerging threats and opportunities and navigate the AI space with confidence and foresight.

In this edition: OpenAI seeks to build trust in five key steps; a new FTC report reveals how social media platforms have been collecting and indefinitely retaining troves of data from and about users and non-users in ways consumers might not expect; could AI be heading towards a catastrophic collapse; and more.

  • 🛡️OpenAI’s five key steps forward

  • 🧠New study suggests LLMs like GPT-4 may mimic human memory

  • 🤖Meta gears up to use your posts for AI training

  • ⚠️Are we heading towards an AI catastrophic collapse?

  • 🏛️UN enters the AI regulation space with a non-binding global framework for AI governance

  • 🖋️California Governor signs five AI bills, but SB 1047 still in limbo

  • 📊 Business roundup

  • 💼 Careers 

  • 🔐 AI Security toolkit

  • 📰 And more news

Let’s get to it!

Read time: 4 minutes

TOP OF THE NEWS

🛡️Safety and security at OpenAI: Five key steps forward

Source: OpenAI

As the storm at OpenAI appears to be calming after several months marked by the departures of founders and other key personnel, including safety researchers such as Jan Leike, the AI giant has launched a number of trust-building initiatives. One of these is the establishment of a Safety and Security Committee, which has conducted a 90-day review of safety processes and provided recommendations to improve governance and security across the organization. These recommendations span five core areas, now adopted by OpenAI to shape the future of AI model development and oversight.

The key recommendations:

  • Independent governance: A dedicated, independent Board committee chaired by experts in AI and security will oversee safety evaluations and model launches.

  • Enhanced security measures: Expanded cybersecurity operations, threat intelligence sharing, and new internal safeguards will mitigate emerging AI risks.

  • Transparency: OpenAI commits to sharing detailed safety assessments through system cards and other transparent practices.

  • Collaboration with external bodies: Partnerships with independent labs and governments will ensure broader testing and the establishment of industry safety standards.

  • Unified safety framework: A new integrated safety framework will ensure rigorous oversight for future models as their capabilities grow.

Why it matters:
Following a period of internal turmoil, OpenAI is not only working to restore trust and stability but also setting a precedent for the entire AI industry. These actions highlight the growing importance of ethical considerations and proactive governance in AI, potentially influencing future regulations and industry standards.

🧠New study suggests LLMs like GPT-4 may mimic human memory

Photo source: DALL-E

A groundbreaking study by Hong Kong researchers reveals that large language models (LLMs), such as GPT-4, exhibit dynamic memory similar to human cognition, challenging long-held beliefs about AI capabilities and narrowing the gap between artificial and human intelligence.

The context:
Departing from the traditional view of memory as static information storage, the researchers defined memory in LLMs as the ability to generate responses based on specific inputs. Their findings suggest that both human brains and LLMs rely on "dynamic fitting," adjusting responses to inputs rather than retrieving fixed information.

Why it matters:
If AI memory operates similarly to human memory, the distinction between human and artificial cognition may be less significant than previously thought. This could mean scaling AI capabilities comes down to improving hardware and training data, pushing the boundaries of what AI can achieve without fundamental differences from human intelligence.

🤖Meta gears up to use your posts for AI training

Source: Digit News

Although European and U.K. regulators temporarily halted the effort, Meta is now moving forward with plans to harness user content from its platforms to train its generative AI models. The company previously announced that leveraging the vast amount of user-generated content across Facebook and Instagram would give its AI a competitive edge.

Meta says following “positive” engagement with the Information Commissioner’s Office (ICO), it will now use public posts shared by adults across its platforms, aiming to create AI models that better reflect British culture and language. The company claims this will benefit U.K. businesses and institutions by offering AI tools tailored to local nuances. 

While Meta insists that private messages and content from users who opt out won’t be used, many have raised concerns, particularly in the U.S., where objections have been largely overlooked. The ICO has acknowledged that Meta has made it easier for users to object to this practice in the U.K. 

Meanwhile, Meta’s global privacy director, Melinda Claybaugh, admitted to Australian authorities that the company has already scraped public posts from Australian adult accounts—dating as far back as 2007—without an opt-out option. This includes photos of children posted on these accounts, raising further privacy concerns. 

Privacy advocates, including the Open Rights Group (ORG) and None of Your Business (NOYB), have expressed serious concerns about Meta’s plans, with both calling on the ICO and the EU to intervene and halt these practices. The plans remain on hold in Europe.

In the meantime, a new 129-page report from the U.S. Federal Trade Commission (FTC) has revealed the many ways social media platforms have been gathering and using all manner of data about their users and non-users, feeding that data into their algorithms and automated systems without giving users a choice about the harvesting. According to the FTC, legislation is “badly needed” to mitigate the harms of automated on-platform decision-making and to enshrine legitimate data privacy rights for U.S. citizens.

Why it matters:

Meta's move to utilize user data and user-generated content for training its AI models brings significant privacy and ethical issues to the forefront and underscores the ongoing tension between technological innovation such as AI and individual privacy rights.

The FTC's recent report further emphasizes the broader industry pattern of data harvesting without user choice. By collecting and using personal data without explicit consent, social media platforms are potentially violating data protection regulations like the GDPR. The situation highlights the need for clear legal frameworks governing how companies can use personal data for AI development. It also raises concerns about transparency and user autonomy, as individuals often lack control over how their data is collected and used. 

⚠️AI catastrophic collapse?

A recent study published in Nature has brought to light a significant challenge facing AI models, particularly large language models (LLMs) like GPT-4, termed "model collapse." This phenomenon occurs when AI models are trained on a dataset increasingly composed of AI-generated content rather than diverse human-created texts. Over time, this dilutes the original richness and diversity of the data, resulting in models whose outputs grow increasingly homogeneous and less accurate. The researchers conducted several experiments demonstrating that models trained recursively on AI-generated data degrade significantly, losing the ability to produce varied and complex outputs.
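As a rough intuition only (this is a toy sketch, not the methodology of the Nature study), the following Python snippet mimics recursive training: each "generation" fits a simple model to samples drawn from the previous generation's model instead of the original human data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data, a mixture with a small but important second mode.
data = np.concatenate([rng.normal(0.0, 1.0, 900), rng.normal(4.0, 1.0, 100)])

SAMPLE_SIZE = 20       # each generation keeps only a small sample of the previous model's output
N_GENERATIONS = 100

for gen in range(1, N_GENERATIONS + 1):
    # "Train" a toy generative model: fit a single Gaussian to the current data.
    mu, sigma = data.mean(), data.std()
    # The next generation's training data comes from the previous model,
    # not from the original human-written corpus.
    data = rng.normal(mu, sigma, SAMPLE_SIZE)
    if gen == 1 or gen % 20 == 0:
        print(f"generation {gen:3d}: mean={mu:5.2f}  std={sigma:4.2f}")
```

In runs like this, the minority mode is averaged away as soon as a single Gaussian is fitted, and the standard deviation tends to drift toward zero over the generations: the distribution narrows and diversity is lost, which is the core intuition behind model collapse.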

Why it matters:
The risk of model collapse is a critical concern as it directly impacts the reliability and functionality of AI systems across various applications. If AI models become echo chambers, repeating their own interpretations without the diverse input needed for accurate predictions and responses, their utility and effectiveness could drastically decrease. This could undermine trust in AI-driven technologies and stifle innovation in sectors reliant on AI for data interpretation and decision-making.

🏛️UN proposes global framework for AI governance

The United Nations' AI advisory body has released a report with seven key recommendations to address AI-related risks and governance gaps. This initiative comes as AI's rapid spread raises concerns about misinformation and copyright infringement.

Key points:

  • Establishment of an impartial panel for scientific knowledge on AI

  • Creation of a global AI fund and data framework

  • Proposal for an AI standards exchange and capacity development network

  • Suggestion for a dedicated AI office within the UN

The recommendations aim to balance innovation with responsible development, addressing the concentration of AI power in a few multinational companies. This move follows varied international approaches to AI regulation, from the EU's comprehensive AI Act to the US's voluntary compliance model.

🖋️California Governor signs five AI bills, but SB 1047 still in limbo

Source: The Register

California Governor Gavin Newsom has signed five AI-related bills into law, marking a significant step in regulating artificial intelligence at the state level. These new laws primarily focus on combating deepfake election content and protecting performers' rights in the digital age.

Key legislation:

Election Integrity: Three bills target deepfake election content, mandating disclosures in advertisements and requiring social media platforms to label or remove synthetic election-related content close to elections.

Performer protection: Two bills safeguard actors and performers from unauthorized synthetic replication of their voices and likenesses.

Pending decision: SB 1047, a controversial bill opposed by major tech companies and venture capitalists, remains unsigned. At a Salesforce conference, Newsom expressed concerns about its potential "chilling effect" on the open-source AI community. The governor has until month-end to make a decision.

This legislative package reflects California's proactive approach to addressing AI's societal impacts, balancing innovation with public trust and individual rights. The outcome of SB 1047 could significantly influence the state's AI ecosystem and potentially set precedents for national AI policy.

BUSINESS ROUNDUP

💼Sakana AI, a new AI R&D company based in Tokyo, Japan, has secured approximately $200 million in Series A funding from Japanese companies to accelerate AI development and market expansion.

⚙️BHP warns that AI growth will worsen the copper shortfall, expecting global demand for the metal to rise by more than 72% by 2050.

🛡️Security validation firm Picus Security has raised $45 million.

🔒C/side has raised $6 million in a seed-stage funding round to help organizations protect against malicious browser third-party scripts.

🖥️Operant AI, a startup specializing in runtime protection for cloud applications, APIs, and AI systems, has secured a new $10 million investment.

💼More funding news here.

AI AND SECURITY JOBS

Liberty Global: GSEC AI ML Security Architect

Barclays: AI Security Engineer

Empower: Cybersecurity Data Scientist

MORE NEWS

📱Apple Intelligence is now available in public betas

Apple has rolled out public betas for iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, introducing new Apple Intelligence features such as text rewriting and photo cleanup. These AI tools are available exclusively on the iPhone 15 Pro, iPhone 16, iPhone 16 Pro, and M1-powered iPads and Macs. The final releases are anticipated in October. 

Google has unveiled the Open Buildings 2.5D Temporal Dataset, an AI-driven tool that tracks building changes across the Global South from 2016 to 2023, providing insights into building presence, counts, and heights. Google continues to make strides with its AI models, leveraging the technology to help design smarter cities and mitigate environmental disasters.

As governments and the private sector continue to support AI innovation with adequate infrastructure, Global Infrastructure Partners, BlackRock, Microsoft, and MGX have launched a new AI partnership to invest in data centers and supporting power infrastructure. The $100 billion investment potential “will enhance American competitiveness in AI while meeting the growing need for energy infrastructure to power economic growth,” according to the partners.

LinkedIn may have trained its AI models on user data without updating its terms of service. In the U.S., LinkedIn users (but not those in the EU, EEA, or Switzerland, likely due to stricter data privacy regulations) have access to an opt-out toggle in their settings revealing that LinkedIn uses personal data to train “content creation AI models.” The toggle isn’t new, but, as first reported by 404 Media, LinkedIn initially failed to update its privacy policy to reflect this data usage, an update that would ordinarily be made well before a significant change like putting user data to a new purpose.

Lionsgate, the studio behind The Hunger Games, John Wick, and Saw, has partnered with AI video generation company Runway to develop a custom AI model trained on its extensive film catalog.

⚠️OpenAI cautions users against exploring the reasoning processes of its new o1 AI models, warning that policy violations could result in account bans.

🧠Alibaba has released Qwen 2.5, a multilingual AI model with 72B parameters, rivaling larger models in performance across various benchmarks.

👨‍👩‍👧Instagram rolls out teen accounts with privacy, parental controls

🏢Microsoft and G42 set up new AI centers in Abu Dhabi

👓Snap Inc. has introduced its fifth-generation Spectacles, standalone AR glasses running on the new Snap OS, featuring enhanced AI capabilities to elevate social interactions through augmented reality.

🏭Lenovo to make AI servers in India, opens new AI-centric lab

⚙️Google AI Studio has launched a new model comparison feature, allowing users to easily compare outputs from different AI models and parameter settings.

AI SECURITY TOOLKIT

🔐Securing LLM Backed Systems: Essential Authorization Practices

With the rapid integration of LLM-powered systems, there is an urgent need for formal guidance on designing them securely, especially when LLMs are used to make decisions or interact with external data sources. This document by the CSA describes the LLM security risks relevant to system design and provides guidance to build systems that utilize the powerful flexibility of AI while remaining secure.

Key takeaways:

  • LLM security measures and best practices

  • Authorization considerations for the various components of LLM-backed systems

  • Security challenges and considerations for LLM-backed systems

  • Common architecture design patterns for LLM-backed systems
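
As one illustration of the authorization considerations above, here is a minimal, hypothetical Python sketch (the tool names and permission table are illustrative, not taken from the CSA document). The LLM may request a tool call, but the surrounding application, not the model, decides whether the end user is actually permitted to run that tool.

```python
from dataclasses import dataclass, field

# Hypothetical permission table: which tools each user may invoke.
USER_PERMISSIONS = {
    "alice": {"search_tickets", "read_kb"},
    "bob": {"read_kb"},
}

@dataclass
class ToolCall:
    user: str                # the authenticated end user on whose behalf the LLM is acting
    tool: str                # the tool the model asked to run
    arguments: dict = field(default_factory=dict)

def execute_tool_call(call: ToolCall) -> str:
    """Enforce authorization outside the model before dispatching any tool."""
    allowed = USER_PERMISSIONS.get(call.user, set())
    if call.tool not in allowed:
        # Deny by default: never rely on the model (or the prompt) to police itself.
        return f"Denied: user '{call.user}' is not authorized to call '{call.tool}'."
    # ... dispatch to the real tool implementation here ...
    return f"Executed '{call.tool}' for '{call.user}' with arguments {call.arguments}."

# Example: the model requested 'search_tickets' on behalf of bob, who lacks that permission.
print(execute_tool_call(ToolCall(user="bob", tool="search_tickets", arguments={"query": "refund"})))
```

The design point is that authorization is enforced deterministically by the application layer, so a confused or prompt-injected model cannot grant itself permissions the calling user does not have.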

 

🧑‍💻LLM Mastery: Hands-on Code, Align and Master LLMs

Want to understand LLM architecture and deep learning? This Udemy course teaches you to code an LLM and understand its deepest secrets, takes you on a deep dive into deep learning, and helps you gain practical skills that will make you the AI guru in any room.

And that’s a wrap!

We would love to hear from you. Please, tell us what you think about the newsletter by replying to this email.

See you in the next edition.