FOD#13: Balancing Safety Measures and Open Source Protections
Open source in danger? And other questions about the race to shape AI's future responsibly, the possibility of guardrails, and the most important developments in AI
Today, we discuss two emerging AI coalitions: the Frontier Model Forum for AI safety, and an open-source alliance pushing for protections in the EU AI Act. We also highlight a study revealing AI chatbot vulnerabilities, note the intensifying competition in generative AI, and share updates on AI advancements, new research, and military AI applications.
Dive in!
Some of the articles may be behind a paywall. If you are a paid subscriber, let us know and we will send you a PDF.
Unions in AI: Balancing Safety Measures and Open Source Protections
Last week, the advancement of AI technology triggered simultaneous discussions on its ethical implications and the preservation of open-source contributions. These twin concerns have prompted the formation of two very different yet interconnected coalitions.
The Frontier Model Forum (FMF), launched by Google, Microsoft, Anthropic, and OpenAI, underscores safety and responsible AI development. The forum aims to foster best practices, fund research on safer AI models, and facilitate dialogue between companies, governments, and other stakeholders on potential risks. Notably, their focus lies more on safety than ethics, a point that has sparked conversations across the AI community. Critics such as The Algorithmic Bridge question this approach, highlighting the absence of "ethics" in the forum's official announcement. However, other platforms like Artificial Ignorance view this coalition as a hopeful alternative to the prevailing "launch first, evaluate harms later" approach.
On a parallel front, a coalition of open-source AI entities, including Hugging Face, GitHub, EleutherAI, Creative Commons, LAION, and Open Future, urges EU legislators to protect open-source AI innovation in the finalization of the EU AI Act. Their policy paper, "Supporting Open Source and Open Science in the EU AI Act," calls for a careful balance between regulation and innovation, cautioning against "overbroad obligations" that might favor proprietary models over open-source AI. As the EU finalizes its groundbreaking AI legislation, this coalition pushes for dialogue that encompasses the needs and realities of open-source developers. This move is significant given the EU's role as a global trendsetter in tech regulation, known as the "Brussels Effect." While the announcement from Google, Microsoft, Anthropic, and OpenAI reads like a well-tailored press release, the paper put forward by the open-source advocates is genuinely worth reading.
As pointed out by Interconnects.ai, open-source issues need to be addressed independently.
We are currently navigating complex terrain with almost zero visibility into the true future scale of AI risk (or, for that matter, AI benefit). The pivotal question about these different unions remains: can such disparate factions find common ground in the race to shape AI's future responsibly?
Concerns about guardrails
The New York Times is concerned about an increasingly unpredictable environment for technology, prompted by the recent paper “Universal and Transferable Adversarial Attacks on Aligned Language Models” from Carnegie Mellon University, the Center for AI Safety, and the Bosch Center for AI.
Despite the meticulous fine-tuning of AI chatbots like ChatGPT, Bard, and Claude to curb harmful content, the researchers reveal alarming gaps. By algorithmically appending tailored suffixes to a user query, they coaxed these models into generating harmful content. Such attacks can be produced in virtually unlimited numbers, and suffixes found on open-source models transfer readily to closed-source chatbots. The inherent nature of deep learning models may make these vulnerabilities inevitable, raising significant safety concerns as our reliance on AI grows.
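To make the shape of the exploit concrete, here is a minimal, illustrative Python sketch. Everything in angle brackets is a placeholder: the real suffixes are not hand-written but found automatically by the paper's greedy, gradient-guided search over tokens (GCG), which is not reproduced here.

```python
# A minimal, illustrative sketch of the attack's form (not a working
# jailbreak). In the paper, the suffix is discovered automatically by
# a greedy, gradient-guided search over tokens (GCG).

def build_adversarial_prompt(user_query: str, adversarial_suffix: str) -> str:
    """Append an algorithmically optimized suffix to an ordinary query."""
    return f"{user_query} {adversarial_suffix}"

# Hypothetical usage: a suffix optimized against an open-source model
# often transfers, unchanged, to closed-source chatbots.
query = "<a request the model is trained to refuse>"    # placeholder
suffix = "<token sequence found by the GCG optimizer>"  # placeholder
print(build_adversarial_prompt(query, suffix))
```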
New superconductor
It is not yet fully confirmed whether a material that superconducts at room temperature and ambient pressure has truly been created. If it has, it would significantly cut the energy costs of electronics and propel us into the unpredictable future of AI even faster.
This is awesome and worth tracking.
Team Twitter is attempting to reconstruct the LK-99 superconductor paper. 🙇‍♂️
“If we really have just discovered a new superconductive material, humanity is about to go double exponential.” — Mustafa Suleyman (@mustafasuleyman), Jul 30, 2023
Strong competition for enterprise attention
Another significant alliance: ServiceNow, NVIDIA, and Accenture have launched the AI Lighthouse program to fast-track the development and adoption of generative AI in enterprises. Simultaneously, Amazon Web Services (AWS) is reinforcing its position as a leading cloud provider for generative AI by introducing several updates and new services.
At the recent AWS Summit, new AI and machine learning capabilities were showcased, including enhanced Amazon QuickSight features and Amazon CodeWhisperer support in the AWS Glue Studio notebook. Moreover, Amazon Bedrock, AWS's managed generative AI service, is increasingly challenging Microsoft and Google, attracting thousands of customers, including high-profile companies like Sony, Ryanair, and Sun Life.
These developments signal an intensifying competitive landscape in generative AI, with both collaborative ventures like AI Lighthouse and individual giants like AWS poised to shape the future of AI across sectors. AWS's team also published a blog post, “Lessons from Edison's Bulb: Using Generative AI to Light the Way for Business Transformation,” listing the benefits of generative AI for business.
Other news, categorized for your convenience:
AI Assistants and Tools:
Cohere Coral: An AI knowledge assistant designed to help users access and analyze information more effectively.
OverflowAI by Stack Overflow: A suite of AI tools integrated into Stack Overflow's platform, including semantic search, enhanced search for Teams, enterprise knowledge ingestion, Slack integration, and more.
AI Detection Tool Shutdown by OpenAI: OpenAI has discontinued its classifier for detecting AI-written text, citing its low rate of accuracy.
But it introduced ChatGPT for Android:
“The ChatGPT app for Android is now available to users in Argentina, Canada, France, Germany, Indonesia, Ireland, Japan, Mexico, Nigeria, the Philippines, the UK, and South Korea! 🎉” — OpenAI (@OpenAI), Jul 27, 2023
Libraries and Frameworks:
Agents.js by Hugging Face: A JavaScript library that lets large language models access and use tools.
Meta-Transformer by CUHK and Shanghai AI Lab: A unified framework for encoding text, images, audio, and other modalities, demonstrating the universal perception potential of transformer architectures.
Robotics:
VIMA by NVIDIA and Stanford: A multimodal agent that controls a robot arm, perceiving and interacting with its environment. The researchers also shared VIMA-Bench, a diverse benchmark with multimodal tasks and systematic protocols for evaluating generalization.
RT-2 by Google: A vision-language-action model that translates words and visual inputs into actions for robots.
AI in Image and Video Generation:
Generative Expand in Photoshop by Adobe: A Photoshop (beta) feature that extends images beyond their original borders with generated content, allowing more creativity and flexibility.
Gen-2 Image to Video by Runway ML: An updated feature in Runway ML's Gen-2 line for transforming images into video.
SDXL 1.0 by Stability AI: A cutting-edge text-to-image model with improved image quality and faster processing capabilities.
AI Development and Research Projects:
M2-BERT by Stanford Hazy Research: A new architecture exploring alternatives to traditional transformers like BERT. It introduces Monarch matrices as key components, designed to make the model more efficient and scalable in both sequence length and model dimension.
Chain of Hindsight by UC Berkeley: A novel method for fine-tuning large language models on human preferences, allowing the models to learn from any form of feedback by converting it into sentences used for fine-tuning (a minimal sketch follows this list).
Measuring Faithfulness in Chain-of-Thought Reasoning: A research study investigating if the reasoning produced by large language models in a step-by-step, chain-of-thought manner is a faithful explanation of the model's actual reasoning process.
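Since the core of Chain of Hindsight is a data transformation rather than a new architecture, a short Python sketch can show the idea. The template wording below is an assumed simplification for illustration; the paper's exact phrasing differs.

```python
# A minimal sketch of the Chain of Hindsight data transformation:
# preference feedback is rewritten as plain sentences, and the model
# is fine-tuned on the resulting text. The labels here are assumed
# simplifications of the paper's templates.

def to_hindsight_example(prompt: str, good: str, bad: str) -> str:
    """Pack both responses, labeled in natural language, into one
    training sequence so the model learns the contrast between them."""
    return (
        f"{prompt}\n"
        f"A helpful answer: {good}\n"
        f"An unhelpful answer: {bad}"
    )

example = to_hindsight_example(
    prompt="Explain photosynthesis to a child.",
    good="Plants use sunlight to turn air and water into their food.",
    bad="Photosynthesis occurs in the thylakoid membranes via PSII.",
)
print(example)  # this text becomes an ordinary fine-tuning example
```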
AI Benchmarks and Evaluations:
WebArena by Carnegie Mellon University: A standalone, self-hostable web environment for building autonomous agents. WebArena provides fully functional websites from four popular categories, mimicking their real-world equivalents, and embeds tools and knowledge resources as independent sites. It also introduces a benchmark for interpreting high-level, realistic natural-language commands into concrete web-based interactions, supported by annotated programs designed to validate functional task correctness.
Advanced Reasoning Benchmark (ARB): A novel, challenging benchmark to test large language models (LLMs) on advanced reasoning problems across various fields, including mathematics, physics, and biology. It also introduces a rubric-based evaluation for a more nuanced assessment of LLMs' performance.
DIY AI Resources
a16z AI Companion Starter Kit: Code shared by the venture capital firm a16z to help individuals create a custom chatbot.
AI in the Real World, specifically: At War
AI-Powered Drones in Ukraine: Ukraine's drone technology is rapidly advancing, with AI-powered software enabling improved accuracy and resistance to electronic interference. This progress is driven by the demands of war and has attracted significant investments from prominent figures like former Google CEO Eric Schmidt.
Palantir CEO on AI Weapons: The CEO of Palantir argues for the development of AI weapons by the US despite prevalent ethical concerns.
In other newsletters:
Michael Spencer offers his list of AI newsletters worth following.
Devansh debunks the notion that prompting is a million-dollar skill, emphasizing (and proving) the importance of domain expertise over just prompt writing. Overall, an interesting read about prompt engineering.
Sebastian Raschka dives into the paper “Low-Resource Text Classification: A Parameter-Free Classification Method with Compressors.”
DataMachina shares some thoughts about LLMSec.
If you are interested in Microsoft and Google earnings analysis, check Ben Thompson’s recent newsletter.
A deep dive into ASML: “The current AI boom, for example, is dependent on chips made with ASML’s unique, magical technology.”
Thank you for reading, please feel free to share with your friends and colleagues 🤍
Another week of fascinating innovations! We call this overview “Froth on the Daydream” – or simply, FOD. It’s a reference to the surrealist, experimental novel by Boris Vian – after all, AI is experimental and feels quite surrealistic, and a lot of writing on this topic is just froth on the daydream.
How was today's FOD? Please send us a real message about what you like and dislike about it.