FOD#1: Google leak, Warren Buffett, Geoffrey Hinton, Joe Biden, and AI researchers as late-stage teenagers

Get through 100,500 AI newsletters all at once

Recently, the number of AI newsletters has grown exponentially. Each observes a different angle of the polygon named AI, but following all of them is quite a task. We do it anyway, so here you go: a summary of over 100 daily, weekly, and monthly newsletters (NLs). Just for you. We call it “Froth on the Daydream” – a reference to the surrealistic, experimental novel by Boris Vian. After all, AI is experimental and feels quite surrealistic, and a lot of writing on this topic is just froth on the daydream.

Today, we will categorize the news and conversations by main personas. Starting with:

ANONYMOUS

Almost everyone is buzzing about a supposedly leaked document from a Google employee (published by SemiAnalysis). In a nutshell, the realization has dawned on the tech behemoths: open-source "ate their lunch," largely because OpenAI and Google were too busy competing with each other. While they were doing that, the open-source community solved many of the problems both companies are still struggling with, at a fraction of the cost and time. Since that community gained access to its first foundation model, innovation has accelerated:

  1. Open-source is much cheaper (a $100 model with 13B parameters vs. Google's $10,000,000 high-end model with 540B parameters).

  2. The open-source community sidesteps scaling problems through optimization – for example, cheap low-rank fine-tuning of existing models instead of ever-larger training runs (see the sketch after this list).

  3. With the whole world contributing to improving and deploying models, the open-source market's dynamics yield a much higher iteration speed.
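
To make the "optimization" point in item 2 concrete, here is a minimal, hypothetical sketch of low-rank adaptation (LoRA) – the kind of technique the leaked memo credits – using the Hugging Face peft library. The small GPT-2 checkpoint is just a stand-in for a real foundation model; none of this comes from the newsletters themselves.

```python
# Hypothetical sketch: wrap a small causal LM with LoRA adapters so that only
# a tiny fraction of its parameters needs training. GPT-2 is a stand-in model.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the adapters
    target_modules=["c_attn"],  # attention projection layers in GPT-2
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights is trainable
```

The point of the memo, and of this sketch, is that the trainable parameter count collapses from the full model down to a tiny adapter, which is what lets hobbyists iterate quickly on consumer hardware.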

In an “I told you so!” outcry, Tech Made Simple reminded us that he had already warned about “Crypto and LLMs being a bubble, AI Emergence being BS, and that making ML Models bigger wasn’t useful”. Uff, it’s hard to be so far ahead of everyone.

More Entropy argues that "OpenAI and other AGI or Bust companies will release the world's largest models, capabilities will step forward, and the world will gasp as AI accomplishes previously impossible feats. There's no chance open-source will move in lock-step, let alone surpass these private labs when it comes to training the largest models." The anonymous Google employee urges nonetheless "to prioritize enabling 3P integrations and consider where our value add really is. In the end, competing with open-source is a losing proposition. We need them more than they need us."

He or she asks the right question: "Who would pay for a Google product with usage restrictions if there is a free, high-quality alternative without them?" – and points out that Meta is one of the main beneficiaries of the LLaMA leak, because it got an "entire planet's worth of labor."

An entire planet's worth of labor also raises the question of safety. Exponential View points out that with public access, it becomes easier for bad actors to exploit open-source models and develop unsafe or malicious ones. The rescue comes from collaboration: "Just a couple of days ago we learned that major LLM providers, closed-source and open-source, are going to put their models out for testing by a group of hackers at the major hacker conference Defcon, all in partnership with the White House." Amen.

Ben’s Bites notes that "If we're calling arms to drive open-source forward, then Hugging Face should be our Queen - they are simply prolific enablers of the community!" This might very well be true: just this week, they introduced StarCoder, a 15B LLM for code with an 8K-token context window, trained only on permissively licensed data covering 80+ programming languages.
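
For readers who want to try StarCoder themselves, here is a minimal sketch (not from any of the newsletters) of loading it through the Hugging Face transformers library. It assumes you have accepted the model license on the Hub and are logged in; in practice a 15B model also needs a large GPU, half precision, or quantization.

```python
# Minimal sketch: generate a code completion with StarCoder via transformers.
# The "bigcode/starcoder" checkpoint is gated, so your account must have access;
# with 15B parameters, you may want a smaller model for a quick local test.
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "bigcode/starcoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```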

Following that launch, TheSequence believes that, though it doesn't get as much attention as other LLM implementations, "the AI coding revolution is already happening." The Prompt also notes that the "Code interpreter by OpenAI is eating data science."

What surprises me most in these discussions is the rapid democratization of LLM technology. ChatGPT launched only in November 2022 (five months ago!) – and now we have this globally distributed horde of developers, engineers, and scientists who are hungry for innovation, equipped to build, and open to collaboration. It gives me hope that we will avoid the danger of consolidating AI power in the hands of a single alpha male and instead get the system of checks and balances the open-source community is known for – much better for humanity. In any case, since OpenAI released ChatGPT despite its initial principles, we are now in the wild wild west of AI, and the new gold rush has already begun.

WARREN BUFFETT

Speaking of gold… Despite panicky headlines claiming that the 92-year-old Oracle of Omaha compared AI to the atomic bomb, the truth is that he merely said both are things we cannot uninvent. During the Berkshire Hathaway annual meeting, his remarks about AI carried more uncertainty than any particular fear.

GEOFFREY HINTON

Speaking of fear and uncertainty in AI, another significant event shook the AI community: Geoffrey Hinton, a widely respected figure in the field – someone some consider the father, even the Einstein, of AI – quit Google because of his fear of AI. However, Hinton's fear is based not on specific knowledge of upcoming risks, but on a general sense of uncertainty that has left him unsure of what to expect. The Algorithmic Bridge makes a good point that “modern AI systems are based on heuristic techniques that lack a solid theoretical basis or sufficient empirical evidence result of valid and reliable—and replicable—testing.” In other words, there are so many unknowns that we can’t assess AI’s existential risk – or its safety, for that matter – with any certainty.

Gary Marcus, who has been a mouthpiece for everything catastrophic about AI, posted a rather modest comment that he recently sent both to The Daily Mail and to Geoffrey Hinton: “The core issue, as he has noted, is whether we can guarantee that we can control future systems; so far we can't. One implication is that we need to develop much better regulatory mechanisms”. Geoffrey Hinton replied that he agrees. Which brings us to…

JOE BIDEN

Who just dropped by an AI meeting at the White House this week. Washington doesn't want to be left behind on cool events!

Coincidentally, the participants happened to be the main leaders of AI: Satya Nadella (Microsoft), Sam Altman (OpenAI), Sundar Pichai (Google), Dario Amodei (Anthropic), Demis Hassabis (DeepMind), and Jack Clark (Anthropic). The topic was the ethical, moral, and legal responsibility the private sector bears to ensure the safety and security of its products. I have a feeling there was a lot of nodding during this meeting – nodding is usually part of the formal dress code for such gatherings. Nonetheless, the White House, federal regulators, and Congress have recently been adjusting their optics to focus more on AI. We don’t know yet whether it makes Geoffrey Hinton feel any better.

ROB REICH

None of the AI newsletters mentioned this, but here is an interesting quote from Rob Reich, a political scientist and associate director of Stanford’s Institute for Human-Centered Artificial Intelligence, from his conversation with Esquire: “AI researchers are more like late-stage teenagers,” he says. “They’re newly aware of their power in the world, but their frontal cortex is massively underdeveloped, and their [sense of] social responsibility as a consequence is not so great.” AI researchers are not teenagers, of course, but the power of AI flashing before their eyes can be numbing and can push risk aversion toward zero.

Our hopes will stay with the open-source community and thoughtful regulation (nod if you are in a formal suit).

On this note, let’s see what is happening with AI regulation in Europe (a summary of The EU AI Act NL):

The European Parliament has reached a provisional political deal on the proposed EU AI Act, which now goes to a key committee vote on May 11 and a plenary vote in mid-June. The main points:

  • The Act bans "purposeful" manipulation, and AI used to manage critical infrastructure, as well as recommender systems on very large online platforms, is deemed high-risk.

  • Extra safeguards cover the processing of sensitive data to detect negative biases, and high-risk AI systems must keep records of their environmental footprint and comply with European environmental standards.

  • MEPs have also updated the rules to include generative AI and to require companies to disclose any copyrighted material used to train their models.

  • A group of MEPs has called for a summit on the rapid development of advanced AI systems, agreeing with the core message of the Future of Life Institute's recent open letter.

  • However, the proposed Act remains vague on how to implement the essential requirements for high-risk AI systems, and concerns have been raised about the lack of representation from human rights experts or civil society organizations in the bodies responsible for defining those requirements.

  • Policy recommendations include aligning the EU and US approaches to AI risk management to facilitate bilateral trade, and developing an AI assurance ecosystem.

More talking heads:

Other topics discussed last week in newsletters:

  • Tanay’s NL: The fear of job loss can be traced back to the Luddites (so it’s not new).

  • The Rundown NL covers an AI method called CEBRA that translates brain signals into video: “The potential future applications of this research are literally mind-boggling and could open up new frontiers in our understanding of the brain using AI technology.”

  • Why Try AI NL compares Midjourney V5 and V5.1 and provides information on which version is better for what purpose.

As we said, this weekly overview is experimental, so let us know what you think, and we will keep reading tons of opinionated newsletters for you.
