FOD#18: AI Insights Summit’s Insight and a Personal Insight
The last week of summer is a good time to think about imbalances in the AI industry
Some of the articles might be behind a paywall. If you are a paid subscriber, let us know and we will send you a PDF.
AI Insights Summit’s Insight and a Personal Insight
The AI Insights Summit, initiated by Senate Majority Leader Charles Schumer, will be held on September 13, but conversations about it have been circulating non-stop for at least a week.
IIUC, the people on this list who are well-positioned to speak to real-world effects of AI on those likely to be harmed include: @ruchowdh, @rajiinio, @mayawiley, @LizShuler, and Meredith Stiehm. Hope you all get together to strategize. 💚
— MMitchell (@mmitchell_ai)
7:14 PM • Aug 31, 2023
The initial list of invitees was surprisingly male-centric, industry-centric, and lacking in broader representation. Since then, the list has been updated regularly in an attempt to introduce more diversity.
It's hard. The industry is indeed heavily male-dominated, and you have to make an extraordinary effort to be noticed and heard. It's a long conversation that ultimately comes back to how we raise our kids, what we tell them, and what we encourage them to do.
We have four boys in our family, and now I'm expecting a girl. Yesterday, I had a very interesting conversation with 7- and 9-year-old girls. They congratulated me on the upcoming baby:
'Oh finally, you will have a girl! It’s much better to have a girl.'
'Why?'
'Girls are easier, and they avoid risks.'
I was impressed by this conversation. But mostly, I was struck by my own reaction: I don't want girls to avoid risks. I want them to dive headfirst into the uncertain and unpredictable world and make the best of it. It might be scary, but we have enough risk-aversion mechanisms naturally built in to succeed. We don't need to impose more; we need to overcome this stumbling block.
Anyway, back to the AI Insights Summit list. It now features labor and civil rights advocates like AFL-CIO President Liz Shuler and AI accountability researcher Deb Raji. The forum aims to lay the groundwork for bipartisan AI legislation.
Additional info: Azeem Azhar published his commentary on AI governance. He covers such topics as the evolving landscape of technology governance, the complex case of excluding China, and how to approach building a resilient process for AI governance →read more
AI Supremacy also posted a guest post about AI governance with a brief survey of mechanisms to minimize risk and 🎯 maximize upside →read more
Btw, Turing Post is founded and run entirely by women. I'm very proud of that.
Speaking of dominance…
X / Elon Musk
As was obvious from the very beginning, Musk's xAI will be sourcing data from ex-Twitter (now X): the social platform has updated its privacy policy to allow the collection of biometric and professional data for AI training. With Musk launching xAI and critiquing LinkedIn, speculation is mounting over whether this move aims to create a LinkedIn rival or to monetize dwindling ad revenue. Musk claims it's for better content recommendations, but ambiguity prevails.
I can't believe my feed isn't flooded by ppl OUTRAGED by Musk changing the #privacy policy to collect #biometric & other sensitive data for "safety, security" purposes! WTAF?! Are ppl tweeting about it & the posts are being suppressed?! Seriously, WTAF?!
— Kathy Baxter (@baxterkb)
3:10 AM • Sep 1, 2023
In other news from Musk's empire: Tesla is activating a 10,000-unit NVIDIA H100 GPU cluster to accelerate its Full Self-Driving (FSD) training. Launched in late 2022, the H100 offers a large performance jump over its A100 predecessor, with NVIDIA claiming up to 30x faster inference on the largest models and several times faster AI training. Amid a GPU supply crunch at NVIDIA, Tesla is also ramping up its $1 billion+ Dojo supercomputer, custom-designed for high-performance computing. Elon Musk has suggested that with adequate GPU supply, Dojo might not even be necessary. Tesla's combined compute capabilities now set it far ahead of other automakers.
Tesla is much more than just a car company. X is much more than a social network. SpaceX is much more than just a spacecraft manufacturer. xAI is much more than an AI lab. Altogether, Elon Musk is much more than just a billionaire; he has immense, unprecedented resources at his disposal.
His portfolio is not just diverse—it's almost Promethean in its scope and potential impact. But the cautionary tale here is that Musk, despite his genius, is still human—with all the fallibility that entails.
Hopefully, X won't turn ominous enough to cross out humanity.
On to the good news
Andreessen Horowitz (a16z) has launched the a16z Open Source AI Grant program to financially support open-source AI developers. Unlike traditional investments or SAFE notes, the grants aim to alleviate financial pressures on developers, enabling them to focus on their projects. The firm has also announced the first cohort of grant recipients and their funded projects:
Jon Durbin (Airoboros): instruction-tuning LLMs on synthetic data
Eric Hartford: fine-tuning uncensored LLMs
Jeremy Howard (fast.ai): fine-tuning foundation models for vertical applications
Tom Jobbins (TheBloke): quantizing LLMs to run locally
Woosuk Kwon and Zhuohan Li (vLLM): library for high-throughput LLM inference (see the sketch after this list)
Nous Research: new fine-tuned language models akin to the Nous Hermes and Puffin series
oobabooga: web UI and platform for local LLMs
Teknium: synthetic data pipelines for LLM training
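Since vLLM made the list: here is a minimal sketch of its offline inference API as we understand it at the time of writing. The model name is only an example, and the exact API may have evolved since, so treat this as a starting point rather than a reference.

```python
# Minimal sketch of offline batched inference with vLLM.
# Assumes `pip install vllm` and a GPU; the model name is only an example.
from vllm import LLM, SamplingParams

prompts = [
    "Explain quantization of LLMs in one sentence.",
    "What does high-throughput inference mean?",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# vLLM batches and schedules these prompts with PagedAttention under the hood.
llm = LLM(model="meta-llama/Llama-2-7b-hf")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```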
Our friends at the MLOps Community are conducting a survey on 'LLMs in Production - Evaluation.' All findings from this survey will be freely shared with everyone. Please participate!
The Allen Institute for AI has introduced Satlas, an AI-based tool that uses satellite imagery to map renewable energy projects. Using the European Space Agency's Sentinel-2 imagery and deep-learning-based super-resolution, Satlas aims to enhance the monitoring of climate change from space. While the tool has limitations, such as occasionally generating distorted features, its goal is to provide unprecedented high-resolution data on renewable energy and forestry worldwide.
News from The Usual Suspects
OpenAI
OpenAI introduced ChatGPT Enterprise. The Financial Times' take is that OpenAI's enterprise-grade ChatGPT addresses corporate IT concerns but lacks domain-specific expertise. Despite employees at 80% of large U.S. companies already using ChatGPT unofficially, the absence of business-grade features and domain knowledge raises questions about generative AI's role in specialized business tasks. There is a tension between generic, large-scale models and narrower, domain-specific ones, and companies may prefer "on-premise" models for governance and bias reasons. OpenAI's pivot to enterprise solutions poses challenges given its current focus.
Additional info: IBM and Salesforce announced a collaboration to help businesses worldwide, across industries, accelerate their adoption of AI for CRM.
OpenAI also released a guide for teachers using ChatGPT in their classrooms.
Introducing the Twitter Library: Each week, we post the best resources from our Twitter feed on our website. These resources have gained popularity, so bookmark this page to have quick access to our bite-sized information, perfect for fast learners! This week, we have...
Google Cloud Next 2023 unveiled a slate of advancements aimed at supercharging AI capabilities and data analytics:
Key hardware releases include the new Cloud TPU v5e, optimized for generative AI, and A3 virtual machines featuring NVIDIA H100 GPUs.
On the software front, Vertex AI has been expanded with improvements in token capacity, language support, and new tools for chatbots and code generation.
BigQuery and AlloyDB now integrate generative AI and connect with Vertex AI for unified analytics.
Google Kubernetes Engine adds an Enterprise Edition supporting cloud GPUs and TPUs.
RO-ViT by Google: This method advances open-vocabulary object detection by pre-training vision transformers in a region-aware manner. It introduces "cropped positional embeddings" tailored to detection tasks and employs focal loss to put more weight on harder examples during contrastive learning. RO-ViT shows superior performance on both region-level and image-level tasks →read more
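For readers wondering what "cropped positional embeddings" look like in practice, here is a rough conceptual sketch (our own illustration of the idea, not the authors' code): during pre-training, a random region of the whole-image positional-embedding grid is cropped and resized back to the full grid, so that the embeddings better match the region crops the detector sees later.

```python
# Conceptual sketch of cropped positional embeddings (CPE), as we understand
# the idea from the RO-ViT description; not the authors' implementation.
import torch
import torch.nn.functional as F


def cropped_positional_embedding(pos_embed: torch.Tensor) -> torch.Tensor:
    """pos_embed: (H, W, D) grid of positional embeddings."""
    H, W, D = pos_embed.shape
    # Sample a random crop of the positional-embedding grid.
    crop_h = torch.randint(H // 2, H + 1, (1,)).item()
    crop_w = torch.randint(W // 2, W + 1, (1,)).item()
    top = torch.randint(0, H - crop_h + 1, (1,)).item()
    left = torch.randint(0, W - crop_w + 1, (1,)).item()
    crop = pos_embed[top:top + crop_h, left:left + crop_w, :]

    # Resize the cropped region back to the full grid via bilinear interpolation.
    crop = crop.permute(2, 0, 1).unsqueeze(0)            # (1, D, crop_h, crop_w)
    resized = F.interpolate(crop, size=(H, W), mode="bilinear", align_corners=False)
    return resized.squeeze(0).permute(1, 2, 0)           # back to (H, W, D)


# Example: a 14x14 grid of 768-dim embeddings, as in a ViT-B/16 on 224px images.
pe = torch.randn(14, 14, 768)
print(cropped_positional_embedding(pe).shape)  # torch.Size([14, 14, 768])
```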
Meta
Meta introduces FACET, a unified benchmark for assessing fairness in computer vision. It comprises a dataset of 32k images with exhaustive, expert-reviewed annotations across multiple demographic attributes. FACET is available publicly →read more
Alongside FACET, Meta releases its computer vision model DINOv2 under the Apache 2.0 license. This offers greater flexibility for researchers and developers for downstream tasks like semantic image segmentation and monocular depth estimation →read more
Amazon
Amazon offers a new service: Amazon Bedrock, a centralized platform providing access to a variety of foundation models, such as Stability AI's Stable Diffusion, AI21 Labs' multilingual LLMs, and Anthropic's Claude series →read more
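For a feel of what calling Bedrock looks like, here is a minimal sketch using boto3's bedrock-runtime client. The region, model ID, and payload fields follow the Claude-on-Bedrock conventions as we understand them; verify the details against the AWS documentation before relying on them.

```python
# Minimal sketch of invoking a foundation model via Amazon Bedrock with boto3.
# Assumes AWS credentials with Bedrock access; payload fields follow the
# Claude text-completion format as we understand it (verify against AWS docs).
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize what Amazon Bedrock is.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",   # example model ID
    contentType="application/json",
    accept="application/json",
    body=body,
)

print(json.loads(response["body"].read())["completion"])
```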
Inflection
In our Corporate Chronicle about Inflection AI, we've noted, 'Operating with a low profile, Inflection hasn't been very transparent with independent media.' And now, Mustafa Suleyman is giving interviews left and right. Hopefully, it's because he is reading Turing Post :)
We also reached out to their press office with a request for an interview; let's see how that plays out. Suleyman's interviews from the last week:
Other news, categorized for your convenience:
Generative Agents
For a while now, the AI world has been abuzz with research on generative agents. For aficionados, two articles from last week serve as seminal reads: one offering an architectural framework for generative agents and another diving into their applications by Google and Stanford researchers. Additional info: to cap it off, don't miss the groundbreaking paper published in early August, "Generative Agents: Interactive Simulacra of Human Behavior".
Communication and Behavior Modeling
Large Content and Behavior Models (LCBMs): This paper takes Shannon's information theory and extends it to the effectiveness level of communication (how well a message influences the receiver's behavior), a realm where current technology lags. The models are trained with "behavior tokens" and aim to predict receiver behavior more accurately →read here
Reinforcement Learning and Human Feedback
RLAIF vs. RLHF: The paper compares Reinforcement Learning from AI Feedback (RLAIF) with Reinforcement Learning from Human Feedback (RLHF). Both methods aim for better alignment of language models with human preferences, particularly in summarization. The results suggest that RLAIF offers a scalable alternative to RLHF →read here
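To make the distinction concrete, here is a toy sketch (our own illustration, not the paper's code) of the one place where the two pipelines differ: where the preference labels come from. The labeler functions below are hypothetical stand-ins; the downstream reward-model and RL training work the same way on either dataset.

```python
# Toy sketch of the RLHF vs. RLAIF difference: only the source of preference
# labels changes; the downstream reward-model / RL pipeline stays the same.
# `human_labeler` and `ai_labeler` are hypothetical stand-ins for illustration.
from typing import Callable, List, Tuple

Pair = Tuple[str, str]  # (summary_a, summary_b) for the same document


def human_labeler(pair: Pair) -> int:
    # In RLHF, a human annotator picks the preferred summary (0 or 1).
    return 0  # placeholder


def ai_labeler(pair: Pair) -> int:
    # In RLAIF, an off-the-shelf LLM is prompted to pick the preferred summary.
    # Here we fake it with a trivial heuristic: prefer the shorter summary.
    return 0 if len(pair[0]) <= len(pair[1]) else 1


def collect_preferences(pairs: List[Pair], labeler: Callable[[Pair], int]):
    """Produce (chosen, rejected) tuples used to train a reward model."""
    dataset = []
    for a, b in pairs:
        choice = labeler((a, b))
        chosen, rejected = (a, b) if choice == 0 else (b, a)
        dataset.append((chosen, rejected))
    return dataset


pairs = [("Short, faithful summary.", "A much longer and more rambling summary...")]
print(collect_preferences(pairs, ai_labeler))     # RLAIF-style labels
print(collect_preferences(pairs, human_labeler))  # RLHF-style labels
```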
National AI Initiatives and Language Models
Baidu's Ernie Bot: After receiving approval from the Chinese government, Baidu officially released Ernie Bot, a rival to ChatGPT. This move not only led to a surge in Baidu's stock prices but also marked the country's official endorsement of an AI model →read more
Jais and Jais-Chat: Developed through a collaboration between Inception, MBZUAI, and Cerebras, these models focus on the Arabic language but are competitive in English as well. With 13 billion parameters, they outperform existing Arabic models. The initiative is a partnership involving G42 and the UAE government, underscoring a national strategy to promote Arabic-centric AI research →read more
In other newsletters:
Not AI, but useful if you are building a data team → https://seattledataguy.substack.com/p/centralized-vs-decentralized-vs-federated
Gary Marcus's obituary for Douglas Lenat, founder and CEO of Cycorp and an AI researcher who worked on (symbolic, not statistical) machine learning, knowledge representation, "cognitive economy", blackboard systems, and what he dubbed in 1984 "ontological engineering": https://garymarcus.substack.com/p/doug-lenat-1950-2023
Runway CEO: AI could usher in new ‘golden era’ of cinema | Semafor
Nathan Lambert’s take on Cruise’s collision: https://www.interconnects.ai/p/cruise-collisions-self-driving
Thank you for reading, please feel free to share with your friends and colleagues 🤍
Another week with fascinating innovations! We call this overview "Froth on the Daydream", or simply FOD. It's a reference to the surrealistic, experimental novel by Boris Vian; after all, AI is experimental and feels quite surrealistic, and a lot of writing on the topic is just froth on the daydream.
How was today's FOD? Please give us some constructive feedback