FOD#30: One Year of ChatGPT – The Impact
Reflecting on ChatGPT's milestones in conversational AI, education, healthcare, and ethical AI discussions
In three days, ChatGPT*, developed by OpenAI, will mark the first anniversary of its public launch. Its success has shown us just the tip of the iceberg of ML and deep learning capabilities. GPT models and other Foundation Models (FMs) have since ignited numerous debates about the future of AI development: Will the focus be on synthetic data, or perhaps a novel architecture beyond transformers? Or both, or something else entirely? That remains to be seen.
*ChatGPT is a deep learning model built on transformer networks. Trained on extensive internet data, it generates human-like text and excels at conversational AI, content generation, and data processing.
But today, let’s reflect on a few of the significant achievements and impacts ChatGPT has made in just one year:
Advancing Conversational AI and Revolutionizing Customer Service and Support:
More sophisticated virtual assistants and chatbots, far surpassing the simple, script-based bots of the past, are now a reality. Every tech behemoth is either utilizing the OpenAI API or developing its own models to achieve this level of advanced communication. The world of copilots. Examples: GPT-powered Microsoft Bing and Edge. ChatGPT’s influence also extends to other large language models, both closed and open-source, such as Cohere, Claude, Inflection-1, LLaMA, and Chinchilla.
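For readers curious what “utilizing the OpenAI API” looks like in practice, here is a minimal sketch of a chat-style support copilot built with the OpenAI Python SDK. It is an illustration only: the model name and system prompt are placeholders we chose, not any specific product’s setup.

```python
# Minimal sketch of a chat-style assistant on top of the OpenAI API (SDK v1+).
# Assumes OPENAI_API_KEY is set in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

history = [{"role": "system", "content": "You are a helpful customer-support copilot."}]

def ask(user_message: str) -> str:
    """Send the running conversation to the model and return its reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("My order hasn't arrived yet. What can I do?"))
```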
Enhancing Education and Learning:
ChatGPT assists in learning languages, explaining complex concepts, and even helping with programming and math problem-solving. Examples: Khan Academy is developing an AI tool called Khanmigo for educational purposes. Duolingo developed a new, enhanced learning experience, Duolingo Max. New York City Public Schools is launching an AI Policy Lab to shape the approach of the nation’s biggest school district towards the fast-developing field of AI technology. And so on.
Skyrocketing the GenAI Market:
According to Bloomberg Intelligence, the GenAI market, which includes technologies like ChatGPT, is expected to grow to $1.3 trillion by 2032, from $40 billion in 2022. This massive growth, at a CAGR of 42%, is driven by the rising demand for GenAI products across various sectors. Significant revenue growth is anticipated in areas like AI infrastructure for training LLMs, digital advertising driven by AI, and specialized GenAI assistant software.
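As a quick sanity check on those figures (a back-of-the-envelope sketch, not Bloomberg’s methodology), compounding $40 billion at roughly 42% per year for ten years does land near $1.3 trillion:

```python
# Back-of-the-envelope check of the Bloomberg Intelligence projection:
# $40B in 2022 growing at ~42% CAGR for 10 years should approach $1.3T by 2032.
start_bn, end_bn, years = 40, 1_300, 10

implied_cagr = (end_bn / start_bn) ** (1 / years) - 1
projected_bn = start_bn * 1.42 ** years

print(f"Implied CAGR: {implied_cagr:.1%}")                  # ~41.7%
print(f"$40B at 42% for 10 years: ~${projected_bn:,.0f}B")  # ~$1,335B
```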
Enhancing Medicine and Healthcare:
Harvard Medical School has considered ChatGPT for clinical applications, such as assisting in diagnosing illnesses. It can also help with research, patient monitoring, and medical education. In bioengineering, ChatGPT has also influenced companies such as Ginkgo Bioworks to work on their own LLMs that can ‘speak DNA’.
Influencing Content Creation and Media:
The journalism industry, including The New Yorker, is exploring how ChatGPT and AI tools can be integrated into newsrooms. Speaking from the user’s perspective: ChatGPT is a tool that saves a tremendous amount of time and money – but only if you are professional enough in how you work with information.
Shaping Ethical and Policy Discussions around AI, and Addressing Diversity and Bias Issues:
I consider this to be one of the most profound influences. While some ethicists might disagree with me, I believe that, thanks to ChatGPT, we have started to engage in more meaningful discussions and make progress on previously intractable issues regarding ethics, bias in data, and diversity. Countries are working to negotiate and align their AI policies, which is not an easy task, but there is noticeable progress. What's even more remarkable is that the AI community, which is increasingly active on social networks and gaining media attention, is finally being heard.
Maybe, after all, OpenAI wasn't so much about transparency (though it, of course, should be) as about opening AI to a wider world: making an entirely new industry possible and accessible to non-tech people, while simultaneously fueling an ongoing discussion about powerful generative AI and what changes we should consider and be aware of.
The list above is far from exhaustive, but it illustrates some of the profound influences of the ChatGPT public launch.
Today is the last day to use our Cyber Monday 30% OFF discount. Don’t miss out! A super interesting article on LoRA is coming on Wednesday only for our Premium subscribers →
We recommend – an ML use case
At Flo Health, the maker of the most popular women’s health app in the world, ML is an engineering discipline. As a quickly growing company, its ML team faces significant operational challenges, such as a disjointed approach to ML, with systems spread across the company.
Join Tecton + Flo Health for this webinar on Thursday, December 7, to learn why the Flo Health team implemented a centralized ML platform and see the many benefits they realized, including enabling the team to:
Build and use the same pipelines for training and inference of their ML models;
Leverage built-in materializations for the online store;
Generate point-in-time correct joins for dataset collection from offline storage (illustrated in the sketch after this list);
Easily share features across teams and projects.
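If “point-in-time correct joins” is new to you, the sketch below illustrates the idea with pandas; the toy data and column names are ours, not Flo Health’s or Tecton’s schema. Each training label is joined only with the latest feature values that were already known at the label’s timestamp, which prevents leakage from the future.

```python
# Toy illustration of a point-in-time correct join for building a training set.
# Column names and data are hypothetical, not Flo Health's or Tecton's schema.
import pandas as pd

# Feature values as they were recorded over time, per user.
features = pd.DataFrame({
    "user_id": [1, 1, 2],
    "event_time": pd.to_datetime(["2023-01-01", "2023-01-10", "2023-01-05"]),
    "avg_sessions_7d": [2.0, 3.5, 1.0],
}).sort_values("event_time")

# Training labels with the moment each label was observed.
labels = pd.DataFrame({
    "user_id": [1, 2],
    "event_time": pd.to_datetime(["2023-01-08", "2023-01-20"]),
    "churned": [0, 1],
}).sort_values("event_time")

# merge_asof picks, for each label, the latest feature row at or before the
# label's timestamp -- never a value recorded after it.
training_set = pd.merge_asof(
    labels, features, on="event_time", by="user_id", direction="backward"
)
print(training_set)
```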
News from The Usual Suspects ©
AI godfathers arguing over their risky baby
The heated conversation between Geoffrey Hinton and Yann LeCun, with participation from Andrew Ng, Pedro Domingos, and the all-pervasive Gary Marcus.
Yann LeCun thinks the risk of AI taking over is miniscule. This means he puts a big weight on his own opinion and a miniscule weight on the opinions of many other equally qualified experts.
— Geoffrey Hinton (@geoffreyhinton)
10:27 PM • Nov 24, 2023
Expanded Claude by Anthropic
Anthropic has updated its AI model, Claude 2.1, enhancing its capabilities and reducing its pricing in response to increased competition in the conversational AI market. Claude 2.1 features a 200K token context window, allowing processing of large documents, and has significantly reduced hallucination rates, increasing reliability and accuracy. Additionally, it includes a new tool use feature for integration with external processes and APIs. But according to Greg Kamradt: ‘Starting at ~90K tokens, performance of recall at the bottom of the document started to get increasingly worse’.
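Kamradt’s observation comes from a “needle in a haystack” style test: bury one unusual fact at different depths of increasingly long contexts and check whether the model can retrieve it. Here is a hedged sketch of that idea; `query_model` is a hypothetical stand-in for whichever API client you use, and the lengths and depths are illustrative, not Kamradt’s exact setup.

```python
# Sketch of a "needle in a haystack" long-context recall test.
# query_model() is a hypothetical stand-in for a real API call; the context
# sizes and depths are illustrative, not Greg Kamradt's exact methodology.
NEEDLE = "The best thing to do in San Francisco is eat a sandwich in Dolores Park."
QUESTION = "What is the best thing to do in San Francisco?"
FILLER = "The quick brown fox jumps over the lazy dog. "

def build_haystack(total_chars: int, depth: float) -> str:
    """Bury the needle at a relative depth (0.0 = top of the document, 1.0 = bottom)."""
    filler = (FILLER * (total_chars // len(FILLER) + 1))[:total_chars]
    cut = int(total_chars * depth)
    return filler[:cut] + NEEDLE + filler[cut:]

def query_model(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to your model's API.")

for total_chars in (50_000, 200_000, 400_000):  # rough character proxies for token counts
    for depth in (0.0, 0.5, 0.9):
        answer = query_model(build_haystack(total_chars, depth) + "\n\n" + QUESTION)
        print(f"chars={total_chars:>7} depth={depth:.1f} recalled={'Dolores Park' in answer}")
```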
OpenAI and their Q-star (Q*)
If you remember, last week was stirred up by the scandal around Sam Altman's swift exit and return as OpenAI's CEO following the board's coup against him. One of the rumors is that researchers warned the board about a powerful AI that could potentially threaten humanity. This unreported letter and the AI algorithm it described, named Q* (Q-star), reportedly contributed to the decision. Q* showed promise in solving mathematical problems, sparking optimism about its capabilities on the path toward artificial general intelligence (AGI).
I won't be discussing Q* and its capabilities because there is no actual information about it. Instead, I'll share just one interesting read, one good talk, and one unexpected aftermath effect.
Nathan Lambert offers “The Q* hypothesis: Tree-of-thoughts reasoning, process reward models, and supercharging synthetic data”. Worth reading as an intellectual exercise.
Jürgen Schmidhuber reminds us of his TED talk from 11 years ago, in which he said: “Don’t think of us versus them. Think of yourself, and humanity in general, as a small stepping stone, not the last one, on the path of the universe towards more and more unfathomable complexity. As for the near future, our old motto still applies: ‘Our AI is making human lives longer & healthier & easier.’”
The unexpected effect: Semafor cites Jaan Tallinn, a famous sponsor of Effective Altruism (EA) projects: “The OpenAI governance crisis highlights the fragility of voluntary EA-motivated governance schemes,” said Tallinn, who has poured millions into effective altruism-linked nonprofits and AI startups. “So the world should not rely on such governance working as intended.” He is bailing out of EA!
Amazon is ‘AI-ready’
The demand for AI talent is enormous. Amazon initiated "AI Ready", aiming to offer free AI skills training to 2 million people by 2025, addressing the growing demand for AI talent. It includes eight new, free AI and generative AI courses, and collaborations with Udacity and Code.org to provide scholarships and tech education, especially to underrepresented students. The initiative is part of Amazon's broader effort to enhance workforce skills in AI and cloud computing.
Inflection
Inflection released version 2.0 of its model, with “much improved factual knowledge, better stylistic control, and dramatically improved reasoning.” The company also claims that this model is now the best-performing large language model after GPT-4.
Twitter Library
Other news, categorized for your convenience
Advances in Language and Generative Models
UltraFastBERT (ETH Zurich): Revolutionizes BERT models by reducing neuron usage, achieving massive speedup in language model inference →read more
Orca 2 (Microsoft Research): Trains smaller language models for enhanced reasoning, challenging the capabilities of larger counterparts →read more
MultiLoRA (Ant Group): Improves LLMs for multi-task learning, optimizing performance with minimal additional parameters (the underlying low-rank adapter idea is sketched after this list) →read more
System 2 Attention – Is Something You Might Need Too (Meta): Aims to improve LLMs' response quality by refining attention mechanisms →read more
Adapters: An open-source library for efficient, modular transfer learning in language models →read more
White-Box Transformers via Sparse Rate Reduction: Offers a new approach to learning data distributions, focusing on compression in deep network architectures →read more
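Since low-rank adapters come up in the MultiLoRA item above (and in Wednesday’s LoRA article), here is a minimal sketch of the core idea such methods build on: freeze the pretrained weight and train only a small low-rank update on top of it. This is a generic PyTorch illustration under our own assumptions, not any paper’s exact implementation.

```python
# Minimal PyTorch sketch of a LoRA-style low-rank adapter: the pretrained
# layer stays frozen and only the small A/B projections are trained.
# Generic illustration, not MultiLoRA's (or any paper's) exact code.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # freeze pretrained weights
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)          # adapter starts as a no-op
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))

# Wrap a "pretrained" layer and check that only the adapter is trainable.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")   # a small fraction of the total
```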
Benchmarking AI and Novel AI Applications
GAIA Benchmark: Establishes a real-world task-based benchmark for General AI Assistants, testing fundamental abilities such as reasoning, multi-modality handling, web browsing, and general tool-use proficiency →read more
Direct Preference for Denoising Diffusion Policy Optimization (D3PO): Enhances diffusion models using direct human feedback for fine-tuning →read more
We are watching
Highly acclaimed non-technical explanation of what LLMs are by Andrej Karpathy:
Thank you for reading, please feel free to share with your friends and colleagues. In the next couple of weeks, we are announcing our referral program 🤍
Another week with fascinating innovations! We call this overview “Froth on the Daydream” – or simply, FOD. It’s a reference to the surrealistic and experimental novel by Boris Vian – after all, AI is experimental and feels quite surrealistic, and a lot of writing on this topic is just a froth on the daydream.