FOD#24: What’s Overlooked In The Recent AI Reports – Big Questions
plus a podcast about creativity in AI and a curated list of the most relevant developments in the AI world
An October Cornucopia of AI Prognostications
For some reason, October is ripe with AI reports. In the same week, the AI community received the Kaggle AI Report and the State of AI Report from Nathan Benaich. No joke, the world is indeed in turmoil. Halloween is upon us. Zombies haven't started munching on brains – yet – but self-aware AIs could well decide to channel their inner utilitarian philosopher and maximize global happiness. That scenario might be as concerning as listening to Sam Altman's musings on AGI. The point is that not only does AI need benchmarks; we also deserve support. So perhaps the reasoning was: why wait for year-end retrospectives when we're all still here and, tentatively at least, still listening?
Missing the Forest for the Trees
While the annual reports from State of AI and Kaggle offer a kaleidoscope of predictions for the next year, they may not capture the seismic shifts truly dictating the future of the field. The State of AI Report posits bold expectations for the coming year: a generative AI media company will face scrutiny for election meddling, tech IPOs will thaw, and AI companies will face antitrust scrutiny. Kaggle, on the other hand, zooms in on tech specifics, covering ethical considerations in generative AI and the complexities of computer vision and ML for tabular data. What both reports might overlook, however, is the big picture. There are deeper currents, not always obvious, that merit our collective contemplation: the herculean undertaking in neuroscience to map human brain cell types, for example, or the leap in energy-efficient machine learning models.
Nature Speaks to Us
One can argue that we're overlooking developments that could fundamentally change our understanding of intelligence, human or artificial. Take, for example, a monumental feat in neuroscience: the creation of a comprehensive atlas showcasing over 3,000 human brain cell types. The atlas, detailed in a Nature article, promises to decode the brain's complexity, opening doors for AI models trained not just on computational data but on a richer biological context. This work, spanning 21 papers, will “aid the study of diseases, cognition, and what makes us human, among other things.”
Similarly, engineers at Northwestern University have developed nanoelectronic devices that make machine learning 100-fold more energy efficient. Detailed in a Nature Electronics paper, these devices can perform AI tasks in real time without relying on the cloud, thereby improving data privacy. Not only do they contribute to sustainable AI, but they also offer real-world applications far removed from the speculative games we often play in predicting the future of AI.
Looming Questions
Before the end of the year, these are the questions that we keep contemplating:
AI is both a culprit in and a solution to the climate crisis. Is it a yin-yang scenario, where AI contributes to massive energy consumption but also holds the promise of optimizing energy grids, predictive maintenance for renewables, and more?
With multimodality becoming a trend, how can we build ML models that not only ingest multimodal data but also interpret the data in a context-sensitive manner to produce more nuanced outputs?
We are also about to figure out Reinforcement Learning from Human Feedback (RLHF) for open source, with figures such as Yann LeCun suggesting that “Human feedback for open source LLMs needs to be crowd-sourced, Wikipedia style. It is the only way for LLMs to become the repository of all human knowledge and cultures.” How will access to such a repository of all human knowledge, combined with neuroscience discoveries, enhance us?
In a society where opinions are often shaped by headlines, how is the layperson's perception of AI evolving? Are we looking at a future where "AI literacy" becomes as fundamental as reading and math, especially given the influence of AI in decision-making processes from healthcare to finance?
How could advancements in AI radically alter our social fabric?
What other historical research and ideas (like those mentioned here) have been overlooked or forgotten?
As we ponder these questions, maybe we’ll find the answers we didn’t even know we were seeking. Please email us with your thoughts and questions.
Or… we can just use Mistral’s Trismegistus: a new model designed for a niche audience interested in esoteric and spiritual topics 🙂
News from The Usual Suspects ©
Andreessen vs Marcus: The Tug of War in Techno-Optimism
Marc Andreessen paints a vivid, unflinching techno-optimist future, advocating an unbridled embrace of technology and markets as the harbinger of prosperity. Gary Marcus counters with a call for a more nuanced approach, scrutinizing Andreessen’s 11,000-word essay for not substantiating its claims with data and for ignoring the proverbial elephants in the room – climate change and misinformation among them. While both agree that technology is pivotal for the future, they diverge on how blind or calculated that optimism should be →get popcorn
OpenAI: The Evolving Ethos
Meanwhile, as the ether is occupied with word fights, OpenAI has quietly shuffled its core values, setting its sights squarely on AGI. Gone are values like 'Audacious' and 'Unpretentious,' replaced by 'AGI focus' leading the charge and 'Intense and scrappy.' The modification in OpenAI’s online charter offers a transparent lens on how its mission is shifting gears. General intelligence is a term that's bandied about often, but could it be more myth than reality? The so-called race to Artificial General Intelligence (AGI) feels like tilting at windmills, diverting resources from more immediate, tangible problems.
(Side note: instead of our usual Midjourney, we used DALL-E 3 to create the cover for this newsletter. Being able to converse with an image generator and refine the picture in real time is incredible!)
Google's AI-Powered Artist and Wordsmith
Google isn't just satisfied with answering queries; it wants to inspire our imagination. The Search Generative Experience (SGE) now comes with capabilities to generate images based on user prompts and offers a draft function that enables users to alter the length and tone of the content it produces. It's not only assisting you in finding information but also in shaping it.
The Steering Wheel for Language Models
NVIDIA’s SteerLM is billed as a democratizing force in the world of LLMs. With real-time "knobs" for tweaking model behavior, it transitions from a one-size-fits-all model to a bespoke tool. Moreover, a customized 13B Llama 2 model is already up for grabs for real-world testing.
Adobe Fires Up the AI Ring
Adobe isn’t one to stand by. Adobe MAX saw a flurry of announcements, including three new Adobe Firefly models for images, vectors, and design. The magnitude of AI features and tools was staggering, effectively signaling Adobe's eagerness to become a heavyweight contender in the AI arena.
Turing Post as a Guest
In this podcast, I argue that expanding AI literacy will widen people's creativity, similar to how increased literacy rates during the Industrial Revolution accelerated progress. Instead of limiting ourselves by anthropomorphizing AI or fearing it will replace us, we should use AI as a tool to enhance our abilities and evolve as humans →subscribe to Creativity Squared to listen to more podcasts about creativity and AI.
Twitter Library
Tech news, categorized for your convenience:
Task-Specific Enhancements for LLMs
Meta-CoT: Generalizable CoT Prompting in Mixed-task Scenarios: Presents a generalized prompting method for mixed-task scenarios, specifically improving generalization in multiple reasoning tasks →read more
LEMUR: Harmonizing Natural Language and Code: Introduces open-source models proficient in both text and code, aiming to enhance the capabilities of language agents →read more
Performance and Efficiency
HyperAttention: Long-context Attention in Near-Linear Time: Introduces a new attention mechanism that handles long contexts in near-linear time, significantly improving performance →read more
Flash-Decoding for Long-context Inference: Introduces a method to make the attention mechanism more efficient during inference, especially for longer sequences →read more
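The core trick – splitting the KV cache into chunks, attending to each chunk in parallel, and then merging the partial results with a log-sum-exp correction – can be sketched in a few lines of NumPy. This is a toy single-query illustration of the split-and-merge idea, not the actual CUDA implementation:

```python
import numpy as np

def attention(q, K, V):
    """Reference single-query softmax attention."""
    s = K @ q
    w = np.exp(s - s.max())
    return (w[:, None] * V).sum(0) / w.sum()

def flash_decode(q, K, V, n_chunks=4):
    """Attend to KV-cache chunks independently, then merge the
    partial outputs with a log-sum-exp (running-max) correction."""
    outs, maxes, sums = [], [], []
    for Kc, Vc in zip(np.array_split(K, n_chunks), np.array_split(V, n_chunks)):
        s = Kc @ q
        m = s.max()                      # per-chunk max for numerical stability
        w = np.exp(s - m)
        outs.append((w[:, None] * Vc).sum(0))
        maxes.append(m)
        sums.append(w.sum())
    m = max(maxes)                       # global max across chunks
    scales = [np.exp(mi - m) for mi in maxes]
    num = sum(o * c for o, c in zip(outs, scales))
    denom = sum(s_ * c for s_, c in zip(sums, scales))
    return num / denom

rng = np.random.default_rng(0)
q = rng.normal(size=16)
K = rng.normal(size=(64, 16))
V = rng.normal(size=(64, 16))
# the chunked result is mathematically identical to exact attention
assert np.allclose(flash_decode(q, K, V), attention(q, K, V))
```

The merge is exact, which is why the chunks can be processed in parallel without approximation – the speedup in practice comes from keeping all GPU cores busy when the batch is small but the sequence is long.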
Resource Efficiency and Deployment
LoftQ: Quantization for LLMs: Proposes a framework that jointly initializes quantized weights and low-rank adapters, improving performance on downstream tasks →read more
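The gist of that joint initialization – alternating between quantizing the weights and fitting a low-rank correction so that the quantized backbone plus the adapters starts close to the original weights – can be sketched with a toy uniform quantizer (LoftQ itself uses NF4 quantization per layer; the function names here are illustrative):

```python
import numpy as np

def quantize(W, bits=4):
    # toy symmetric uniform quantizer (LoftQ uses NF4 in practice)
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1)
    q = np.round(W / scale).clip(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale

def loftq_init(W, rank=4, iters=3):
    """Alternate: quantize the residual, then refit a rank-r correction
    A @ B (the LoRA initialization) so that Q + A @ B approximates W."""
    A = np.zeros((W.shape[0], rank))
    B = np.zeros((rank, W.shape[1]))
    for _ in range(iters):
        Q = quantize(W - A @ B)
        # best rank-r approximation of the remaining error via SVD
        U, S, Vt = np.linalg.svd(W - Q, full_matrices=False)
        A, B = U[:, :rank] * S[:rank], Vt[:rank]
    return Q, A, B

W = np.random.default_rng(0).normal(size=(64, 64))
Q, A, B = loftq_init(W)
# the low-rank correction recovers part of the quantization error
assert np.linalg.norm(W - (Q + A @ B)) < np.linalg.norm(W - quantize(W))
```

The point of starting fine-tuning from this initialization, rather than from quantized weights plus zero adapters, is that the model begins closer to its full-precision behavior.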
Industry Contributions
Zephyr-7B by Hugging Face: A fine-tuned model that outperforms other state-of-the-art models in several benchmarks →read more
Simulation and Real-World Interaction
Learning Interactive Real-World Simulators (UniSim): Aims to create a universal simulator for emulating real-world interactions, showing potential for broader machine learning applications →read more
In other newsletters
How And Why We Need To Implement Data Quality Now! by SeattleDataGuy
The Half-Life of the AI Stack by Matt Rickard
Lessons from History: The Rise and Fall of the Telecom Bubble by Fabricated Knowledge
Washington is Using China to Destroy Open Source by Interconnected
A Guide: How Professors Can Discourage and Prevent AI Misuse by Automated
Thank you for reading, please feel free to share with your friends and colleagues 🤍
Another week with fascinating innovations! We call this overview “Froth on the Daydream” – or simply, FOD. It’s a reference to the surrealist, experimental novel by Boris Vian – after all, AI is experimental and feels quite surrealistic, and a lot of writing on the topic is just froth on the daydream.
Leave a review!