FOD#40: Trust and Responsibility in the Age of AI
we explore AI's dual role in reshaping quality journalism's future and offer the best curated list of the freshest ML news and papers
Next Week in Turing Post:
Wednesday, Token 1.19: Explainable AI.
Friday, Guest post about Modern AI and its Unhyped Capabilities.
Turing Post is a reader-supported publication. To have full access to our most interesting articles and investigations, become a paid subscriber →
I’m a professional journalist who has worked in tech for a few decades. Since the rise of social media, it has been a tough time for journalism: so many voices appeared that the cacophony became deafening.
AI-generated content adds even more infotrash. But, surprising as it may seem, I think AI can bring us back to quality journalism, both as a risk factor and as an enabler. Two articles from last week made me think about this. The first, from Semafor, introduced their new offering: Semafor’s Signals. Using Microsoft and OpenAI tools, Signals provides diverse insights on global news, adapting to digital shifts and AI challenges. Reed Albergotti, the technology editor of Semafor, wrote:
“It’s a great example of a shift that is happening. The advent of social media was a weakening force for media organizations. AI, on the other hand, is a strengthening technology. Social media turned some journalists into stars and helped juice traffic numbers for almost every major publication. But the targeted advertising business, turbocharged by social media, siphoned money away from high-quality publications, and the traffic was just an empty promise. When people think of AI and news, the first thing that comes to mind is reporters being replaced by bots. While a handful of outlets like CNET and Sports Illustrated have been tempted to try this, those examples are just anomalies. AI-generated content is more or less spam, which doesn’t replace journalism. It drives consumers toward trusted publishers.”
I totally agree with this point: in the age of AI, there is nothing more important than having voices and media you trust. And here comes the professional journalist. The responsible journalist. Who is this person? That’s a tricky question, since in the era of AI the question of what constitutes responsible journalism gains new dimensions, and ‘responsible’ itself risks becoming a joke. Last week, for example, Goody-2 launched: a chatbot so committed to avoiding misinformation and being “responsible” that it gives only vague responses.
AI can be dangerous when misused, for audio-jacking, for example, but for journalism it offers a set of tools that significantly enhance reporting, editing, and content distribution. For instance, automated fact-checking platforms like Full Fact in the UK use AI to quickly verify claims made in public discourse, improving the accuracy and reliability of news reporting. Data journalism has also been transformed by AI, with tools like Datawrapper allowing journalists to create interactive charts and visualizations without extensive coding knowledge. And The New York Times’ experiments with personalized article recommendations show how AI can curate content tailored to individual readers’ interests, potentially increasing engagement and subscription rates.
Last week, Platformer was also contemplating the future of the web and journalism:
“To the extent that journalists have a role to play in the web of the future, it is one they will have to invent for themselves. Use Arc Search, or Perplexity, or Poe, and it is clear that there is no platform coming to save journalism. And there are an increasingly large number of platforms that seem intent on killing it.”
And here I agree again: no one is coming to save journalism, but with AI as both risk and enabler, journalism can finally return to its essence. Reflecting on journalism’s journey through the digital and AI revolutions, it becomes clear that while challenges abound, its role as a pillar of democracy remains intact. Embracing AI thoughtfully lets journalism return to its core mission: to inform, to educate, and to hold power to account, ensuring that it continues to thrive as a trusted guide in an increasingly complex world.
News from The Usual Suspects ©
Vesuvius Challenge
Using AI, three students deciphered more than 2,000 Greek characters from a carbonized Herculaneum scroll buried by the eruption of Vesuvius in 79 AD and won $700,000.
Roblox
The game company introduced AI-powered real-time chat translations in 16 languages.
Sam Altman
Sam Altman seeks $5–7 trillion for global AI chip production expansion. (That’s a lot…) Gary Marcus offers 7 reasons why the world should say no. (That’s not that many…)
OpenAI meanwhile
OpenAI hits $2 billion in annualized revenue, placing it among the fastest-growing tech firms.
OpenAI is working on two AI agents to automate diverse tasks.
“ChatGPT system prompt is 1700 tokens?!?!? If you were wondering why ChatGPT is so bad versus 6 months ago, its because of the system prompt. Look at how garbage this is. Laziness is literally part of the prompt. Formatted in the paste bin below. pastebin.com/vnxJ7kQk” – Dylan Patel (@dylan522p), Feb 7, 2024
Microsoft
Microsoft published a summary of recent research, from its own teams and around the world, on creating a new and better future of work with AI.
NVIDIA
Nvidia wants to start producing custom chips for AI companies.
Google
Google rebranded Bard to Gemini. Read Ethan Mollick on Gemini’s mixed strengths and weaknesses.
US AI Safety Institute Consortium
Nvidia, OpenAI, Microsoft, and nearly 200 other companies joined the US AI Safety Institute Consortium (AISIC) to support the safe development and deployment of generative AI.
The freshest research papers, categorized for your convenience
Large Language Models and Their Enhancements
More Agents Is All You Need: Demonstrates how increasing the number of agents in LLMs enhances performance through a sampling-and-voting method. Read the paper
Tag-LLM: Adapts general-purpose LLMs to specialized domains using custom input tags for domain- and task-specific behavior. Read the paper
BiLLM: Introduces a 1-bit post-training quantization approach for LLMs, maintaining high performance under ultra-low bit-widths. Read the paper
Direct Language Model Alignment from Online AI Feedback: Enhances model alignment through online feedback, improving exploration and performance. Read the paper
The Hedgehog & the Porcupine: Presents Hedgehog, a learnable linear attention mechanism that mimics softmax attention in Transformers. Read the paper
An Interactive Agent Foundation Model: Proposes a novel AI framework for domains like Robotics and Healthcare, integrating visual autoencoders, language modeling, and action prediction. Read the paper
DeepSeekMath: Pushes the limits of mathematical reasoning in open language models. Read the paper
SELF-DISCOVER: Enables LLMs to self-compose reasoning structures for complex problem-solving. Read the paper
Can Mamba Learn How to Learn?: Compares the in-context learning abilities of State-Space Models against Transformer models. Read the paper
Scaling Laws for Downstream Task Performance of Large Language Models: Investigates the impact of pretraining data size and type on LLMs' downstream performance. Read the paper
Rethinking Optimization and Architecture for Tiny Language Models: Studies optimizing tiny language models for mobile devices. Read the paper
Shortened LLaMA: Explores depth pruning as a method for improving LLM inference efficiency. Read the paper
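The sampling-and-voting idea behind “More Agents Is All You Need” can be sketched in a few lines: query several independent agents with the same question and take the majority answer. This is a toy illustration under assumed names, not the paper’s implementation; the `agents` below are stand-in callables rather than real LLM samples.

```python
from collections import Counter

def sample_and_vote(agents, query):
    """Query each agent independently, then return the majority answer
    together with its vote share. In the paper's setting each call would
    be an independent sample from the same LLM."""
    answers = [agent(query) for agent in agents]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / len(answers)

# Toy stand-ins for LLM samples: most "agents" answer correctly.
agents = [lambda q: "42", lambda q: "42", lambda q: "41",
          lambda q: "42", lambda q: "7"]
answer, agreement = sample_and_vote(agents, "What is 6 * 7?")
print(answer, agreement)  # → 42 0.6
```

The paper’s finding, roughly, is that accuracy of this majority vote rises as the number of sampled agents grows, with diminishing returns depending on task difficulty.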
Multimodal and Vision-Language Models
λ-ECLIPSE: Achieves personalized text-to-image generation by leveraging CLIP's latent space. Read the paper
SPHINX-X: Proposes an advanced series of Multi-modality Large Language Models focusing on model performance and training efficiency. Read the paper
SpiRit-LM: Integrates text and speech in a multimodal foundation language model for improved semantic understanding and expressivity. Read the paper
Question Aware Vision Transformer for Multimodal Reasoning: Embeds question awareness within the vision encoder for enhanced multimodal reasoning. Read the paper
EVA-CLIP-18B: Scales CLIP to 18 billion parameters, achieving significant performance improvements in image classification. Read the paper
Robotics, Autonomous Systems, and Interactive Agents
Driving Everywhere with Large Language Model Policy Adaptation: Enables adaptation to local traffic rules for autonomous vehicles using LLMs. Read the paper
Offline Actor-Critic Reinforcement Learning Scales to Large Models: Demonstrates that offline actor-critic reinforcement learning can effectively scale to large models. Read the paper
WebLINX: Introduces a benchmark for conversational web navigation, highlighting the need for models that adapt to new web environments. Read the paper
In-Context Principle Learning from Mistakes: Enhances LLM learning by inducing mistakes and reflecting on them to extract task-specific principles. Read the paper
Multi-line AI-assisted Code Authoring: Presents CodeCompose, an AI-assisted code authoring tool offering both single-line and multi-line inline suggestions. Read the paper
Time Series Forecasting, Object Detection, and Other Innovations
Lag-Llama: Introduces a foundation model for univariate probabilistic time series forecasting, showcasing strong zero-shot generalization. Read the paper
InstaGen: Enhances object detection by training on synthetic datasets generated from diffusion models. Read the paper
Implicit Diffusion: Presents an algorithm optimizing distributions defined by stochastic diffusions for efficient sampling. Read the paper
Memory Consolidation Enables Long-Context Video Understanding: Proposes a method enhancing video understanding by consolidating past activations. Read the paper
Grandmaster-Level Chess Without Search: Trains a transformer model to achieve grandmaster-level chess performance without explicit search algorithms. Read the paper
Code Representation and Quantization Techniques
CODE REPRESENTATION LEARNING AT SCALE: Introduces CODESAGE, an advanced model for code representation learning with a two-stage pretraining scheme. Read the paper
Interpretability and Foundation Models
Rethinking Interpretability in the Era of Large Language Models: Examines the role of interpretability with the advent of LLMs, advocating for a broader scope in interpretability. Read the paper
For those curious about the most transformative innovation in semiconductor manufacturing since EUV: Hybrid Bonding, by SemiAnalysis
All My Thoughts After 40 Hours in the Vision Pro by Wait But Why
A funny take on a paper that claimed that AGI had been achieved by Jack Clark from Anthropic (Import AI newsletter)
The promise and challenges of crypto + AI applications by Vitalik Buterin
If you decide to become a Premium subscriber, remember that, in most cases, you can expense this subscription through your company! Join our community of forward-thinking professionals. And please send this newsletter to your colleagues if it can help them enhance their understanding of AI and stay ahead of the curve. 🤍 Thank you for reading
How was today's FOD? Please give us some constructive feedback.