- Turing Post
Inside Perplexity AI’s Unicorn Journey: from $500M to $9B in a Year
How Perplexity AI is rewriting the rules of online search – one controversy at a time
Intro
From a roughly $500 million valuation in January to a potential $9 billion this November – this company has had four funding rounds so far this year, with a fifth on the horizon!
Today, in our GenAI Unicorn series, we explore Perplexity AI, an AI-driven search engine that leverages large language models to provide answers with cited sources. Parsing the internet in real time, it aims to deliver relevant results. This approach has sparked controversy, with debates over the validity and copyright of its sources. Despite – or perhaps because of – this, the startup has drawn millions of users and is now seeking to raise $500 million and double its valuation to $9 billion.
Perplexity AI's journey is remarkable. Two years ago, they secured initial funding through cold emails – before even having a product. Starting as a Twitter-based search tool, they soon pivoted to challenging the very foundation of online search. Just recently – in November – they offered the New York Times their services during a tech workers' strike, which of course sparked backlash, as Perplexity's CEO, Aravind Srinivas, was seen as undermining the workers' collective bargaining efforts.
How do they manage to thrive among controversies? Can they truly challenge Google? What does the future hold as SearchGPT emerges? Could an acquisition be on the horizon? Let’s dive into the details and uncover the trends shaping this unique story. It’s a long read.
In today’s episode:
How it all started
No clear idea but two demos in a day
ChatGPT Moment – four months of work abandoned with pivot to AI search
“Like Wikipedia and ChatGPT had a kid”
Perplexity Products
Tech Spec – How does Perplexity AI's intent-recognition approach differ from traditional search algorithms?
Fight hallucinations with citations
Unfortunately, citations didn’t help with controversies
Financial situation – rounds raised
Business model – diversify across the board
SearchGPT by OpenAI and other challenges
Competitors
Future: Acquisitions and potential synergies
Conclusion
Bonus: Resources
How it all started
Aravind Srinivas – the co-founder and CEO of Perplexity – initially wanted to study Computer Science at the Indian Institute of Technology (IIT) Madras but was admitted to Electrical Engineering. That didn't stop his interest in algorithms and programming.
“A friend mentioned a machine learning contest to me. At the time, I didn't even know what ML was. It turned out to be fun, and I ended up winning the contest without spending too much time on it – it just came naturally. That’s when I decided to dive deeper into it,” Aravind Srinivas recalls.
In 2017, the same year he earned his Bachelor's and Master's degrees in Electrical Engineering from IIT Madras, he went to Berkeley to work on his Ph.D. in Computer Science. But the most brilliant part was his strategic internships.
Internship: 2018, May-Aug – OpenAI, Research on Policy Gradient Algorithms
“I came to Berkeley thinking I was definitely one of the top AI PhD students. Then I joined OpenAI, and it hit me hard – everyone was so much better than me. It was a big reality check.”
That summer of 2018, OpenAI published its first GPT (Generative Pre-trained Transformer) model.
"We realized there was a new way of learning – using all the internet data to learn from it – and I felt that was going to be crucial. I told my advisor in Berkeley, ‘This is the right direction; we should pursue it.’ Surprisingly, he was open-minded and said, ‘Alright, I’m not a specialist in this, but let’s give it a try.’ So we spent a lot of time – holidays, weekends – just learning, coding, and understanding everything we could. We did this for two years, which eventually led me to a new research focus: combining generative AI and reinforcement learning. This approach powers technologies like ChatGPT, which doesn't just predict the next word but ensures it knows how to communicate effectively with humans."
Internship: 2019, May-Sept – DeepMind, Large-Scale Contrastive Learning: CPCv2
In 2020, Aravind Srinivas met Denis Yarats over email after they independently published similar research papers on AI training methods just two days apart – Srinivas at UC Berkeley and Yarats at NYU. This shared academic interest sparked an ongoing dialogue between them on AI advancements.
Internship: 2020-2021, May-April – Google, Transformers for Vision: Bottleneck Transformers, HaloNet. SoTA Vision Models: Copy-Paste Augmentation, ResNet-RS.
While at Google, Srinivas started thinking: “How could we innovate in search with Google so dominant?” At the time, he was also reading the book “In the Plex: How Google Thinks, Works, and Shapes Our Lives”, and was very inspired by it. Transformers seemed to hold massive potential for a breakthrough in search. Aravind even reached out to Ashish Vaswani, one of the creators of transformers at Google, saying, “I want to work on this with you – it’s the next big thing.”
Unfortunately, the timing wasn’t right. So in 2021, he joined OpenAI as a Research Scientist, focusing on language and diffusion generative models. Then GitHub Copilot – a tool that helps programmers complete code as they write – achieved real adoption and profitability.
The timing was finally right to bring generative technology to market. In 2022, Aravind Srinivas, then at OpenAI, and Denis Yarats, at Meta AI, teamed up with Andy Konwinski, a co-founder of Databricks, and Johnny Ho, a former Quora engineer and Wall Street quant trader, to develop… just something new.
No clear idea but two demos in a day
Thank you for reading and supporting Turing Post 🤍 We appreciate you