Dear curious mind,
In today's issue, we explore how voice-powered chatbot search is revolutionizing the way we find information online. I'll share my experience switching from Google to the AI chatbot Perplexity.
In this issue:
💡 Shared Insight
Voice-Powered Chatbot Search: A Game-Changer for Information Discovery
📰 AI Update
Snipd Introduces AI-Powered Book Discovery for Podcast Listeners
Arc Search's Android Launch
🎧 Media Recommendation
Podcast: Anthropic View on AI Safety and Responsible Development
💡 Shared Insight
Voice-Powered Chatbot Search: A Game-Changer for Information Discovery
Since I started using the internet in 2021, every web search I performed went through Google. OK, not every search, as I experimented with privacy-friendly alternatives like DuckDuckGo and Startpage, but the quality gap always led me back to Google. In recent months, however, my search habits have undergone a dramatic shift: I've largely abandoned Google in favor of AI chatbots, particularly Perplexity.
Unlike traditional search engines that return a list of links, chatbot search provides direct answers with cited sources. This approach saves valuable time by eliminating the need to click through multiple websites to piece together the information you need. The natural language interface also makes the process more intuitive: you can ask follow-up questions and get clarifications just as you would in a conversation. And in contrast to the default ChatGPT, you get links to the web sources that were used to generate the reply.
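If you like to script things, the same question-plus-follow-up pattern also works programmatically. The sketch below assumes Perplexity's OpenAI-compatible chat completions endpoint; the base URL, the "sonar" model name, and the citations field are assumptions on my side, so verify them against the official API documentation before relying on them:

```python
# Minimal sketch of conversational search via an OpenAI-compatible chat API.
# Assumptions: the https://api.perplexity.ai endpoint, the "sonar" model name,
# and the "citations" field may differ from the current Perplexity API docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",     # placeholder key
    base_url="https://api.perplexity.ai",  # assumed OpenAI-compatible endpoint
)

# Ask in natural language instead of typing keywords.
messages = [{"role": "user",
             "content": "Which built-in voice typing options do Windows and macOS offer?"}]
reply = client.chat.completions.create(model="sonar", messages=messages)
answer = reply.choices[0].message.content
print(answer)

# Continue the conversation with a follow-up, just as you would when speaking.
messages += [{"role": "assistant", "content": answer},
             {"role": "user", "content": "And which of them work fully offline?"}]
follow_up = client.chat.completions.create(model="sonar", messages=messages)
print(follow_up.choices[0].message.content)

# Cited web sources, if the provider returns them (field name is an assumption).
print(getattr(follow_up, "citations", "no citations field returned"))
```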
To use a chatbot for web search effectively, you have to adapt the way you search. Instead of entering keywords, ask questions in natural language to get the best results. The biggest downside is that you need to enter more words, which takes time and sometimes feels cumbersome. A second shift in usage removes this downside almost completely: I enter my request via voice input instead of typing. This combination of chatbot search and voice commands has transformed how I find information online.
Voice Input: The Missing Piece
To maximize the efficiency of chatbot search, voice input of your detailed request is a game changer. Speaking is significantly faster than typing, and all the major operating systems already have built-in voice-to-text capabilities:
Android: Google's Gboard voice typing
Windows: Native voice typing in Windows 10/11
macOS: Built-in dictation feature
Linux: Open-source solutions like aidful-whisper-typer
All of the built-in solutions named above use cloud-based transcription, which raises potential privacy concerns, though this hardly matters here since your search request is sent to a cloud server anyway. However, you can also perform the voice transcription locally. For example, the Linux solution named above is a program I developed when I couldn't find one that met my needs. It starts recording after you press a global shortcut, stops after you press the same shortcut again, and outputs the transcript, generated in a fraction of a second by the open-source Whisper model running on my GPU. In theory, it should also run on Windows and macOS, but I have not verified this. A rough sketch of this approach follows below.
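To make the idea concrete, here is a rough sketch of a hotkey-toggled local dictation loop in Python. This is not the actual aidful-whisper-typer code, just an illustration built on openai-whisper, sounddevice, keyboard, and pyautogui; the shortcut, model size, and platform details (the keyboard library typically needs root on Linux) are assumptions you would need to adapt:

```python
# Sketch: press a global shortcut to start recording, press it again to stop,
# transcribe locally with Whisper, and type the result into the focused window.
import numpy as np
import sounddevice as sd
import whisper
import keyboard
import pyautogui

SAMPLE_RATE = 16_000                 # Whisper expects 16 kHz mono audio
model = whisper.load_model("base")   # small local model; larger ones are more accurate

chunks = []                          # audio captured while recording is active
stream = None

def _collect(indata, frames, time, status):
    chunks.append(indata.copy())     # buffer microphone data from the callback

def toggle_recording():
    """First press: start recording. Second press: stop, transcribe, type."""
    global stream
    if stream is None:
        chunks.clear()
        stream = sd.InputStream(samplerate=SAMPLE_RATE, channels=1,
                                dtype="float32", callback=_collect)
        stream.start()
        print("Recording... press the shortcut again to stop.")
    else:
        stream.stop()
        stream.close()
        stream = None
        if not chunks:
            return
        audio = np.concatenate(chunks).flatten()
        text = model.transcribe(audio, fp16=False)["text"].strip()
        pyautogui.write(text)        # type the transcript into the active input field
        print(f"Typed: {text}")

keyboard.add_hotkey("ctrl+alt+space", toggle_recording)  # shortcut is arbitrary
keyboard.wait()                      # keep the script alive, listening for the hotkey
```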
Popular Chatbot Search Options
There are several platforms that offer chatbot-based search capabilities. The best known are:
Perplexity: Currently my top recommendation for reliable results with cited sources
You: Started as a personalization-focused search engine, but shifted to a chat-based AI assistant
ChatGPT Search: Available to Plus subscribers
Gemini: The chatbot from Google, recently enhanced with a "Grounding" feature
Arc Search: A mobile app (covered in the AI Update section of this issue)
While chatbot search can dramatically improve your information-finding efficiency, it's crucial to maintain a critical mindset. Don't blindly trust the outputs; always review the cited sources and apply your own judgment. Think of these tools as powerful assistants rather than the only source of truth.
The shift from traditional search to chatbot-based search might feel uncomfortable at first. However, the combination of natural language interaction and voice input creates a more efficient and intuitive way to find information. I encourage you to give it a try; you might be surprised at how quickly it becomes your preferred method of searching the web.
📰 AI Update
Snipd Introduces AI-Powered Book Discovery for Podcast Listeners (@snipd_app on X)
I love that my favorite podcast app Snipd now auto-extracts book mentions from episodes and presents them very nicely! It was already outstanding before, as I could highlight a passage recommending a book by pressing a button, but now Snipd is delivering even greater value.
Arc Search's Android Launch (@browsercompany on X)
While Arc Search showcases interesting AI features, it doesn't match the source correlation of competitors like Perplexity, which directly links citations to parts of the generated content. The Browser Company's recent decision to discontinue their popular desktop Arc browser due to "complexity" while announcing a new, simpler AI-powered browser raises questions about their direction (YouTube video from the CEO). Arc Search feels more like an interim experiment than a fully realized product, especially given their hints at a different upcoming AI-focused release. Nevertheless, give it a try yourself; you can do so seamlessly without creating an account.
🎧 Media Recommendation
Podcast: Anthropic View on AI Safety and Responsible Development
Anthropic CEO Dario Amodei and research scientists Amanda Askell and Chris Olah recently shared valuable insights into the development and safety of the chatbot Claude in a podcast interview with Lex Fridman.
Key discussion points included:
Current state and rapid advancement of large language models like Claude
The critical importance of aligning AI systems with human values
Details about Anthropic's "Responsible Scaling Plan" which establishes different safety requirements based on AI model capabilities
Deep dive into "mechanistic interpretability", a method for understanding AI models' internal workings
This conversation is particularly relevant as it provides rare direct insights from key leaders at Anthropic, one of the major players in AI development alongside OpenAI and Google.
My take: While the full 5+ hour conversation is fascinating, not everyone has the time to listen to it entirely. Using the transcript with NotebookLM offers a fantastic way to interact with this content efficiently. You can generate quick summaries, find specific topics of interest, and jump directly to those segments in the video/audio using the provided timestamps. I've found this particularly useful for technical discussions like this one, where you might want to revisit specific explanations about AI safety or mechanistic interpretability. The ability to chat with the content helps in understanding complex concepts at your own pace. If you're interested in exploring this approach, you can either create your own notebook or use my shared one to get started quickly.
Disclaimer: This newsletter is written with the aid of AI. I use AI as an assistant to generate and optimize the text. However, the amount of AI used varies depending on the topic and the content. I always curate and edit the text myself to ensure quality and accuracy. The opinions and views expressed in this newsletter are my own and do not necessarily reflect those of the sources or the AI models.