🤖 Learn to talk to AI
Dear curious mind,
Welcome to this week's edition, where I dive into the fascinating field of AI prompting. This issue is all about mastering how to communicate with AI and getting the most out of these tools to improve your productivity and creativity.
In this issue:
💡 Shared Insight
The Art of Prompting: Unlocking the Full Potential of AI
📰 AI Update
OpenAI's New o1 Model Marks a Leap Forward in AI Reasoning
📚 Media Recommendation
Book: Co-Intelligence by Ethan Mollick
💡 Shared Insight
The Art of Prompting: Unlocking the Full Potential of AI
When ChatGPT quickly exploded in popularity with its November 2022 release, many thought the skill of prompting would be short-lived. Prompting refers to crafting clear, detailed instructions that guide the AI system to produce the desired output.
Fast-forward nearly two years, and it's clear that prompting is still key to getting the best results from AI. Prompting is the primary way users communicate with language models like ChatGPT. Just like giving instructions to a person, the clearer and more precise the prompt, the better the AI's response will be.
When prompting, precision matters. If you're vague, the AI will likely miss the mark. But to be honest, 90% of my own requests are simple and straightforward. You don't always need a highly detailed prompt to get useful answers. It all depends on the task.
Adding detail helps when you have very specific expectations for the reply. Here are the most common strategies that can boost your results (a short sketch combining several of them follows the list):
Instruction-based prompting: Provide specific instructions to help the AI understand the context better.
Few-shot prompting: Share example outputs, so the AI can mimic the style and format you're aiming for.
Persona-based prompting: To get more specialized insights, ask the AI to act as a domain expert (e.g., a teacher or a scientist).
Multi-turn prompting: Engage in a back-and-forth conversation to gradually refine responses.
Chain-of-thought prompting: Guide the AI to reason through the task step-by-step, encouraging clearer and more logical answers.
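To make these strategies concrete, here is a minimal Python sketch using the OpenAI Python SDK that combines persona-based, few-shot, and chain-of-thought prompting in one request. The model name, the persona, and the example question/answer pair are placeholders I chose for illustration, not part of any official recipe.

```python
# Minimal sketch: combining persona-based, few-shot, and chain-of-thought
# prompting with the OpenAI Python SDK. Model name, persona, and example
# pairs are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

messages = [
    # Persona-based prompting: tell the model what kind of expert it is.
    {"role": "system",
     "content": "You are an experienced science teacher who explains concepts to high-school students."},
    # Few-shot prompting: show an example question/answer pair so the model
    # can mimic the desired style and format.
    {"role": "user", "content": "Explain gravity in two sentences."},
    {"role": "assistant",
     "content": "Gravity is the force that pulls objects with mass toward each other. "
                "It is why things fall to the ground and why planets orbit the sun."},
    # The actual request, with a chain-of-thought instruction added.
    {"role": "user",
     "content": "Explain photosynthesis in two sentences. "
                "Think through the key steps before writing your final answer."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```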
A prompt that works for one chatbot or language model might not work for another. Different models have different capabilities, training data, and underlying architectures, so the same prompt can produce very different results. It's important to keep this in mind and be willing to experiment to find what works best for the specific model you are using. Try rearranging the order of your instructions, adding or removing parts, mixing prompting techniques, and testing different formulations.
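As a sketch of that kind of experimentation, the snippet below sends the same prompt to two different models and prints the answers side by side so you can compare them. The model names are placeholders; swap in whichever models you actually have access to.

```python
# Sketch: send the same prompt to several models and compare the answers.
# The model names are placeholders for whichever models you have access to.
from openai import OpenAI

client = OpenAI()
prompt = "Summarize the benefits of chain-of-thought prompting in three bullet points."

for model in ["gpt-4o-mini", "gpt-4o"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```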
When you find something that works consistently, save it. I use Espanso, an open-source text expander that's available for Windows, macOS, and Linux. It allows me to reuse prompts instantly with a keystroke.
I think prompting is not a temporary skill; it is the way you communicate with AI. Just like with people, clarity and precision lead to better outcomes. Once you start using AI for everything you do, you will figure out over time when to keep it simple and when to add more detail.
📰 AI Update
OpenAI's New o1 Model Marks a Leap Forward in AI Reasoning
OpenAI just launched their latest model, o1 (formerly code-named "Strawberry"), and it's a game changer in reasoning and problem-solving. This model surpasses its predecessors in areas like competitive programming (ranking in the 89th percentile) and outperforms even Ph.D.-level experts in physics, biology, and chemistry.
o1 has been trained to use "chain-of-thought" reasoning for every question it answers. This means it breaks problems down step-by-step, leading to clearer and more accurate responses.
A major innovation is the use of additional compute during inference: when o1 takes more time to "think" about a problem, its performance improves. Previous models like GPT-4 sometimes drift off track when left to run too long, but o1 stays focused.
Instead of pouring all resources into training bigger and bigger models, OpenAI is focusing on optimizing the model's "thinking time." This could redefine how we measure and improve AI performance, making future models less reliant on massive compute resources.
Currently, two versions are available via the API and within ChatGPT: o1-preview and o1-mini. Using them in ChatGPT is heavily rate-limited; after a few messages you have to wait before you can continue.
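If you want to try the new models programmatically, here is a minimal sketch using the OpenAI Python SDK. As far as I know, the o1 API at launch accepts only user messages (no system prompt, no temperature setting); treat that constraint, and the example question, as assumptions and check the current documentation.

```python
# Minimal sketch: calling o1-mini through the OpenAI Python SDK.
# Assumption: at launch the o1 models accept only user messages
# (no system prompt, no temperature) -- check the current docs.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-mini",
    messages=[
        {"role": "user",
         "content": "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"},
    ],
)
print(response.choices[0].message.content)
```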
There are mixed opinions on X about the coding capabilities of these models. They are good at coding, no question, but you have to judge for yourself whether they beat the current state of the art, Claude 3.5 Sonnet from Anthropic, for your usage scenarios.
Beyond the currently released o1-preview and o1-mini, OpenAI is already highlighting even better performance on reasoning benchmarks for the final o1 model.
My take: This shift towards improving models through reasoning, rather than just raw power, is huge. It opens up new possibilities for more intelligent AI systems without having to scale up models. Interestingly, this was achieved by baking the chain-of-thought prompting technique into the model itself via reinforcement learning. However, I would really appreciate an option to see the detailed reasoning the model goes through in its "thinking" process; so far, only a rough summary is shown.
📚 Media Recommendation
Book: Co-Intelligence by Ethan Mollick
"Co-Intelligence" by Ethan Mollick offers an insightful exploration of how generative AI is reshaping our world, particularly in the fields of work, creativity, and decision-making.
The book emphasizes real-world applications of AI, providing concrete strategies for integrating AI into various aspects of work and life. Ethan's four principles offer actionable guidance for readers:
Always invite AI to the table.
Be the human in the loop.
Treat AI like a person (but tell it what kind of person it is).
Assume this is the worst AI you will ever use.
While acknowledging the transformative potential of AI, Mollick doesn't shy away from discussing the limitations and potential pitfalls.
The book's central theme, humans and AI working together as "co-intelligence," offers a vision of a future in which the two complement each other.
My take: The book is written in an engaging and accessible style, making it suitable for both tech-savvy readers and those new to AI concepts. It offers a forward-looking approach to living and working alongside AI, equipping readers with the knowledge and strategies needed to navigate the AI-enhanced world we're rapidly entering. Whether you're excited about AI's potential or concerned about its implications, this book provides valuable insights and practical guidance for thriving in the age of artificial intelligence.
Disclaimer: This newsletter is written with the aid of AI. I use AI as an assistant to generate and optimize the text. However, the amount of AI used varies depending on the topic and the content. I always curate and edit the text myself to ensure quality and accuracy. The opinions and views expressed in this newsletter are my own and do not necessarily reflect those of the sources or the AI models.