👤 Personalized content is here
Dear curious mind,
Last week was indeed super crazy, with major releases from OpenAI, Anthropic, and Google. While Google hosted its annual developer conference, Google I/O, announcing many AI developments, OpenAI and Anthropic also made efforts to capture attention with their own significant releases. However, there wasn't much happening in the open-source AI world that this newsletter focuses on. That, combined with attending my best friend's wedding, meant I unfortunately wasn't able to write an issue last week. Apologies if you were looking for it!
This week, we're diving into a fascinating shift underway in how we interact with information: the move from static content towards dynamic, personalized experiences powered by AI. As AI models become increasingly capable and accessible, they are fundamentally changing how we learn, work, and engage with digital media. This transformation raises important questions for both consumers and creators.
In this issue:
💡 Shared Insight
The Death of Static Content: Why AI is Making Traditional Content Feel Ancient
📰 AI Update
Gemma 3n: A New Family of Lightweight Models from Google
Devstral: Mistral Releases a Powerful Coding Agent Model
Bagel: Bytedance Releases Multimodal 14B MoE Model with Image Generation
🎙 Media Recommendation
Podcast: Why Nations Are Building Their Own AI Models and AI Infrastructure
💡 Shared Insight
The Death of Static Content: Why AI is Making Traditional Content Feel Ancient
A recent observation from Tiago Forte perfectly expresses a shift I have been experiencing and thinking about myself:

This resonates deeply with my own experience. The traditional model of watching a YouTube tutorial or reading through long how-to articles increasingly feels inefficient compared to the dynamic, personalized responses I can get from AI in seconds. The shift Tiago describes is a fundamentally different approach to knowledge consumption. Traditional content is created once and consumed by many, regardless of their specific context, skill level, or immediate needs. AI-powered learning, on the other hand, can adapt to your exact situation, answering your specific questions with your particular background in mind.
Think about the last time you looked for specific information: How much time did you spend searching for the right video, skipping through irrelevant sections, or trying to apply generic advice to your specific situation? This inefficiency becomes obvious when you compare that experience to asking an AI chatbot that can immediately understand your context and provide targeted guidance. So far, this has mainly applied to written content, but that is about to change: the latest AI model releases are multimodal, and audio and video generation are improving at a fast pace.
I experienced this transformation firsthand when experimenting with NotebookLM's podcast generation feature. While the tool was impressive for creating audio content from documents, the real revolutionary moment came with the beta feature that allows you to join the conversation and interrupt the AI-generated hosts to ask your own questions in real time. This interaction transforms passive consumption into an active dialogue.
The Audience of One
We are watching the emergence of what is called "audience of one" content. Instead of searching through multiple YouTube videos to find that one piece of information you need, imagine generating a personalized video explanation tailored specifically to your knowledge level and learning style. With recent advances in AI video generation, such as Google's Veo 3, which produces genuinely mind-blowing results, this feels closer than ever before.
The implications extend beyond educational content. If AI can generate personalized explanations, tutorials, and even entertainment on demand, what happens to the vast libraries of static content we have built? While static entertainment will likely maintain its popularity, since we still enjoy shared cultural experiences, informational content seems ripe for disruption.
The Privacy Aspect
The future of learning is not about consuming more content; it is about having more intelligent conversations with AI systems that understand your goals, adapt to your pace, and respond to your questions in real time. However, I personally do not want to share all this information with cloud-based models, and I look forward to powerful AI models that are capable of running on my own hardware. The recent release of the Bagel model from Bytedance (covered in the AI Update section of this issue) shows what is possible today for privacy-friendly local setups.
To sum it up, static content is not dead yet, but it is definitely starting to feel ancient.
📰 AI Update
Gemma 3n: A New Family of Lightweight Models from Google [Google blog]

Google's Gemma 3n models are the newest additions to the Gemma family of lightweight, state-of-the-art open-weights models built on the same research and technology as the Gemini models. Designed for efficiency, the new models (with raw parameter counts of 5B and 8B) run with a memory footprint comparable to 2B and 4B models thanks to a new Per-Layer Embeddings technique. They support text, audio, and image input and excel at content-understanding tasks like question answering, summarization, and reasoning. The models are well suited for resource-constrained environments like laptops, desktops, and mobile devices. You can already explore their capabilities on Android phones by installing the Google AI Edge Gallery app.
On my Pixel 8 Pro, I achieved only 6 tokens per second on the GPU, which is quite disappointing compared to the more than 16 tokens per second Google states for the same model on a Samsung S25 Ultra.
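If you want to benchmark the model on your own hardware, a few lines of Python are enough. Below is a minimal sketch using Hugging Face transformers to time generation and compute tokens per second. Note that the checkpoint name google/gemma-3n-E2B-it is my assumption of the published model ID (check the official model card), and that on-device runtimes like Google AI Edge will behave differently from a desktop GPU.

```python
# Minimal sketch: measure local text-generation throughput (tokens/sec).
# The model ID below is an assumption -- verify it on the official model card.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-3n-E2B-it"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Summarize the benefits of on-device AI models in three sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=128)
elapsed = time.perf_counter() - start

# Count only the newly generated tokens, not the prompt.
new_tokens = outputs.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens/sec")
```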
Devstral: Mistral Releases a Powerful Coding Agent Model [Hugging Face model card]

Devstral-Small-2505 is a 24-billion-parameter agentic LLM designed for software engineering tasks, created in a collaboration between Mistral AI and All Hands AI, the team behind the open-source agent framework OpenHands (initially known as OpenDevin). The model excels at agentic coding tasks like codebase exploration and multi-file editing, achieving top performance among open models on the SWE-Bench benchmark, which evaluates LLMs on real-world software issues from GitHub. Built upon Mistral-Small-3.1, it offers a long 128k-token context window and is lightweight enough to run locally on high-end consumer hardware (e.g., a single RTX 3090 with 24GB of VRAM or a Mac with 32GB of RAM).
Devstral is open source (Apache 2.0 Licence) and can be integrated with tools like OpenHands for efficient agent workflows and API access. Another integration I am going to explore is using Devstral in the open-source AI code editor Zed.
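If you want to try Devstral outside of these integrations, one straightforward path is to serve it locally with an OpenAI-compatible server (for example vLLM) and query it from a script. The following is a minimal sketch, not an official recipe: the port, base URL, and served model name are assumptions you would adjust to your own setup.

```python
# Minimal sketch: query a locally served Devstral via an OpenAI-compatible API
# (e.g., after starting `vllm serve mistralai/Devstral-Small-2505`). Base URL
# and model name are assumptions -- adjust them to match your local server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="unused-for-local-servers",   # local servers typically ignore this
)

response = client.chat.completions.create(
    model="mistralai/Devstral-Small-2505",  # assumed served model name
    messages=[{
        "role": "user",
        "content": "Given a repo with src/, tests/, and pyproject.toml, "
                   "where should a new CLI entry point live and why?",
    }],
    temperature=0.2,
)
print(response.choices[0].message.content)
```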
Bagel: Bytedance Releases Multimodal 14B MoE Model with Image Generation [Bagel website]
Bytedance unveiled Bagel, a groundbreaking open-weights multimodal model released under the Apache Licence. It uses a Mixture-of-Experts (MoE) architecture with 14B parameters (7B active) and offers text generation with a thinking mode, image understanding, and image generation, including precise editing and style transfer.
For me, the ability to run advanced image synthesis and manipulation on my own computer is especially exciting, but the tooling for these capabilities is not fully there yet. Recently, a Bagel ComfyUI workflow was released. If you want to stay informed about new Bagel integrations, I advise you to follow the updates on the Bagel GitHub page.
🎙 Media Recommendation
Podcast: Why Nations Are Building Their Own AI Models and AI Infrastructure
As AI capabilities continue their rapid advancement, the geopolitical implications of this powerful technology are becoming more prominent. Nations around the world are starting to view AI as a critical strategic asset, like natural resources or military strength.
A recent episode of the a16z podcast, titled "Sovereign AI: Why Nations Are Building Their Own Models", dives deep into this emerging global trend.
The discussion highlights why countries are investing heavily in building their own AI infrastructure and models, aiming for "Sovereign AI" rather than relying exclusively on foreign technology. Here are three of my key insights from the episode:
AI Models as Cultural Infrastructure: A key driver for national AI is the recognition that foundation models are becoming cultural infrastructure. Countries are hesitant to be in a position where another nation controls the core models that might shape their information, communication, and decision-making processes, reinforcing the desire for full control.
The Global Race for Compute: Currently, the US and China are identified as having the compute capacity to be 'hyper centres'. Other nations, most recently Saudi Arabia, are actively seeking to join this group, trading other resources or assets to acquire the necessary AI infrastructure.
AI Data Centres as New Oil Reserves: The podcast draws a compelling analogy, comparing advanced AI data centres and computing power to the oil reserves of the Industrial Revolution. They are seen as essential national infrastructure, crucial for building industry, exporting goods, driving economic development, and projecting global power.
Disclaimer: This newsletter is written with the aid of AI. I use AI as an assistant to generate and optimize the text. However, the amount of AI used varies depending on the topic and the content. I always curate and edit the text myself to ensure quality and accuracy. The opinions and views expressed in this newsletter are my own and do not necessarily reflect those of the sources or the AI models.