🤝 AidfulAI Newsletter #16: Answer 1000 Questions to Preserve Wisdom for Your Children's Children and Your Personal AI
Dear curious minds,
Welcome to the ultimate newsletter for those interested in Artificial Intelligence (AI) and Personal Knowledge Management (PKM).
Major AI News
🧠💻 Custom Instructions in ChatGPT: A Step Towards Personalized AI Interaction
Custom instructions are a new feature that was initially released last month for ChatGPT Plus users and is now rolling out to all users on the free plan. The exceptions are the UK and EU, where availability is announced to follow soon.
OpenAI provides an introduction blog article and a usage FAQ.
Two input fields guide the custom instructions: what you want ChatGPT to know about you and how you want it to respond.
Custom instructions can make ChatGPT more useful, personalized, and efficient, as you don't have to repeat or remind the model of your context every time; a small API-level sketch follows below.
Examples in this Medium article showcase custom instructions for concise writing, technical understanding, and on-point summarization.
One powerful example, also referenced in the article above, is shared by @nivi in an 𝕏 post.
My take: This has been possible in the OpenAI playground for ages, but having it in the nice ChatGPT UI might be a game changer. Sadly, it only works with the "Chat history & training" option activated.
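If you use the API instead of the ChatGPT UI, you can approximate custom instructions by prepending a system message. Here is a minimal sketch, assuming the openai Python package (pre-1.0 interface) and an API key in your environment; the two field texts are my own illustrative placeholders, not OpenAI's wording.

```python
import openai  # pre-1.0 openai package; assumes OPENAI_API_KEY is set in the environment

# Hypothetical contents of the two custom-instruction fields,
# emulated here as a single system message.
ABOUT_ME = "I am a software engineer who mainly writes Python and prefers concise answers."
HOW_TO_RESPOND = "Answer briefly, use bullet points where possible, and include code examples."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": f"{ABOUT_ME}\n\n{HOW_TO_RESPOND}"},
        {"role": "user", "content": "How do I read a CSV file?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```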
Privacy-Friendly AI
🔍👀 Zoom's New Terms: AI Training Using Customer Data
Zoom, a video meeting platform, can now use customer data to train its AI models. According to its new terms of service, this includes product usage information, telemetry and diagnostic data, and similar content or data collected by the company. This was summarized in a CNBC article.
Zoom does not provide an opt-out option for this data collection.
Customer content such as messages, files, and documents is not included in this data collection, but Zoom can use this content if the user enables two new generative AI features (a video summary and a chat message composing tool) that are currently in beta.
My take: Who really reads changed terms of service? Most likely, many people will continue to use services that collect all the data they can get to train their own AI models.
🦜💻 StableCode: A New Frontier in Coding Assistance by Stability AI
Stability AI announces StableCode, a generative LLM product designed for coding assistance, in a blog article.
Three different 3B-parameter models cover coding assistance:
The base model was trained on a diverse set of programming languages and a large corpus of 560 billion tokens of code to learn general coding patterns and structure. It is intended to do single- and multi-line code completion from a context window of up to 4k tokens. The model checkpoints are released under the Apache 2.0 license, which allows commercial use.
The long-context-window model can handle reviewing, editing, and autocompleting of up to 16k tokens, which is equivalent to up to five average-sized Python files at once, making it an ideal tool for learning and developing more complex code. This model is also released under the Apache 2.0 license.
The instruction model is a generative AI product for coding that can follow natural language instructions and generate code snippets that match the user's intent. It is released under the StableCode Research License and you have to share your contact information and accept the conditions to access it.
The models show competitive performance compared to models of the same size on the HumanEval benchmark, which is a set of 164 handwritten coding problems in Python, each with a prompt, a solution, and a test suite. The code generation models are given the prompts and asked to generate solutions that pass the test suites.
My take: The results are not on the level of larger models on the HumanEval benchmark, as stated on the Papers with Code leaderboard. Nevertheless, a model that is feasible to run locally is the right direction for use as a base model in many companies; a quick local-usage sketch follows below.
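For readers who want to try such a 3B completion model locally, here is a minimal sketch using Hugging Face transformers. The model identifier and generation settings are my assumptions; check the official model cards for the exact names and loading instructions.

```python
# Assumes the transformers and torch packages and enough RAM/VRAM for a 3B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "stabilityai/stablecode-completion-alpha-3b-4k"  # assumed identifier, verify on the model card

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Ask the model to complete a function signature.
prompt = "def fibonacci(n):\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```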
PKM and AI
🎤🧠 Preserving Wisdom for Future Generations and Your Personal AI
The Save Wisdom project by Brian Roemmele aims to capture and preserve human wisdom.
The project seeks to create wisdom legacies and pass insights on to future generations; reflecting on life's key questions also fosters self-awareness and empathy.
You are meant to use a simple audio voice recorder to capture your insights as answers to 1000 questions. The questions are open source and stated here (you might need to scroll down a bit).
Besides storing your wisdom for future generations, the project aims to build AI models locally to digitally preserve and interact with your wisdom.
My take: I am looking forward to answering the 1000 questions myself and extracting wisdom from them with AI assistance. From my current point of view, this is likely the most life-changing experience anyone can have. To be honest, I'm a little bit scared of what answering the questions will unfold. I expect to gain insights just by answering the questions, and even more at a later stage by using AI to analyze my preserved responses and answer questions from them.
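To make the "local AI on your wisdom" idea a bit more concrete, here is a minimal sketch of one possible approach, not something prescribed by the Save Wisdom project: transcribe the recordings locally with the open-source Whisper model and search the transcripts with sentence embeddings. File names and model choices are placeholders.

```python
# Assumes the openai-whisper and sentence-transformers packages; file names are placeholders.
import whisper
from sentence_transformers import SentenceTransformer, util

# 1. Transcribe a recorded answer locally.
stt = whisper.load_model("base")
transcript = stt.transcribe("question_001_answer.mp3")["text"]

# 2. Embed the transcripts so they can be searched later.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
answers = [transcript]  # in practice: all 1000 transcribed answers
answer_embeddings = embedder.encode(answers, convert_to_tensor=True)

# 3. Ask a question against your preserved answers and print the closest match.
query = "What did I learn from my first job?"
query_embedding = embedder.encode(query, convert_to_tensor=True)
best = util.semantic_search(query_embedding, answer_embeddings, top_k=1)[0][0]
print(answers[best["corpus_id"]])
```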
Podcasts
⌨🛠️ Treating Prompt Engineering Like Code
In episode #167 of the MLOps podcast, Demetrios interviews Maxime Beauchemin, the founder and CEO of Preset and the creator of Apache Airflow and Apache Superset.
Maxime introduces Promptimize, a tool that he developed to scientifically evaluate the effectiveness of prompts for language models.
He explains how Promptimize allows users to create test suites for their prompts, similar to software engineering, and how it can improve the quality, reliability, and performance of their AI-powered products.
Maxime discusses the challenges and opportunities of prompt engineering, such as finding the right edge cases, tracking prompt versions, monitoring speed and cost, fine-tuning models, and avoiding catastrophic forgetting.
My take: It is not clear whether prompt engineering will be needed forever. Nevertheless, it is valuable today, and Promptimize is one product that offers an analytical way to get better at it (see the small sketch below).
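To illustrate what "treating prompts like code" can look like, here is a minimal pytest-style sketch of a prompt test suite. It is my own illustration, not Promptimize's actual API; the completion call assumes the openai Python package (pre-1.0 interface) and an API key in your environment.

```python
# Assumes the openai (pre-1.0) and pytest packages and an OPENAI_API_KEY in the environment.
import openai
import pytest


def complete(prompt: str) -> str:
    """Send a prompt to the model and return the raw text response."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # low randomness makes the tests more stable
    )
    return response["choices"][0]["message"]["content"]


# Each case pairs a prompt with a simple, automatically checkable assertion,
# similar to a unit test for ordinary code.
@pytest.mark.parametrize(
    "prompt,expected_substring",
    [
        ("Answer with a single word: what is the capital of France?", "Paris"),
        ("Return only valid JSON with a key 'ok' set to true.", '"ok"'),
    ],
)
def test_prompt_behaviour(prompt, expected_substring):
    assert expected_substring in complete(prompt)
```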
Disclaimer: This newsletter is written with the help of AI. I use AI as an assistant to generate and optimize the text. However, the amount of AI used varies depending on the topic and the content. I always curate and edit the text myself to ensure quality and accuracy. The opinions and views expressed in this newsletter are my own and do not necessarily reflect those of the sources or the AI models.