🤝 AidfulAI Newsletter #10: The New Era of One-on-One Tutoring
Dear curious minds,
Welcome to your go-to AI and PKM digest! This edition delves into innovative updates in AI tech, ways to make AI learning more effective, and a crucial conversation about the urgent need for AI regulation. Let's explore the latest in AI and how it's reshaping our world!
Major AI News
📸🎨 Midjourney’s 5.2 Update Unveils Camera-like Zoom Feature
Midjourney has released version 5.2 of its AI-powered image synthesis model, which includes a new “zoom out” feature. The feature keeps the original synthesized image at the center while automatically building a larger scene around it, similar to zooming out with a camera lens. Available through Midjourney’s Discord server, it lets users zoom out by a factor of 1.5x, 2x, or a custom value, and even change the prompt along the way. A Make Square button generates material around the existing image to reach a 1:1 square aspect ratio. Note that the feature only works with images generated in Midjourney. Matt Wolfe showed in a video shared in a tweet that applying the feature repeatedly to the same close-up of a face can produce consistent characters in Midjourney, which previously was quite hard to achieve.
Besides the “zoom out” feature, v5.2 also includes an overhauled aesthetic system, better image quality and various other minor changes. Users have expressed excitement, calling the new features “stunning” and “mind-blowing”. However, the use of AI for image synthesis remains controversial due to the way many AI systems, including Midjourney, are trained using images from the web without artist consultation, credit, or permission.
Privacy-Friendly AI
⌨🔐 Rift: The future of AI code assistants is open-source, private, secure, and on-device
Morph has taken a significant step towards privacy-friendly AI code assistants by launching Rift, an innovative AI-native language server and Visual Studio Code extension.
With its open-source nature, Rift aims to provide a private, secure, and on-device experience for developers. Rift integrates the replit-code-v1-3b model from Replit, which was highlighted in the GPT4All passage of the previous issue of this newsletter, and also supports other models available on the Hugging Face Hub. This release represents a shift towards empowering individuals with tools that ensure privacy and security while harnessing the capabilities of AI to write code.
🤖🦩 OpenFlamingo: An LLM processing pairs of images and text
OpenFlamingo is an open-source framework for training and using LLMs that can process paired image and text inputs, which makes them particularly well suited to tasks such as captioning and visual question answering.
The latest release, OpenFlamingo V2, includes five new models that are significantly more efficient compared to their predecessors. These models range from 3 to 9 billion parameters and have been built on top of open-source LLMs from Together and MosaicML.
OpenFlamingo models are getting closer to the DeepMind Flamingo models they replicate, achieving over 80% of Flamingo’s performance across various datasets. OpenFlamingo is still under development, but it has the potential to be a valuable tool for AI researchers and end users. If you are interested in using OpenFlamingo, try it on Hugging Face.
PKM and AI
🧠💡 Revolutionizing Tutoring: How AI Makes Personalized One-on-One Learning Accessible and Effective
In “The Unreasonable Effectiveness of 1-1 Learning”, Dan Shipper emphasizes the value of personalized one-on-one tutoring. He points to a renowned study in educational psychology, Benjamin Bloom’s “two sigma” finding, which showed that the average student tutored one-on-one performs better than 98% of students taught in a conventional classroom.
However, one-on-one tutoring can be expensive. This is where AI, especially advanced language models in a chat interface like ChatGPT, comes into play. Through AI tutoring systems, learners can have interactive and tailored educational experiences at a fraction of the cost of hiring a human tutor. These systems can provide instant feedback, adapt to the learner's pace, and cover a wide array of topics.
Nonetheless, AI tutors, especially large language models (LLMs), sometimes produce incorrect or “hallucinated” information. To mitigate this issue, users can employ several strategies:
Cross-Verification: Users should cross-verify the information provided by the AI tutor with other credible sources to ensure accuracy.
Critical Thinking: It is essential to think critically about the responses and ask follow-up questions to probe the AI tutor for additional information or clarification.
Feedback Mechanism: Users should provide feedback when the AI tutor produces incorrect information. This helps in improving the models over time.
Combination of Human and AI Tutoring: Utilize AI for initial learning and clarification, but also seek human experts for more in-depth or nuanced understanding.
Use state-of-the-art models: As much as I like open-source models, as of today their answers are often worse than those of the best proprietary models, most likely GPT-4 for now.
Use a model with access to web search: Such models can pull in current information, which helps generate more accurate and informative responses and often provides the sources used to create the answer.
By combining the convenience and affordability of AI with critical thinking and human expertise, learners can harness the power of one-on-one tutoring in a more cost-effective and reliable manner. Pick a topic and give it a try by asking ChatGPT for a study plan.
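The "study plan" suggestion above can also be scripted rather than typed into a chat window. The sketch below is an illustrative assumption, not a prescribed template: the prompt wording and the `build_study_plan_prompt` helper are mine, and the commented-out API call assumes the official `openai` Python package and a configured API key.

```python
# A minimal sketch of turning "ask ChatGPT for a study plan" into a reusable
# prompt. Note how the prompt also asks the model to name sources, which
# supports the cross-verification strategy listed above.

def build_study_plan_prompt(topic: str, weeks: int = 4) -> str:
    """Compose a one-on-one-tutor prompt for a multi-week study plan."""
    return (
        f"Act as a patient one-on-one tutor. Create a {weeks}-week study plan "
        f"for learning {topic}. For each week, list learning goals, exercises, "
        "and questions I should be able to answer by the end of the week. "
        "Where you state facts, name sources I can use to cross-verify them."
    )

prompt = build_study_plan_prompt("linear algebra", weeks=6)
print(prompt)

# To send the prompt to a model (assumes the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(reply.choices[0].message.content)
```

Keeping the prompt in a small helper like this makes it easy to reuse for new topics and to refine the wording as you learn what kind of plans work best for you.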
Podcasts
⏳🛡️ AI Is a Ticking Time Bomb: We Need to Understand What Humans Truly Want
In the latest Bankless podcast, Episode 177, Connor Leahy, CEO of the AI alignment research startup Conjecture, warns against unregulated AI development and urges government intervention to ensure safety. Unlike previous Bankless guest Paul Christiano, who expressed hope in AI alignment, Connor perceives the unchecked progression of AI power as a significant threat. For a safe AI future, he advocates aligning advancements with human values. But are these values clearly defined?
Before listening to the 95-minute podcast episode, you might take a look at the highlights I created and the AI summary of them generated by Snipd.
Disclaimer: This newsletter is written with the help of AI. I use AI as an assistant to generate and optimize the text. However, the amount of AI used varies depending on the topic and the content. I always curate and edit the text myself to ensure quality and accuracy. The opinions and views expressed in this newsletter are my own and do not necessarily reflect those of the sources or the AI models.