Data Machina #251

Six Nerdy AI Activities for the Long W/E. I’ve just read that lots of AI engineers in the US are running the rat race and feeling burned out. Here in the European AI scene, things are naturally a bit more relaxed.

Aah… a long bank holiday in London; so much stuff to do in this amazing city! But if the AI FOMO is kicking in and you can’t survive a long weekend IRL, here are six AI activities for you:

  1. Generate comics with AI. I gave it a go, generated a few short comics, and I’m having fun so far. The AI team at ByteDance just introduced an impressive diffusion-based, zero-shot, text-to-image and image-to-video model that generates amazing videos and comics. Check out the demo, paper and repo here: StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation. Make sure you click the Comic Generation Demo link, and be patient.
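
If you’d rather poke at code than use the hosted demo, the repo ships its own pipeline. The toy sketch below only illustrates the panel-per-prompt workflow with a stock SDXL pipeline from diffusers; it deliberately does not implement StoryDiffusion’s consistent self-attention, so expect your characters to drift between panels (exactly the problem the paper fixes).

```python
# Minimal panel-per-prompt sketch with a stock SDXL pipeline (diffusers).
# NOTE: this does NOT implement StoryDiffusion's consistent self-attention;
# it only illustrates the generate-one-image-per-panel workflow.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

character = "a red-haired girl in a yellow raincoat"
panels = [
    f"{character} finds a mysterious map, comic panel style",
    f"{character} walks through a rainy city at night, comic panel style",
    f"{character} opens an old wooden door, comic panel style",
]

for i, prompt in enumerate(panels):
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"panel_{i}.png")  # characters will drift without consistency tricks
```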

  2. Learn how to build and use a robust AI Agents stack. I absolutely believe the future is going to be millions of AI Agents working and generating income for people. In this vid, Tony shows how to create an AI agent that fetches all the comments on a YouTube video and generates insights to improve video content. Tony uses an AI Agent stack that looks very solid: 1) the CrewAI agent framework, 2) the nifty Ollama, 3) Groq, the super-fast AI inference engine, and 4) AgentOps, the observability tool for AI agents.
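
For a rough idea of how those pieces snap together, here is a minimal sketch. Assumptions: GROQ_API_KEY is set in the environment, and fetch_comments() is a hypothetical stand-in for the YouTube-comments tool used in the video (AgentOps would just be an agentops.init() on top; swap Ollama in for Groq to run fully local).

```python
# Minimal sketch of the CrewAI + Groq stack from the video: one agent that
# turns YouTube comments into content-improvement insights.
# ASSUMPTIONS: GROQ_API_KEY is set; fetch_comments() is a hypothetical
# placeholder for a real YouTube Data API call or CrewAI tool.
from crewai import Agent, Task, Crew
from langchain_groq import ChatGroq

def fetch_comments(video_id: str) -> list[str]:
    # Hypothetical helper -- replace with a real comment fetcher.
    return ["Loved the pacing!", "Audio was too quiet in the intro."]

llm = ChatGroq(model_name="llama3-70b-8192", temperature=0)

analyst = Agent(
    role="YouTube Comment Analyst",
    goal="Extract actionable insights to improve future videos",
    backstory="You analyse audience feedback for a video creator.",
    llm=llm,
)

comments = "\n".join(fetch_comments("VIDEO_ID"))
insights = Task(
    description=f"Analyse these comments and suggest concrete improvements:\n{comments}",
    expected_output="A bullet list of actionable content improvements.",
    agent=analyst,
)

print(Crew(agents=[analyst], tasks=[insights]).kickoff())
```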

  3. Play the AI Town game on your computer. I’ve played this game and it’s quite addictive! AI Town is an MIT-licensed game, developed by a16z, in which AI characters live, chat and socialise in a virtual town. You can play AI Town online here. But if, like me, you hate cloud signups and want to create your own custom AI Town, check this out: How to create your own AI Town with Llama-3 based agents in your local environment, using the nifty Ollama and one-click deployment with the amazing Pinokio AI browser. The video below provides more details on deploying AI Town locally with Pinokio.
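
Before pointing AI Town’s agents at a local model, it’s worth a quick smoke test that Ollama is actually serving Llama-3. A minimal check with the ollama Python client (assumes the Ollama daemon is running and `ollama pull llama3` has completed):

```python
# Quick smoke test that a local Llama-3 is up behind Ollama before wiring
# AI Town's agents to it. Assumes the daemon is running and the model was
# pulled first with:  ollama pull llama3
import ollama

reply = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Introduce yourself as an AI Town villager."}],
)
print(reply["message"]["content"])
```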

  4. Read the latest on In-Context Learning (ICL). There is a debate among AI researchers on whether in-context learning within a long context window can fully beat fine-tuning on highly curated data, in terms of domain knowledge and accuracy of model outputs. Let’s see… (a minimal many-shot sketch follows this reading list.)

    1. This is a great post on that: Is Fine-Tuning Still Valuable? A reaction to a recent trend of disillusionment with fine-tuning

    2. Ethan, a well-known AI researcher, is more assertive: Fine-tuning is dead. Prompts have closed the gap.

    3. DeepMind: Many-Shot In-Context Learning. Many-shot in-context learning works very well and can be applied universally. “We find that both Reinforced and Unsupervised ICL can be quite effective in the many-shot regime, particularly on complex reasoning tasks.”

    4. In-Context Learning with Long-Context Models: An In-Depth Exploration. “We conclude that although long-context ICL can be surprisingly effective, most of this gain comes from attending back to similar examples rather than task learning.”

    5. MSR: Make Your LLM Fully Utilize the Context. In this paper, Microsoft proposes a solution to the “lost-in-the-middle” problem, in which LLMs struggle to use information located in the middle of a long context.
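
As promised above, mechanically many-shot ICL boils down to this: pack far more labelled examples into the prompt than classic few-shot, then ask for the next label. A minimal sketch, assuming any long-context chat endpoint (the model name here is a stand-in, not an endorsement):

```python
# Minimal sketch of many-shot in-context learning in the spirit of the
# DeepMind paper's setup: cram as many labelled examples as the context
# window allows into one prompt, then ask for the next label.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY; any long-context model works

def many_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Concatenate (input, label) pairs into one long many-shot prompt."""
    shots = "\n\n".join(f"Input: {x}\nLabel: {y}" for x, y in examples)
    return f"{shots}\n\nInput: {query}\nLabel:"

# Illustrative padding only -- real many-shot uses hundreds of DISTINCT examples.
examples = [("I loved it", "positive"), ("Waste of money", "negative")] * 250
prompt = many_shot_prompt(examples, "The plot dragged but the ending was great")

resp = client.chat.completions.create(
    model="gpt-4o",  # stand-in: use any model with a large context window
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```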

  5. Read the 2024 State of AI Readiness Report. A nice report with good insights and cool charts. The research team at Scale AI interviewed 1,800 AI/ML practitioners on the latest AI trends, applied AI, and what it takes beyond “adopting AI.” Link to the report: Scale Zeitgeist 2024 AI Readiness Report, 3rd ed. (pdf, 47 pages)

  6. Read this free book and fall down the rabbit hole of designing Neural Nets: “This primer is an introduction to this fascinating field [of differentiable programming applied to NNs] as imagined for someone, like Alice, who has just ventured into this strange differentiable wonderland.” Link: Alice’s Adventures in a Differentiable Wonderland
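
If you want a taste of the book’s subject before committing, differentiable programming is essentially this: write an ordinary numerical function, let an autodiff system take its gradient, and descend it. A toy JAX sketch (mine, not from the book):

```python
# The core idea of differentiable programming in a few lines: write an
# ordinary numerical function, let autodiff differentiate it, and descend
# the gradient. A toy one-weight "network" fit with JAX.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    return jnp.mean((w * x - y) ** 2)  # a one-weight linear model

grad_loss = jax.grad(loss)  # d(loss)/dw, derived automatically

w = 0.0
x = jnp.array([1.0, 2.0, 3.0])
y = jnp.array([2.0, 4.0, 6.0])
for _ in range(100):
    w -= 0.1 * grad_loss(w, x, y)  # plain gradient descent
print(w)  # converges towards 2.0
```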

Have a nice week.

  1. Agentic Workflows and Building Models that Self-Learn

  2. Stanford – Machine Unlearning in 2024

  3. KANs: A New [better?] Alternative to the Multi-Layer Perceptron

  4. [deep dive] MOMENT: A Foundation Model for Time-series Tasks

  5. Google TeraHAC: A New Algo for Clustering Trillion-Edge Graphs

  6. How to Build Domain-specific Datasets for Training AI Models

  7. Amazon Q: A Generative AI Assistant for Biz & Devs

  8. Advanced RAG 101 – How to Build Agentic RAG with llama3

  9. $100-$500K Fast Compute Grants for AI Researchers

  10. LMSYS Kaggle Chatbot Competition – Predicting Human Preference

Share Data Machina with your friends

  1. [notebooks] Examples of Automated Multi Agent Chat with Autogen

  2. AgencySwarm – An Opensource Agent Orchestration Framework

  3. Powerful Automatic Speech Recognition + Diarisation + Speculative Decoding

  1. Diving into The Math Behind RNNs

  2. Multivariate Time-series Forecasting with CNNs

  3. Geometric Deep Learning: The Erlangen Programme of ML

  1. Tencent AI: More Agents is All You Need

  2. Meta AI: A Simple Recipe to Improve CoT Reasoning with DPO+NLL

  3. Octopus v4: A Graph of LMs to Integrate Multiple Specialised Open Models

  1. How to Monitor a Deep Learning Stack in Production

  2. Automatic Model Deployment with MLflow & GitHub Actions

  3. MLRun – An Opensource MLOps Platform for Continuous ML Apps

  1. MS MARCO Web Search: 10 Billion High-quality Web Pages

  2. WildChat Dataset: 1 Million Real-world User-ChatGPT Interactions

  3. OpenStreetView-5M: 5.1 Million Geo-referenced Street View Images

Enjoyed this post? Tell your friends about Data Machina. Thanks for reading.


Tips? Suggestions? Feedback? Email Carlos.

Curated by @ds_ldn in the middle of the night.






