Data Machina #260

Vision-Language Models Booming. VLMs are experiencing a boom. Large foundation models like OpenAI GPT-4o, Anthropic Claude 3.5 Sonnet, and Google Gemini Pro 1.5 keep showing amazing vision-language capabilities and still dominate the benchmarks. But in the race to democratise VLMs at an efficient cost of operation, while maintaining performance, a new breed of small, versatile, specialised VLMs is emerging and becoming very powerful. And that is great!

Start here: The best intro to VLMs, 2024. Probably, by far, the best introduction to VLMs: a mega paper in the format of a PDF book, published by Meta AI, NYU, MILA, MIT and several other unis. Paper: An Introduction to Vision-Language Modelling.

Recommended: Tutorial on VLMs, CVPR June 2024. This tutorial covers the latest approaches and theories on: 1) learning VLMs for multimodal understanding and generation, 2) benchmarking and evaluating VLMs, and 3) agents and other advanced systems based on vision foundation models. All the slides and video sessions from the tutorial are here: Recent Advances in Vision Foundation Models.

Trend: Small but powerful VLMs. This is an unstoppable trend: we’re starting to see some amazing, small (even tiny) VLMs that are very powerful. Some are fully open-source, or at least open/open-weights, and in some cases achieve SOTA in several benchmarks. Here are 5 small VLMs you should know:

LLaVA-NeXT (interleaved). Key feature: an image-text interleaved format that unifies multi-image, video, and 3D tasks in one model with excellent performance. Released in 0.5B, 7B, and 7B-DPO versions, it achieves SOTA in many benchmarks and leads among open-source VLMs. Check out the repo, demo, and blog here: LLaVA-NeXT: Open Large Multimodal Models.

PaliGemma. Key feature: it takes both images and text as inputs and can efficiently answer questions about images with detail and context. PaliGemma is an open, lightweight VLM based on the more powerful PaLI-3 family of VLMs from Google. You can get the base, pre-trained PaliGemma and the fine-tuned PaliGemma-FT version. Sonu posted a nice in-depth review of PaliGemma and how to fine-tune it.
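If you want to poke at PaliGemma quickly, here is a minimal sketch using the Hugging Face transformers API; the checkpoint id, image URL and question are illustrative assumptions, not taken from the review above:

```python
# Minimal PaliGemma sketch via Hugging Face transformers (illustrative, not an official recipe).
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"  # assumed checkpoint; swap for a -pt or fine-tuned variant
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image URL and question.
image = Image.open(requests.get("https://example.com/photo.jpg", stream=True).raw)
inputs = processor(text="What is in this image?", images=image, return_tensors="pt")

# The model conditions its answer on both the image tokens and the text prompt.
output_ids = model.generate(**inputs, max_new_tokens=30)
answer_ids = output_ids[0][inputs["input_ids"].shape[1]:]  # drop the echoed prompt
print(processor.decode(answer_ids, skip_special_tokens=True))
```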

Phi-3 Vision. Key feature: very strong multi-modal, high-quality reasoning, trained on dense text and image data. Phi-3 Vision is based on the latest family of Phi-3 models released by Microsoft. You can get the latest model version released, Phi-3-Vision-128K-Instruct, which achieves SOTA in several benchmarks and offers quite a long context. Sam recently published a nice review, testing the model on some real-world cases.
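For reference, a hedged sketch of querying Phi-3-Vision-128K-Instruct through transformers with trust_remote_code; the image URL and question are placeholders:

```python
# Sketch of chatting with Phi-3-Vision-128K-Instruct (assumes the Hugging Face checkpoint).
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3-vision-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Placeholder image and question; <|image_1|> marks where the image goes in the chat turn.
image = Image.open(requests.get("https://example.com/chart.png", stream=True).raw)
messages = [{"role": "user", "content": "<|image_1|>\nSummarise this chart in one sentence."}]
prompt = processor.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(prompt, [image], return_tensors="pt")

output_ids = model.generate(**inputs, max_new_tokens=100)
new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]  # keep only the generated answer
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```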

Florence-2. Key feature: a unified, prompt-based representation for a variety of vision and vision-language tasks. Published by Microsoft, Florence-2 was designed to take text prompts as task instructions and generate the desired results in text form, whether it be captioning, object detection, grounding or segmentation. Paper: Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks. The team at Roboflow just released an in-depth tutorial on fine-tuning Florence-2.
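To give a rough idea of that prompt-based interface, here is a sketch assuming the Hugging Face Florence-2-base checkpoint (the image URL is a placeholder); the task is selected with a token such as <OD> or <CAPTION>:

```python
# Sketch of Florence-2's task-token interface (illustrative; see Roboflow's tutorial for fine-tuning).
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-base"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Placeholder image; "<OD>" asks for object detection, "<CAPTION>" for captioning, etc.
image = Image.open(requests.get("https://example.com/street.jpg", stream=True).raw)
task = "<OD>"
inputs = processor(text=task, images=image, return_tensors="pt")

generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=512,
)
raw_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
# The processor parses the raw text into boxes/labels for the chosen task.
print(processor.post_process_generation(raw_text, task=task, image_size=image.size))
```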

InternLM-XComposer 2.5. Key feature: ultra-high-resolution, fine-grained image and video understanding in a small model with long context. This VLM supports 96K long-context input and output, and achieves GPT-4V-level capabilities in many benchmarks with just a 7B LLM backend. Repo, demo and technical report: InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output. Two days ago, Fahd published a good review of this model and some of its installation pains.

NVIDIA VLMs playground. NVIDIA just released a nice demo playground for several of the VLMs above. Try it here: NVIDIA VLMs.

A new multi-modal model benchmark. Reflecting the trend in VLMs, about a week ago LMSYS.org, the well-known AI model benchmarking org, introduced a new multi-modal benchmark that covers vision-language capabilities. It’s still early days, and most of the multi-modal models covered are closed, proprietary models. Blogpost: The Multi-Modal Arena is here.

Have a nice week.

  1. My Fine-tuned Models Beat GPT-4

  2. Machine Learning in Trackmania Video Game

  3. Classical PCA Through a Latent Variable Lens

  4. Playing Probabilistic AI Chess (game board, blogpost)

  5. Databricks Mosaic AI Agent Framework & Agent Evaluation

  6. What If We Recaption Billions of Web Images with LLaMA-3?

  7. [free book] The Orange ML Book: Essentials for Tabular Data

  8. AutoGluon ML Beats 99% of Data Scientists with 4 Lines of Code

  9. Berkeley AI Hackathon 2024: All the Winners & Karpathy’s Keynote

  10. Gradient Boosting like LLMs: Zero-shot Forecasting with ML models

Share Data Machina with your friends

  1. Training Mixture-of-Experts at Scale with PyTorch

  2. MS GraphRAG – Graph-based Modular RAG (repo, paper)

  3. MindsDB – A Platform for Customising AI from Real-time Enterprise Data

  1. [tutorial] What is Max Out in Deep Learning?

  2. Image Self Supervised Learning (SSL) on a Shoestring

  3. [free ebook] Understanding DL, 02/07/24 (+68 notebooks)

  1. SMAT: Meta-Tuning for Few-shot Generalisation with Sparse MoEs

  2. A Benchmark of Tabular ML in-the-Wild with Real-world Datasets

  3. MIT Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion

  1. BlackDagger- DAG-based Automation for MLOps

  2. MLOps Pipeline: Churn Prediction with Kubeflow & KServe

  3. Best in Open Source MLOps: Platforms, Frameworks & Tools

  1. PERSONA HUB – 1 billion Personas Auto-Synth Generated

  2. Megalith-10m Dataset: 10 Million Links to Flickr Images

  3. OpenVid 1M: A Large-Scale Dataset for HQ Text-to-Video Generation

Enjoyed this post? Tell your friends about Data Machina. Thanks for reading.

Tips? Suggestions? Feedback? email Carlos

Curated by @ds_ldn in the middle of the night.




