VLG-CBM: Training Concept Bottleneck Models with Vision-Language Guidance
Divyansh Srivastava and 2 other authors
[Submitted on 18 Jul 2024]

Abstract: Concept Bottleneck Models (CBMs) provide interpretable predictions by introducing an intermediate Concept Bottleneck Layer (CBL), which encodes human-understandable concepts to explain the model's decisions. Recent works proposed utilizing Large Language Models (LLMs) and pre-trained Vision-Language Models (VLMs) to automate the training of CBMs, making the process more scalable and automated. However, existing approaches still fall short in two aspects: first, the concepts predicted by the CBL often mismatch the input image,…
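To make the architecture described in the abstract concrete, the following is a minimal sketch of a generic Concept Bottleneck Model in PyTorch: a vision backbone feeds a linear Concept Bottleneck Layer whose outputs (one score per human-understandable concept) are the sole input to the final classifier. All module names, the ResNet-18 backbone, and the dimensions are illustrative assumptions, not the paper's released implementation.

```python
# Minimal CBM sketch (assumed PyTorch setup; names are illustrative,
# not taken from the VLG-CBM codebase).
import torch
import torch.nn as nn
from torchvision import models


class ConceptBottleneckModel(nn.Module):
    def __init__(self, num_concepts: int, num_classes: int):
        super().__init__()
        # Vision backbone producing image features (hypothetical choice).
        backbone = models.resnet18(weights=None)
        feature_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        # Concept Bottleneck Layer: one logit per human-understandable concept.
        self.cbl = nn.Linear(feature_dim, num_concepts)
        # Final predictor sees only concept scores, so class decisions
        # can be explained in terms of the concepts.
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, images: torch.Tensor):
        features = self.backbone(images)
        concept_logits = self.cbl(features)      # interpretable bottleneck
        class_logits = self.classifier(concept_logits)
        return concept_logits, class_logits


# Usage example with dummy inputs.
model = ConceptBottleneckModel(num_concepts=128, num_classes=10)
concepts, preds = model(torch.randn(2, 3, 224, 224))
```

The design point the abstract relies on is that the classifier never sees raw features, only concept scores; the issues it raises (concept/image mismatch and unintended information leaking through concept values) concern how faithfully those scores reflect the named concepts.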