25 May
Stanford’s Center for Research on Foundation Models (CRFM) recently released a new piece of work called the Foundation Model Transparency Index (FMTI), which sets out to score large language models (LLMs) on various aspects of transparency in how a model is built and deployed. Work on understanding the transparency of LLMs is crucial for building trust and creating realistic evaluation standards for this extremely powerful technology. However, the FMTI makes many claims that are misleading about both the spirit and the facts of LLM transparency, and it is detrimental to recent progress on transparency. Our core issues are: The FMTI misleadingly…