Evaluating LLMs is a minefield

We have released annotated slides for a talk titled Evaluating LLMs is a minefield. We show that current ways of evaluating chatbots and large language models don’t work well, especially for questions about their societal impact. There are no quick fixes, and research is needed to improve evaluation methods.

The challenges we highlight are somewhat distinct from those faced by builders of LLMs or by developers comparing LLMs for adoption. Those challenges are better understood and are tackled by evaluation frameworks such as HELM.

You can view the annotated slides here.

The slides were originally presented at a launch event for Princeton Language and Intelligence, a new initiative to strengthen LLM access and expertise in academia.

The talk is based on previous posts from our newsletter.