Developing an LLM: Building, Training, Finetuning

Ahead of AI



If your weekend plans include catching up on AI developments and understanding Large Language Models (LLMs), I’ve prepared a 1-hour presentation on the development cycle of LLMs, covering everything from architectural implementation to the finetuning stages.

The presentation also includes an overview and discussion of the different ways LLMs are evaluated, along with the caveats of each method.

Below, you’ll find a table of contents to give you an idea of what this video covers (the video itself has clickable chapter marks, allowing you to jump directly to topics of interest):

00:00 – Using LLMs

02:50 – The stages of developing an LLM

05:26 – The dataset

10:15 – Generating multi-word outputs

12:30 – Tokenization

15:35 – Pretraining datasets

21:53 – LLM architecture

27:20 – Pretraining

35:21 – Classification finetuning

39:48 – Instruction finetuning

43:06 – Preference finetuning

46:04 – Evaluating LLMs

53:59 – Pretraining & finetuning rules of thumb
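
If you’d like a concrete taste of the pretraining chapter before watching: the core objective there is next-token prediction, where the model learns to predict each token from the ones before it. Below is a minimal, hypothetical PyTorch sketch of a single pretraining step; the toy model and all names are my own illustration for this post, not code from the video.

```python
import torch
import torch.nn as nn

# Toy stand-in for a real transformer: embedding + linear output head.
# The shape of the training step is what matters here, not the model.
vocab_size, emb_dim, context_len = 256, 32, 8

model = nn.Sequential(
    nn.Embedding(vocab_size, emb_dim),
    nn.Linear(emb_dim, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Inputs are token IDs; targets are the same sequence shifted by one,
# so the model is trained to predict the next token at every position.
tokens = torch.randint(0, vocab_size, (4, context_len + 1))  # fake batch
inputs, targets = tokens[:, :-1], tokens[:, 1:]

logits = model(inputs)                       # (batch, seq, vocab)
loss = nn.functional.cross_entropy(
    logits.flatten(0, 1), targets.flatten()  # next-token prediction loss
)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

The finetuning stages covered later in the video (classification, instruction, and preference finetuning) reuse this same loop shape but swap in different datasets and loss targets.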

It’s a slight departure from my usual text-based content, but if you find this format useful and informative, I might occasionally create and share more videos like this in the future.

Happy viewing!


