Let’s Understand the “Engine Room” Behind ChatGPT (LLMs). Most of us in HSE use AI like a smart typing tool. This week let’s go one level deeper, so we know when to trust AI, when to challenge it, and how to control it on real site work.
Course # 2: Andrej Karpathy – Intro to Large Language Models (LLMs)
This is a general-audience lecture by Andrej Karpathy that explains what LLMs are, how they are trained, and how they generate answers – the technical “engine room” behind tools like ChatGPT. It is published on his YouTube channel as “Intro to Large Language Models”.
Andrej Karpathy is an AI researcher and the founder of Eureka Labs, which focuses on modernizing education in the age of AI. He previously served as Director of AI at Tesla and was a founding member of OpenAI. During his PhD at Stanford, he designed and was the lead instructor of the university’s first deep learning course (CS231n), which became one of its most popular classes. Karpathy also maintains an index of his LLM learning resources on his site, where this talk is listed alongside related videos. (Link given below)
Why this matters for HSE professionals..? Most HSE teams use AI like a “smart typing tool.” This course helps us understand:
* Why AI sometimes sounds confident but can be wrong
* Why adding the right context changes the quality of the output
* What “tokens,” “context window,” and “training vs inference” mean
* How to set realistic expectations when using AI for incident summaries, SOP drafts, checklists, and training content
That understanding is the difference between “using AI” and controlling AI.
What will we learn..?
* Why AI can sound confident and still be wrong
* Training vs inference (AI isn’t “checking the site”; it’s predicting text)
* Tokens & context window (AI reads chunks, not pages – too much noise means missed critical controls)
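The “tokens and context window” point can be made concrete with a toy sketch. This is a deliberate simplification (real LLMs split text into subword pieces, not whitespace words, and the window size here is made up), but it shows why a long, noisy prompt can push critical details out of what the model actually sees:

```python
# Toy illustration of two LLM ideas: tokenization + a fixed context window.
# Real models use subword tokenizers and neural networks; this is only a sketch.

def tokenize(text):
    """Naive whitespace 'tokenizer' (real LLMs split into subword pieces)."""
    return text.split()

def fit_context(tokens, context_window=8):
    """A model can only 'see' the last N tokens; older ones fall out."""
    return tokens[-context_window:]

report = ("Worker entered confined space without permit gas test "
          "supervisor approval or rescue plan in place")
tokens = tokenize(report)
visible = fit_context(tokens, context_window=8)

print(len(tokens))  # 15 tokens in the full report
print(visible)      # only the last 8 reach the model; the early details are gone
```

Notice that “without permit” has fallen outside the window, so the model would answer as if it never existed. That is why trimming prompts to the relevant context matters more than pasting everything in.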
If we understand tokens, context window, and training vs inference, we stop treating AI like magic and start treating it like a tool under our control.
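Karpathy’s “predicting text, not checking the site” point can also be sketched with a toy bigram model. This is purely illustrative (real LLMs use large neural networks, and the corpus and function names below are invented), but the training-then-inference split is the same idea:

```python
# Toy 'training vs inference' sketch: a bigram model learns which word tends
# to follow which, then predicts the next word. The predict-the-next-token
# idea is the same in real LLMs, just at vastly larger scale.
from collections import Counter, defaultdict

def train(corpus):
    """Training: count which token follows which across the text."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def predict_next(model, word):
    """Inference: return the most frequent follower; nothing is 'verified'."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "wear a hard hat on site",
    "wear a harness at height",
    "wear a hard hat near cranes",
]
model = train(corpus)
print(predict_next(model, "hard"))  # 'hat' - the statistically likely follower
```

The model answers “hat” because that pattern was frequent in training, not because it inspected any site. Confident-sounding output is just a strong statistical pattern, which is exactly why it can still be wrong.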