AI buzz is everywhere, but when you actually dive into the code, terms like “vectors,” “embeddings,” and “RAG” can feel a bit like alphabet soup. In this video, Kevin starts by reminding us that these aren’t just marketing slogans: they’re the very primitives your next AI-powered service will rely on. Getting them straight up front means smoother development, fewer surprises in production, and easier conversations when spreading the AI joy to your colleagues.
We can think of a vector as the coordinates of a point along a journey to some destination. The more points we record along the journey, the more detail we capture; and the more detail we have, the more precisely we can compare one journey with another. This is where embeddings and semantic search come in.
By the end of this section, you’ll have a mental picture of your data living in a giant multi-dimensional galaxy—and embeddings as the star charts guiding your quest.
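That comparison between journeys has a concrete form: cosine similarity, which measures how closely two vectors point in the same direction. Here's a minimal sketch using toy three-dimensional vectors as stand-in "embeddings" (real embeddings have hundreds or thousands of dimensions, and the example words and values are made up for illustration):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Compare two vectors by the angle between them: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" -- in practice these come from an embedding model.
cat = [0.9, 0.8, 0.1]
kitten = [0.85, 0.75, 0.2]
car = [0.1, 0.2, 0.9]

print(cosine_similarity(cat, kitten))  # near 1.0: similar meaning
print(cosine_similarity(cat, car))     # much lower: different meaning
```

Semantic search is essentially this calculation repeated across your whole knowledge base: embed the query, then find the stored vectors whose direction is closest to it.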
If embeddings are the map, prompts are the directions you give your AI agent. Kevin emphasizes that writing great prompts is part art, part science.
Prompt engineering is about iterating on your prompt until the model reliably does what you expect. Kevin walks through a simple "summarize this paragraph" example, showing how small tweaks, like adding "in two sentences" or "as a bullet list", dramatically change the output. It's a bit like calibrating a lens: adjust the focus until the picture is sharp.
Except that with LLMs, the thing you're trying to focus on can be unpredictable, even random.
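The iteration Kevin describes can be sketched as plain string assembly: a base task plus optional constraints that steer length and format. The `build_prompt` helper below is hypothetical, just a way to make the tweaking explicit:

```python
def build_prompt(task: str, *constraints: str) -> str:
    """Assemble a prompt from a base task plus optional output constraints."""
    return " ".join([task, *constraints])

base = "Summarize this paragraph:"
print(build_prompt(base))                                         # loose: model picks length and format
print(build_prompt(base, "Use two sentences."))                   # constrained length
print(build_prompt(base, "Format the answer as a bullet list."))  # constrained format
```

Each variant is a small turn of the focus ring; you keep the tweaks that sharpen the output and discard the ones that don't.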
Large language models are amazing, but they only know what they were trained on. Most model providers publish a training cutoff date for each model. And possibly even more important than keeping a model up to date is giving it access to private, application- or enterprise-specific information at inference time.
Retrieval-Augmented Generation (RAG) fixes that. Kevin shows a flowchart: user question → vector search against your knowledge base → pass the top-N documents plus the question to the LLM → return an answer that is grounded in your data and, hopefully, doesn't hallucinate.
Up to now, we’ve talked about reactive AI: you ask, it answers. Agentic AI is the next step: AI that can make decisions, take actions, and iterate on its own.
Kevin breaks down several simple examples of how orchestrating one or more agents can add superpowers to any application.
Kevin then ties these pieces together in a hands-on demo.
Kevin closes with a reminder that building agentic AI isn’t about replacing humans; it’s about amplifying their abilities. By automating routine decisions and integrating seamlessly with your existing systems, you free your team to focus on the parts of the job that truly need creativity and judgment.