Agentic AI 101: What We Taught (and Learned) in the Classroom

Updated: Oct 14

The story of artificial intelligence has always been one of evolution. From symbolic reasoning in the 1950s, to IBM’s Deep Blue defeating Garry Kasparov in the 1990s, to the release of ChatGPT in 2022 that gained one million users in just five days, each milestone has built toward something bigger. That “something” is Agentic AI.


Where traditional AI processes information or makes predictions, Agentic AI can plan, decide, and act autonomously. It represents a shift from tools that respond to human input to systems that work alongside us, extending what teams and individuals can achieve without constant supervision.


At our recent Agentic AI 101 training with students at ITE, we explored this shift in depth. Here’s a primer on the concepts we covered together.


ITE students learning Agentic AI with Good Bards

A Short History of AI


AI was formally established in 1956 at the Dartmouth Workshop. In its early years, research focused on symbolic AI, encoding human reasoning into rules and logic. These early systems could play checkers or solve word problems, but they struggled with complexity due to limited computing power.


Over time, progress came in waves. Funding surged, then dropped in “AI winters,” only to rise again as breakthroughs emerged. The 1980s and 90s brought neural networks, which took inspiration from the human brain. By 1997, IBM’s Deep Blue had beaten Kasparov at chess, proving AI could rival humans in structured, rule-heavy tasks.


The 2000s and 2010s unlocked a new era powered by big data, machine learning, and deep learning. AI systems became capable of recognizing images, interpreting speech, and processing natural language. The release of ChatGPT in 2022 marked another turning point: for the first time, generative AI became mainstream, enabling anyone to produce text, stories, or code on demand.


Each of these stages paved the way for today’s leap into Agentic AI, where systems don’t just answer questions but begin to act with initiative.


Generative vs. Discriminative AI


Traditional machine learning has been largely discriminative, classifying and predicting outcomes. A discriminative model might, for instance, determine whether an image shows a cat or a dog.


Generative AI changed the game by moving from classification to creation. A generative model doesn’t just identify cats; it can invent an entirely new image of a cat that never existed. By learning the underlying probability distribution of data, these models can generate original text, images, audio, and more.
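The contrast can be made concrete with a toy sketch: a discriminative model maps features to a label, while a generative model learns a distribution over the data and samples new examples from it. Everything below (the features, thresholds, and names) is invented for illustration.

```python
import random

# Discriminative: estimate P(label | features) and pick the most likely class.
# This rule-based function stands in for a trained classifier (made-up thresholds).
def classify(weight_kg: float, ear_length_cm: float) -> str:
    """Return 'cat' or 'dog' from two hypothetical features."""
    return "cat" if weight_kg < 7 and ear_length_cm < 8 else "dog"

# Generative: learn a probability distribution over the data itself,
# then sample from it to create new examples that never existed.
cat_names = ["milo", "lily", "mimi", "lolo"]

# Build a bigram model: which characters follow which (^ = start, $ = end).
bigrams: dict[str, list[str]] = {}
for name in cat_names:
    padded = "^" + name + "$"
    for a, b in zip(padded, padded[1:]):
        bigrams.setdefault(a, []).append(b)

def generate(rng: random.Random) -> str:
    """Sample a brand-new name, character by character."""
    out, ch = [], "^"
    while True:
        ch = rng.choice(bigrams[ch])
        if ch == "$":
            return "".join(out)
        out.append(ch)

print(classify(4.2, 6.0))             # a prediction about existing data
print(generate(random.Random(0)))     # a newly created sample
```

The classifier can only sort what it is shown; the bigram sampler, however crude, produces names that were never in its training list.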


This ability transformed AI from being a passive analyst of existing data into an active creator, opening the door to new applications in marketing, design, education, and beyond. It also set the foundation for the autonomy that Agentic AI demands.


Large Language Models (LLMs)


At the heart of generative AI are LLMs. Built on neural networks structured into input, hidden, and output layers, they are trained on vast text datasets to predict the most likely next word or token in a sequence.
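That prediction loop can be sketched in a few lines: the network scores every vocabulary token, softmax turns the scores into probabilities, and the chosen token is appended and fed back in. The tiny vocabulary and the hand-written "model" here are stand-ins, not a real trained network.

```python
import math

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def softmax(logits):
    """Turn raw scores (logits) into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def toy_model(tokens):
    """Stand-in for a neural network: returns one logit per vocab token.
    The 'follow' table is a made-up rule, not learned weights."""
    follow = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
    nxt = follow.get(tokens[-1], ".")
    return [5.0 if tok == nxt else 0.0 for tok in VOCAB]

def generate(prompt, steps):
    tokens = list(prompt)
    for _ in range(steps):
        probs = softmax(toy_model(tokens))
        # Greedy decoding: append the most probable next token.
        tokens.append(VOCAB[probs.index(max(probs))])
    return tokens

print(generate(["the"], 4))
```

Real LLMs differ only in scale: billions of parameters, tens of thousands of tokens, and sampling strategies beyond greedy decoding, but the loop is the same.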


That simple prediction task is powerful enough to generate essays, translate languages, write code, and even simulate strategic reasoning. Yet not all LLMs are created equal. Some excel at accuracy, others lean into creativity, and each comes with trade-offs, including the risk of hallucination.


Choosing the right model becomes less about which is “best” and more about which is best-suited to the task. For a tech team, precision is critical. For a marketing team, creativity may matter more. Understanding those differences ensures AI is a partner that fits the problem at hand.


Prompt Engineering: The New Literacy


If LLMs are the engines of generative AI, prompts are the steering wheel. The way you frame a prompt determines not just the answer, but the type of output you receive.


Prompts aren’t limited to plain questions; they can also include different kinds of inputs, such as instructions, context, or style guidelines. Similarly, the outputs don’t have to be limited to text; they can include summaries, lists, tables, or even structured data.


This shift in thinking showed that prompt engineering is more than just asking questions. It’s about learning how to shape both the inputs and the expected outputs in order to get the most useful results.
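A simple way to practise this is to treat a prompt as a template with slots for each kind of input. The template below is one illustrative layout, not a prescribed format; the example text is invented.

```python
def build_prompt(instruction: str, context: str, style: str, output_format: str) -> str:
    """Assemble a prompt from several input kinds, requesting structured output."""
    return "\n\n".join([
        f"Instruction: {instruction}",
        f"Context:\n{context}",
        f"Style guidelines: {style}",
        f"Return the answer as {output_format}.",
    ])

prompt = build_prompt(
    instruction="Summarise the customer feedback below.",
    context="Ticket 1: app crashes on login. Ticket 2: love the new dashboard.",
    style="Neutral tone, no marketing language.",
    output_format="a two-column table: theme | sentiment",
)
print(prompt)
```

Separating instruction, context, style, and output format makes each part easy to vary independently, which is the heart of prompt engineering as a practice.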


Hallucinations: When AI Gets It Wrong


Generative AI does not “know” facts; it works by calculating probabilities. This means it can sometimes produce errors. Some are inconsistencies, where the same question yields different answers. Others are fabrications, where the system confidently generates information that simply isn’t true.


In our training, we looked at data comparing hallucination rates across the top 25 LLMs. It was a reminder that while these models are powerful, they are not infallible. The solution is not to abandon them, but to use them wisely by grounding their outputs with retrieval-augmented generation, and applying human review for high-stakes scenarios. The lesson for students was clear: understanding limitations is as important as understanding capabilities. Effective use of AI comes from knowing where it can shine, and where safeguards are needed.
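The grounding idea behind retrieval-augmented generation can be sketched minimally: fetch the documents most relevant to the question and instruct the model to answer only from them. The scoring here is naive word overlap (production systems use embedding similarity), and the document snippets are drawn from facts mentioned earlier in this post.

```python
DOCS = [
    "Deep Blue defeated Garry Kasparov at chess in 1997.",
    "ChatGPT was released in 2022 and reached one million users in five days.",
    "The Dartmouth Workshop in 1956 formally established AI as a field.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by how many query words they share (a crude relevance score)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def grounded_prompt(question: str) -> str:
    """Prepend retrieved sources so the model answers from evidence, not memory."""
    sources = retrieve(question, DOCS)
    context = "\n".join(f"- {s}" for s in sources)
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {question}"

print(grounded_prompt("When was ChatGPT released?"))
```

Because the model is told to stay within the retrieved sources, fabrications become easier to catch: an unsupported claim has no matching source.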


Good Bards teaching Agentic AI 101

Agents: Orchestrating Autonomy


If LLMs are the engines, agents are the orchestration layer. An agent combines prompts, models, tools, web search, and retrieval into a workflow that can solve problems step by step. In our training, we showed how this comes to life inside Good Bards. Creating an agent is a guided process:


  1. Start with a prompt. This defines the task you want the agent to tackle.

  2. Enable tools and models. Connect the capabilities the agent will need, whether it is search, retrieval, or content creation.

  3. Switch between options. Choose the right model or tool depending on the context of the task.

  4. Save the output. Capture the results in a structured way so the workflow can be reused or scaled.
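The four steps above can be sketched as a small workflow object. The tool functions, model names, and output format here are illustrative assumptions, not the actual Good Bards implementation.

```python
import json

def web_search(query: str) -> str:
    """Stub standing in for a real search tool (step 2)."""
    return f"(search results for: {query})"

def run_model(model: str, prompt: str) -> str:
    """Stub standing in for a real LLM call."""
    return f"[{model}] response to: {prompt}"

class Agent:
    def __init__(self, prompt: str, model: str = "general-model"):
        self.prompt = prompt                  # 1. start with a prompt
        self.tools = {"search": web_search}   # 2. enable tools and models
        self.model = model

    def switch_model(self, model: str):       # 3. switch between options
        self.model = model

    def run(self) -> dict:
        evidence = self.tools["search"](self.prompt)
        answer = run_model(self.model, f"{self.prompt}\n{evidence}")
        return {"prompt": self.prompt, "model": self.model, "answer": answer}

agent = Agent("Summarise this week's AI news")
agent.switch_model("creative-model")
result = agent.run()

with open("agent_output.json", "w") as f:     # 4. save the output
    json.dump(result, f, indent=2)
```

Keeping the output structured (here, JSON) is what makes the workflow reusable: the same agent can be re-run, audited, or chained into a larger pipeline.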


By the end of the session, participants were well versed in building their own simple agents and saw how prompts, models, and tools come together to form a working system. The experience showed that agents are not just theoretical concepts but practical workflows that can be assembled on Good Bards today.


Why It Matters for Education


The energy in the classroom confirmed it. Students weren’t just learning about AI; they were learning how to build with it. And as they experimented, they weren’t just imagining what AI could do in the future. They were experiencing how Agentic AI is already beginning to change how we work.


At Good Bards, we believe that if we are teaching it, it is because we are building it.
