In 1997, IBM’s Deep Blue defeated Garry Kasparov, becoming the first computer to defeat a reigning world chess champion in a full match. While Deep Blue was an AI agent that could play chess, it was a deterministic system, meaning that every move it made was based on predefined rules and principles, all programmed by a team of developers.
On each move, Deep Blue did not learn or generalize; it simply followed the rules it was programmed with. This approach was clearly successful, but it also meant that Deep Blue was good at chess, and chess only.
In the last several years, AI itself has evolved from narrow, specialized systems into more generalized models capable of learning, reasoning, and adapting. This has enabled a new class of agentic systems: AI agents that can achieve goals, plan, make decisions, and interact with complex environments.
A modern chess-playing AI no longer needs a rigid list of rules and chess principles. Knowing just the rules of chess, it can play millions of games against itself and learn best practices in the game. During a match against a human opponent, the AI evaluates millions of candidate positions for each move, leveraging what it learned from those millions of games to adapt its play as the game progresses.
Additionally, this pattern of learning means that, given the rules for any other game, the same AI is no longer limited to chess: it can become an expert in any game.
With this shift towards general-purpose learning and reasoning, modern AI agents are no longer confined to narrow, pre-defined rules. They can make decisions, adapt to context, and coordinate across systems, significantly expanding their flexibility and utility. Beyond chess (and other games), these agentic AIs can be applied to the enterprise: automating complex workflows, initiating actions, and autonomously adapting to changing business landscapes.
This guide will introduce agentic architectures and how they can be adapted for many different tasks to be rolled out in the enterprise.
As discussed in the introduction, Deep Blue was a chess AI — but it was not an agentic AI. AIs of the past were highly programmed deterministic platforms. Every time Deep Blue found the board in a certain configuration, it *always* made the same play.
Modern AI agents are stochastic, meaning that given the same situation, the results from the AI can vary. But what makes an AI agent unpredictable? Why are the results different every time a workflow or process is run?
Let’s walk through the architectural components of agentic AI, and how these components lead to variable results. There are four core architectural components that make an AI agentic:
Perception is the process of taking in information and developing an understanding of what is happening. Humans use their senses to perceive the world around them. Similarly, agentic AIs can use prompts and provided tools (APIs, databases, etc.) to understand the situation. From these initial pieces of data, the agentic AI can begin to understand what is happening and how it might respond.
Here are a few examples of how agentic AIs may use perception:
Most everyone has interacted with a customer service chatbot. Non-agentic bots can handle a few things: “Your checking balance is $1,432.12, and your last three transactions are...” or “Your next payment of $200 is due on July 14. Would you like to make a payment now?”
Anything deeper falls beyond the scope of the reactive AI and requires a human representative.
Here are some of the inputs that could be given to a customer service chatbot to provide perception:
With these tools, the agentic AI can understand if the customer is unhappy and wants a refund, or if they should process a return. The data and interactions with the customer provide cues that the agent can “perceive” and use in decision-making.
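As a concrete illustration, perception can be sketched as combining raw inputs (a customer message, data from an order API) into one structured view of the situation. The field names, word list, and `perceive` helper below are hypothetical, not part of any real customer-service system; real agents would use an LLM or sentiment model rather than a keyword check:

```python
from dataclasses import dataclass

# Hypothetical signals an agent might "perceive" from its tools and prompts.
@dataclass
class Perception:
    message: str        # the customer's latest message
    order_status: str   # e.g. from an order-lookup API
    sentiment: str      # from a (toy) sentiment check

NEGATIVE_WORDS = {"refund", "broken", "angry", "terrible"}

def perceive(message: str, order_status: str) -> Perception:
    """Combine raw inputs into a structured view of the situation."""
    words = set(message.lower().replace(",", "").split())
    sentiment = "negative" if words & NEGATIVE_WORDS else "neutral"
    return Perception(message=message, order_status=order_status, sentiment=sentiment)

p = perceive("This arrived broken, I want a refund", order_status="delivered")
print(p.sentiment)  # negative
```

The structured `Perception` object is what the later reasoning step would consume, in place of raw, unorganized inputs.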
Factory lines have long used IoT sensors and cameras to monitor temperature, vibration, pressure, and product quality. Traditional automation reacts to predefined thresholds: if pressure exceeds a limit, stop the machine.
Agentic AIs go further by combining this sensor data with structured and unstructured information such as maintenance logs, shift handover notes, supplier quality reports, and even operator chat logs to reason about why issues occur, not just when.
For example, an agentic AI might notice:
Instead of simply stopping production, the agent builds a hypothesis: “The malformed seals may be due to a worn-out bearing introduced during a shift change, aggravated by high humidity affecting the adhesive cure time.” It recommends a proactive inspection and adjusts machine tolerances dynamically.
This kind of multi-step reasoning, pattern recognition across domains, and action planning is where agentic AIs can unlock value beyond what traditional ML systems can deliver.
Agentic AIs use the data provided to them to perceive the environment they are in. To make decisions, form goals, and plan actions, they rely on reasoning techniques. There are three principal reasoning approaches commonly used in agentic AI:
These approaches can be used independently or in combination to create agents capable of sophisticated, context-aware decision-making.
Symbolic reasoning is the closest to early reactive AI systems. Symbolic AIs are heavily programmed, rule-based agents (think: if X, then Y).
Rules determine each step of the pathway, and the AI must follow the guidelines with no deviation. The AI may use perception at each step, allowing the logic to change, but the agent still remains on the defined rails of the process.
Symbolic AIs are great for structured domains with regular, well-understood rules. Symbolic reasoning AIs are generally more rigid and do not handle new scenarios well (as there is no programming to guide them). They can also have long development times (all those rules have to come from somewhere), and they often lack learning: they don’t adapt from past tasks.
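A minimal sketch of symbolic reasoning: an ordered list of explicit if/then rules, where the first matching condition fires. The rules and intents below are invented for illustration; note how any scenario not covered by a rule simply falls through, which is exactly the rigidity described above:

```python
# Symbolic reasoning sketch: each rule is an explicit "if X, then Y" pair.
# Rules and intents are hypothetical examples.
RULES = [
    (lambda req: req["intent"] == "refund" and req["days_since_purchase"] <= 30,
     "approve_refund"),
    (lambda req: req["intent"] == "refund",
     "escalate_to_human"),
    (lambda req: req["intent"] == "balance",
     "report_balance"),
]

def decide(request: dict) -> str:
    """Walk the rules in order; the first matching condition fires."""
    for condition, action in RULES:
        if condition(request):
            return action
    return "no_rule_matched"  # rigid: unseen scenarios fall through

print(decide({"intent": "refund", "days_since_purchase": 10}))  # approve_refund
```

Every behavior must be anticipated and encoded by a developer, which is where the long development times come from.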
Chain-of-thought uses a series of questions to establish the goal to be solved, and interacts with LLMs in order to ‘walk through’ the solution of the goal.
When presented with a task, the algorithm ‘chats’ with one or more LLMs to break the task into steps. This is done by asking questions: ‘how might I solve this task?’ and ‘what are the steps needed to complete this action?’
By thinking through the problem step by step, chain-of-thought creates a process on-demand (unlike the rigid preprogrammed symbolic AIs).
Chain-of-thought agents are very flexible, as the chain-of-thought reasoning allows the agent to consider many options and whether those options will help it achieve its goal. By walking through the solution, chain-of-thought agents perform quite well with little additional training, even on new tasks.
A variation of chain-of-thought uses two LLMs — one that asks the questions, and the other that ‘reasons’ on the questions to build the response, almost as if the LLMs were having a conversation to solve the task.
Chain-of-thought can be extended into “self-consistency” where the chain-of-thought agent performs the same query multiple times, and then the most common result is returned.
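Self-consistency can be sketched as sampling the same query several times and taking a majority vote over the answers. The `mock_llm` below is a deterministic stand-in for a real (stochastic) LLM call; in practice each sample would be a full chain-of-thought completion:

```python
from collections import Counter
import itertools

# Deterministic stand-in for a stochastic LLM: most samples agree, one drifts.
_canned = itertools.cycle(["42", "42", "41", "42", "42"])

def mock_llm(prompt: str) -> str:
    return next(_canned)

def self_consistent_answer(prompt: str, samples: int = 5) -> str:
    """Self-consistency: run the same chain-of-thought query several
    times and return the most common final answer (majority vote)."""
    answers = [mock_llm(prompt) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("What is 6 * 7?"))  # 42
```

The occasional stray answer ("41") is outvoted by the consistent majority, which is why self-consistency improves reliability at the cost of extra LLM calls.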
Planning agents are typically given an initial state, a goal state, and a set of actions that can be undertaken to reach the goal. The planning agent selects and orders the steps required and then executes the steps. This can use chain-of-thought to plan and evaluate the pathway, but it does not require chain-of-thought — the evaluation of the steps is enough to complete the task.
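A planning agent can be sketched as a search from the initial state to the goal state over the allowed actions. The states, action names, and precondition/effect sets below are hypothetical; real planners use richer representations, but the shape is the same:

```python
from collections import deque

# Planning sketch: a state is a frozenset of facts; each (hypothetical)
# action has preconditions, facts it adds, and facts it removes.
ACTIONS = {
    "pick_up_part": ({"at_bench"}, {"holding_part"}, set()),
    "assemble":     ({"holding_part"}, {"assembled"}, {"holding_part"}),
    "inspect":      ({"assembled"}, {"inspected"}, set()),
}

def plan(initial: frozenset, goal: set) -> list:
    """Breadth-first search over action sequences until the goal facts hold."""
    queue = deque([(initial, [])])
    seen = {initial}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:          # all goal facts are satisfied
            return steps
        for name, (pre, add, rem) in ACTIONS.items():
            if pre <= state:       # preconditions met: action is applicable
                nxt = frozenset((state - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return []                      # no plan found (or goal already holds)

print(plan(frozenset({"at_bench"}), {"inspected"}))
# ['pick_up_part', 'assemble', 'inspect']
```

Given the initial state, the goal, and the action set, the agent selects and orders the steps itself, which is exactly the planning behavior described above.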
Alone, each of these approaches is great at solving a different class of problems or goals. But they can also be combined to increase the reasoning power and flexibility of the agentic AI.
LLMs are stateless and retain no memory of a conversation on their own. Systems like ChatGPT implement a memory layer that retains the ongoing conversation. This memory gives the LLM context for the conversation being held and permits it to provide better answers based on that context.
For example:
Q: What is the fastest mammal?
A: <LLM answer about Cheetahs>
Q: Fish?
A: <the LLM knows that “fish” is in context of “fastest” and answers sailfish>
Without the context, the LLM might give a recipe for braised haddock, or provide information about the fish that live in the Great Barrier Reef.
Short-term memory: Agentic AIs hold short-term memory during the task at hand. This means the customer service agent does not have to ask repeatedly for an order number.
Long-term memory: The agentic AI recalls a user’s preferences across multiple conversations.
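Short-term memory can be sketched as simply accumulating the conversation turns and replaying them with every request, so the stateless LLM sees the full context. The `ConversationMemory` class below is a hypothetical illustration, using the fastest-mammal example from above:

```python
# Short-term memory sketch: keep the running conversation and send it
# with every request so a stateless LLM retains context.
class ConversationMemory:
    def __init__(self):
        self.turns = []  # short-term: this conversation only

    def add(self, role: str, text: str) -> None:
        self.turns.append({"role": role, "content": text})

    def as_prompt(self) -> str:
        """Render the whole history as the prompt sent to the LLM."""
        return "\n".join(f"{t['role']}: {t['content']}" for t in self.turns)

memory = ConversationMemory()
memory.add("user", "What is the fastest mammal?")
memory.add("assistant", "The cheetah.")
memory.add("user", "Fish?")

# The full history goes to the (hypothetical) LLM call, so the bare
# follow-up "Fish?" is understood in the context of "fastest".
print(memory.as_prompt())
```

Long-term memory extends the same idea by persisting selected facts (like user preferences) to a store that outlives any single conversation.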
Action is where the agentic AI brings together its perception and memory with the steps it has reasoned out. Action is the process of completing the tasks required to reach the goal. The agentic AI will have tools like databases, APIs, and other AI agents to aid it in completing the steps. It can complete each step in turn, sometimes evaluating progress and changing course mid-stream.
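The action phase can be sketched as a loop that maps each planned step to a tool and executes it, checking results as it goes. The tool names and stub functions below are hypothetical stand-ins for real API calls:

```python
# Action sketch: execute planned steps via tools, adapting mid-stream.
# Tool names and stubs are hypothetical stand-ins for real API calls.
def lookup_order(order_id: str) -> dict:
    return {"order_id": order_id, "status": "delivered"}

def issue_refund(order_id: str) -> dict:
    return {"order_id": order_id, "refunded": True}

TOOLS = {"lookup_order": lookup_order, "issue_refund": issue_refund}

def act(steps: list) -> list:
    """Execute each (tool_name, argument) step; stop if a tool is missing
    rather than guessing (a simple mid-stream course change)."""
    results = []
    for tool_name, arg in steps:
        tool = TOOLS.get(tool_name)
        if tool is None:
            results.append({"error": f"unknown tool {tool_name}"})
            break
        results.append(tool(arg))
    return results

print(act([("lookup_order", "A-1001"), ("issue_refund", "A-1001")]))
```

In a production agent, the loop would also feed each tool result back into the reasoning step, which is what lets the agent change course based on what it observes.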
Depending on subtle differences in inputs and context, an agentic AI will not always proceed exactly the same way. In general, agentic AIs that are built with proper business rules, prompts, and guardrails come up with acceptable solutions, just not always the exact same solution!
The enterprise is on board — it is time to build agentic AIs. One way to think about building an agentic AI is to think about the design principles of agentic systems, and what tools or systems might be required in order to implement the desired system.
What tools will the agentic AI have access to? APIs, other workflows, agents, databases?
Will you need a chain-of-thought reasoning layer? Or can you leverage a more directed symbolic reasoning layer? The tradeoffs here are speed of response, cost and the types of responses that are expected.
After a careful analysis of the agent’s design, a blueprint for the tools required may begin to form.
As the team begins to build, the challenges around implementing an agentic AI will begin to appear. The biggest step is choosing the LLMs that will power the agent.
Costs
Security
Organization
As the team begins designing and building agentic AIs, it is best to start small: begin with simpler, structured agentic AIs. As the team gains expertise, it can tackle agents with more complex features.
Common AI architectures (listed in order of complexity) are:
As the team becomes experienced at deploying single agent systems, they can begin looking to combine agents to work together. Two common paradigms are:
When designing agentic AI architectures, look to frameworks that can be used to speed your team’s agentic AI journey.
Agentic AI has transformed the way the enterprise thinks about automation and intelligence. Agentic AIs can understand the situation, perform reasoning, and make a decision on the proper way to solve issues across the enterprise. Building a production-ready agentic AI is another story.
While agentic AIs have many of the same challenges of traditional IT projects, there are also significant differences that must be accounted for when deploying an agentic AI.
One approach to deploying agentic AI agents is to lean on platforms with deep expertise in agentic AI, including leaders in the space like Akka.
Akka's agentic AI platform provides four integrated components: Orchestration for multi-agent workflows, Agents for goal-directed reasoning, Memory for durable context retention, and Streaming for real-time data processing. The platform addresses the core implementation challenges around security, cost management, and scalability discussed earlier in this guide.
What sets Akka apart is proven enterprise scale: customers run agentic systems processing over 1 billion tokens per second with 99.9999% availability. The platform delivers 3x development velocity, one-third the compute cost, and enterprise SLA guarantees.
For companies ready to move beyond prototypes, Akka provides production-grade infrastructure that allows teams to focus on business logic rather than building distributed systems from scratch. Schedule a demo today to get started!