Announcing the Akka Agentic Platform


Introducing Akka Orchestration, Akka Agents, Akka Memory, and Akka Streaming: integrated offerings that deliver 3x the velocity, 1/3 the compute, and any SLA for every agentic system, whether autonomous, adaptive, real-time, transactional, or edge.

We are pleased to introduce new offerings for creating agentic AI systems. This release is the culmination of years of collaboration with Akka customers who run scalable agentic systems in production, the largest processing more than 1 billion tokens per second!

These offerings are immediately available and included as part of our customers' existing license.

Join our announcement Webinar on July 17 at 11 am ET. We will show live demos and share examples.

 

We are introducing four seamlessly integrated offerings that together form a comprehensive solution to build, operate, and evaluate agentic systems: intelligent AI automation, autonomous AI (no human in the loop, with AI that creates and executes its own plan), adaptive (dynamically changing goals), ambient (passive, background), multi-modal (audio, video, edge IoT metrics), transactional, digital twin, or analytic.

Install the Akka CLI and run any of the examples with akka code init.

Akka Orchestration

Guide, moderate, and control long-running, multi-agent systems even across crashes, delays, or infrastructure failures with sequential, parallel, hierarchical, and human-in-the-loop workflows. Embed a registry for agent, tool, API, and resource governance.
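The durable, resumable workflow pattern described above can be illustrated with a minimal sketch. This is conceptual code only, assuming hypothetical names (SequentialWorkflow, step, run); it is not the Akka Orchestration API, but it shows why checkpointing after each step lets a long-running flow survive crashes without restarting from the beginning.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Conceptual sketch of a sequential, resumable workflow.
// Hypothetical names; not the Akka SDK API.
class SequentialWorkflow {
    private final List<UnaryOperator<String>> steps = new ArrayList<>();
    private int completed = 0;   // in a real system this checkpoint is durable
    private String state;

    SequentialWorkflow(String initialState) { this.state = initialState; }

    SequentialWorkflow step(UnaryOperator<String> step) {
        steps.add(step);
        return this;
    }

    // Runs the remaining steps; after a crash, calling run() again
    // resumes from the last completed step instead of restarting.
    String run() {
        while (completed < steps.size()) {
            state = steps.get(completed).apply(state);
            completed++;         // checkpoint after each successful step
        }
        return state;
    }
}
```

A real orchestration layer adds parallel and hierarchical variants, timeouts, and human-in-the-loop pauses on top of the same resumable core.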

Akka Agents

Create goal-directed agents, MCP tools, and HTTP/gRPC APIs that reason, act, and analyze. Integrate with any third-party broker, agent, or system.

Akka Memory

Durable, in-memory, and sharded data for agent context, history retention, and personalized behavior. Nanosecond writes and replication for failover. Supports short-term and long-term memory.
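The short-term versus long-term distinction can be sketched as a bounded context window layered over a durable history. All names here (AgentMemory, remember, context) are hypothetical illustrations, not the Akka Memory API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Conceptual sketch: short-term memory is a bounded recency window
// handed to the model; long-term memory retains the full history.
// Hypothetical names; not the Akka Memory API.
class AgentMemory {
    private final Deque<String> shortTerm = new ArrayDeque<>(); // recent context
    private final List<String> longTerm = new ArrayList<>();    // durable history
    private final int window;

    AgentMemory(int window) { this.window = window; }

    void remember(String event) {
        longTerm.add(event);                 // everything is retained
        shortTerm.addLast(event);
        if (shortTerm.size() > window) {
            shortTerm.removeFirst();         // keep context bounded
        }
    }

    List<String> context()  { return List.copyOf(shortTerm); }
    int historySize()       { return longTerm.size(); }
}
```

In a production system the long-term store would be sharded and replicated rather than a local list, which is what gives the failover behavior described above.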

Akka Streaming

High-performance stream processing for ambient, adaptive, and real-time AI. Continuous processing, aggregation, and augmentation of live data, metrics, audio, and video. Streams ingest from any source and flow between agents, Akka services, and the outside world.
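Continuous aggregation of live data is the core trick: each event updates the aggregate incrementally, so the stream never needs to be buffered in full. A minimal sketch of that idea, with hypothetical names rather than the Akka Streaming API:

```java
// Conceptual sketch: incremental (O(1) per event) aggregation of a
// metric stream, e.g. IoT sensor readings. Hypothetical names only.
class RunningAverage {
    private long count = 0;
    private double mean = 0.0;

    // Welford-style incremental mean: update the aggregate per event
    // instead of storing and re-scanning the whole stream.
    void ingest(double value) {
        count++;
        mean += (value - mean) / count;
    }

    double average() { return mean; }
}
```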

Today, we are also introducing an AI-assisted DevEx, which leverages the power of LLMs to accelerate creation and delivery of Akka systems with agents that create, modify, test, and evaluate Akka services built with the Akka SDK.

Akka console binary

The platform now includes a native-binary local console for executing distributed services during development and tracing agentic interactions. The local console ships with the Akka CLI, and Docker is no longer a console requirement.

AI-assisted DevEx

Build services with an AI-assisted DevEx that uses event-driven component composition. Support for Qodo, Windsurf, and Cursor.

Agent testkit

Build unit, integration, and multi-agent system tests that can run locally or within your CI/CD systems, mimicking the production environment. Incorporate custom evaluation strategies to automate testing of non-deterministic LLM interactions. 
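A custom evaluation strategy for non-deterministic LLM output typically checks criteria rather than exact strings. The sketch below is an illustration of that idea with hypothetical names (ReplyEvaluator, evaluate); the agent testkit's real API may differ:

```java
import java.util.List;

// Conceptual sketch: pass/fail evaluation of a non-deterministic
// LLM reply by required facts rather than exact-string matching.
// Hypothetical names; not the Akka agent testkit API.
class ReplyEvaluator {
    private final List<String> requiredFacts;

    ReplyEvaluator(List<String> requiredFacts) {
        this.requiredFacts = requiredFacts;
    }

    // Passes if every required fact appears, regardless of phrasing.
    boolean evaluate(String reply) {
        String lower = reply.toLowerCase();
        return requiredFacts.stream()
                .allMatch(f -> lower.contains(f.toLowerCase()));
    }
}
```

Richer strategies (semantic similarity, LLM-as-judge scoring) follow the same shape: an evaluator object applied to each sampled reply inside an ordinary unit test.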

Evaluation console

An agentic console with separate flows for development, operations and context engineering for auditing, tracing, debugging, and evaluating multi-agent systems. The console enables tracing of agentic systems across networks and Akka components, even if they are separated with event-driven decoupling. Traces include performance, functional, and cost intelligence so that you can tune and evaluate with the same experience.

The evaluation console will be made available in a follow-on release later this summer.


Akka’s agentic design partners

Akka is adopted when the SLA must be guaranteed. Akka has a rich history of customers building and operating scalable, reliable AI in production. We are thrilled to have collaborated with some of the industry's most innovative AI projects in the design of Akka's agentic offering.

Akka’s AI design goals

Our customers and partners shaped our design:

  1. Production-ready. Agents should work identically in development and production without any code changes. Leverage the JVM environment and Akka’s actor-based runtime for operating systems with 99.9999% uptime, regardless of scale.
  2. Elastic. Create agents that can process over 1 billion tokens per second with consistent latency, barring, of course, the limitations of the model itself.
  3. Batteries-included. Provide a single SDK and runtime with all the components necessary to create autonomous AI systems: orchestration, agents, memory (short-term and long-term), and streaming (for adapting goals, evaluations, and guardrails).
  4. Evaluatable. Provide in-line evaluation execution and control flow to measure accuracy and enable enterprise governance controls.
  5. Interoperable. Provide simple event-driven and client interoperability between agents, any third party system, MCP tools, and between other Akka components (using low latency brokerless service-to-service messaging).
  6. Reason & Transact. Provide components that can perform as AI agents and also act as API endpoints, external non-agent durable memory, MCP endpoints, timers, and multi-data source data projections. This enables agents to reason and other services to transact and analyze at scale.
  7. Adaptable. Enable dynamic multi-agent orchestration, which can alter coordination flows with actor supervision as an agent’s goals or plans change. Provide streaming consumers that can continuously process a stream of events (e.g., IoT metrics, audio, video, a Kafka topic) to aggregate data and take action with agents, workflows, or memory.
  8. Composable. Services can combine any number of Akka components to create any kind of agentic, transactional, analytics, edge, and digital twin system. Let Akka unlock your distributed systems artistry!


Enterprise platform for agentic AI

Akka goes beyond the framework. It’s a platform to maximize organizational velocity for enterprises that develop and operate agentic services across different teams, each with a range of skills.

Akka provides flows for development, context engineering, and operations - separating concerns so that the edit, test, deploy, and feedback loops are tailored to the goals of each group, while providing project structure and standardization to scale your organization’s agentic adoption as much as Akka scales your services at runtime. 

Developer self-service

Akka’s SDK and components structure and standardize agents everywhere, while enabling self-service provisioning, versioning, and development.

Env & deployment mgmt

Flexible environment control and robust deployment across all stages. Golden paths, isolated environments, and multi-tenant operations for multi-team cooperation.

Insights & cost controls

Improve DORA metrics, incorporate runtime inline agent cost controls, and optimize costs through Akka’s shared compute model. 

Akka agentic services execute on our award-winning actor-based runtime that clusters agents from within to create infinite scale and guarantee resilience.

Akka agentic services can run on any infrastructure, cloud, or location. You can deploy them with self-managed instances on bare metal, Kubernetes, Docker, VMs, or edge environments. Optionally, add the Akka Control Plane to gain multi-region failover, auto-elasticity, persistence management, and multi-tenancy.


The Akka value prop: faster dev, lower cost, any scale

Enterprise agentic AI should create certainty in a world of uncertain LLMs. For 15 years, Akka has been powering the world’s most demanding systems, creating confidence in the face of uncertainty while accelerating development velocity and lowering the cost to operate.

The Akka promise: 3x the velocity, ⅓ the compute, and any SLA. We guarantee it. Reach out and we’ll POC your agentic use case in 48 hours. 

3x the Velocity

How we do it

A single DevEx to build, test, integrate, and upgrade agentic systems. Akka provides composable components, rapid vibe development, visual debuggers, and no-downtime rolling updates.

With others you…

Cobble together four SDKs, each with its own DevEx. Meld them with an additional integration-testing system. Deploy four different architectures (orchestration, memory, agents, streaming) that make coordinated, rolling updates nearly impossible.

The bottom line

1 Akka HC is more productive than 3 full-stack developers. 

The proof


Case study: 16 agents, 1 petabyte, 2 months, 2 devs


Case study: Supports 1M healthcare providers with two engineers

⅓ the Compute

How we do it

By executing your services on a shared compute model that eliminates siloed allocations. Akka’s actor model for concurrency and non-blocking event architecture enable safe compute utilization approaching 100%.

With others you…

Separate compute allocation and utilization across four different runtimes (orchestration, memory, agents, and stream processing), each with multi-threaded concurrency that blocks and locks. This waste compounds further with Python runtimes.

The bottom line

Every core that Akka consumes reduces your total infrastructure spend.

The proof


Verizon: 70% reduction in cloud compute costs


Esdiac: cuts infrastructure costs by 65%

Any SLA

How we do it

Akka services self-cluster to millions of nodes and can adapt to change due to hardware or network failures, LLM hallucinations, or shifting environments. Akka services are their own system of record with data that is sharded, in-memory, and replicated, independent of a database. Multi-region operations add failover and disaster recovery. 

With others you…

Delegate SLA achievement to cloud infrastructure, databases, or messaging brokers, which lack visibility into the application’s intent and execution. With no built-in data replication, failover, or disaster recovery for services, full-stack duplication is required for assurance.

The bottom line

0b RPO
10M IOPS
180ms RTO
2ms read latency
9ms write latency
99.9999% availability
No-downtime updates

The proof


See our benchmarks


Swiggy: 3M inferences per second

Join our announcement Webinar on July 17 at 11 am ET.
We will show live demos and share examples.

When AI Needs an SLA