We often recommend asynchronous communication between Microservices by using a message broker, such as Kafka. What if you could have the same advantages of loose coupling and reliable message delivery without the burden and cost of operating the broker infrastructure? As a bonus, you would gain low-latency, cross-region delivery of messages.
In this post we will look at a preview of this brand new feature, which will be included in Akka Projections very soon.
In many cases we recommend using Event Sourcing in Akka as the source of truth and Akka Projections for the CQRS pattern, separating the operations that write data from those that read the data.
In Akka Projections you process a stream of events from a source to a projected model or external system. Each event is associated with an offset representing the position in the stream. This offset is used for resuming the stream from that position when the projection is restarted.
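To make that division of responsibilities concrete, here is a minimal sketch of a projection handler using the generic Akka Projections Handler API (the handler class and its logic are hypothetical). The handler only deals with the event itself; Akka Projections tracks the offset from the envelope and stores it so the stream can resume after a restart.

// A minimal sketch of a projection handler. The handler processes each event,
// while Akka Projections stores the envelope's offset so the stream can resume
// from the same position when the projection is restarted.
public class OrderEventHandler extends Handler<EventEnvelope<Object>> {
  @Override
  public CompletionStage<Done> process(EventEnvelope<Object> envelope) {
    Object event = envelope.event();
    // update the projected model or call an external system here (omitted)
    return CompletableFuture.completedFuture(Done.getInstance());
  }
}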
When designing your Microservices, each service should own its data and have direct access to the database. Other services should then use the Service API to interact with the data. There must be no sharing of databases across different services as that would result in a too-tight coupling between the services. In this way, each microservice operates within a clear boundary, similar to the Bounded Context strategic pattern in Domain-driven Design.
The new Projections over gRPC module makes it easy to build brokerless service-to-service communication. It uses the event journal on the producer side and Akka Projections event processing and offset tracking on the consumer side. The transparent data transfer between producers and consumers is implemented with Akka gRPC.
- An Entity stores events in its journal in service A.
- Consumer in service B starts an Akka Projection which locally reads its offset for service A's replication stream.
- Service B establishes a replication stream from service A.
- Events are read from the journal.
- Event is emitted to the replication stream.
- Event is handled.
- Offset is stored.
- Producer continues to read new events from the journal and emit to the stream. As an optimization, events can also be published directly from the entity to the producer.
For simplicity, the above diagram shows one consumer service and one producer service. In reality, many consumers can subscribe to the event stream of a producer and consume the events at their own pace. The offset is tracked on the consumer side so that each consumer can resume from the point where it left off in case of a restart. Network connectivity issues don't impact producers, and consumers will catch up with old events when the connection is back again.
The producer and consumer services can live in their own Akka Clusters, and slices of all events will automatically be spread over the nodes in the cluster for load balancing.
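As a sketch of how that can look on the consumer side, assuming the common pattern of running one Projection instance per slice range with Sharded Daemon Process (the projection name and the createProjectionFor factory are illustrative and correspond to the consumer code below):

// Split the full range of slices into a number of ranges, one Projection
// instance per range, and let Sharded Daemon Process keep the instances
// running and balanced across the nodes of the consumer's Akka Cluster.
int numberOfInstances = 4;
List<Pair<Integer, Integer>> sliceRanges =
    EventSourcedProvider.sliceRanges(
        system, GrpcReadJournal.Identifier(), numberOfInstances);

ShardedDaemonProcess.get(system)
    .init(
        ProjectionBehavior.Command.class,
        "CartEventConsumer",
        numberOfInstances,
        idx -> createProjectionFor(system, sliceRanges.get(idx)),
        ShardedDaemonProcessSettings.create(system),
        Optional.of(ProjectionBehavior.stopMessage()));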
The consumer is started as a ProjectionBehavior with an EventSourcedProvider. The only difference compared to other Event Sourced Projections is the event query and some configuration to define how to establish the gRPC stream to the producer service.
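As an illustration, the consumer-side configuration could look something like the following, pointing out where the producer's gRPC endpoint is running and which event stream to consume (the host, port, and stream id are placeholders, and the exact settings may change before the final release):

akka.projection.grpc.consumer {
  client {
    # the producer service's gRPC endpoint
    host = "shopping-cart-service"
    port = 8101
    use-tls = false
  }
  # the public stream id exposed by the producer
  stream-id = "cart"
}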
Consumer code:
// The GrpcReadJournal is the eventsBySlices query implementation that connects
// to the producer service over gRPC, using the consumer configuration.
GrpcReadJournal eventsBySlicesQuery =
    GrpcReadJournal.create(
        system,
        List.of(ShoppingCartEvents.getDescriptor()));

// Source of events for the Projection, limited to a range of slices.
SourceProvider<Offset, EventEnvelope<Object>> sourceProvider =
    EventSourcedProvider.eventsBySlices(
        system,
        eventsBySlicesQuery,
        eventsBySlicesQuery.streamId(),
        sliceRange.first(),
        sliceRange.second());

// At-least-once Projection with the offset stored by the R2DBC plugin.
return ProjectionBehavior.create(
    R2dbcProjection.atLeastOnceAsync(
        projectionId,
        Optional.empty(),
        sourceProvider,
        () -> new EventHandler(projectionId),
        system));
Details are left out for brevity; see the full code for the consumer example.
The producer is started as an Akka HTTP server with a gRPC service handler provided by the library. The producer will read events from the journal using the configured eventsBySlices query when consumers attach.
Producer code:
// Bind the gRPC event producer service, the application's own gRPC service,
// and server reflection in a single Akka HTTP server.
Function<HttpRequest, CompletionStage<HttpResponse>> service =
    ServiceHandler.concatOrNotFound(
        eventProducerService,
        ShoppingCartServiceHandlerFactory.create(grpcService, system),
        // ServerReflection enabled to support grpcurl
        ServerReflection.create(
            Collections.singletonList(ShoppingCartService.description),
            system));

CompletionStage<ServerBinding> bound =
    Http.get(system).newServerAt(host, port).bind(service::apply);
Details are left out for brevity; see the full code for the producer example.
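The eventProducerService used above is the gRPC event producer handler provided by the library. As a rough sketch (the entity type and stream id are placeholders), it can be created from an EventProducerSource describing which entity's events to expose and under which public stream id:

// Expose the events of the "ShoppingCart" entity type as the public stream "cart".
// Transformation.identity() passes the journal events through unchanged.
EventProducerSource eventProducerSource =
    new EventProducerSource(
        "ShoppingCart",
        "cart",
        Transformation.identity(),
        EventProducerSettings.create(system));

Function<HttpRequest, CompletionStage<HttpResponse>> eventProducerService =
    EventProducer.grpcServiceHandler(system, eventProducerSource);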
Events can be transformed on the producer side. The purpose is to support a public representation that is different from the internal representation stored in the journal. By keeping these representations separate, the internals can evolve without breaking the API for the consumers.
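For example, a transformation could map an internal journal event to a public event type along these lines (both event classes and their fields are hypothetical):

// Map the internal ItemAdded journal event to a public representation.
// Returning Optional.empty() from a mapper filters the event out of the public stream.
Transformation transformation =
    Transformation.empty()
        .registerMapper(
            ShoppingCart.ItemAdded.class,
            event ->
                Optional.of(new PublicItemAdded(event.cartId, event.itemId, event.quantity)));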
Projections over gRPC provide reliable delivery of events. Some events may be transferred more than once in failure scenarios, but those are automatically de-duplicated based on the detailed offset tracking on the consumer side. Depending on the type of Projection handler, the processing semantics are at-least-once, or even exactly-once when the target of the Projection processing is the same database as where the offset is stored.
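With the R2DBC Projection, exactly-once can be achieved by using an R2dbcHandler, which updates the read model in the same transaction that Akka Projections uses to store the offset. A sketch (the handler class and target table are hypothetical):

class CartEventHandler extends R2dbcHandler<EventEnvelope<Object>> {
  @Override
  public CompletionStage<Done> process(R2dbcSession session, EventEnvelope<Object> envelope) {
    // The read-model update and the offset are committed in the same transaction.
    Statement stmt =
        session
            .createStatement("INSERT INTO cart_events (persistence_id, seq_nr) VALUES ($1, $2)")
            .bind(0, envelope.persistenceId())
            .bind(1, envelope.sequenceNr());
    return session.updateOne(stmt).thenApply(rowsUpdated -> Done.getInstance());
  }
}

// wired up with the exactly-once flavor instead of atLeastOnceAsync:
R2dbcProjection.exactlyOnce(
    projectionId, Optional.empty(), sourceProvider, CartEventHandler::new, system);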
The eventsBySlices query and the offset storage that are needed for Projections over gRPC are implemented by the R2DBC plugin, which has support for Postgres and Yugabyte.
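For reference, a minimal sketch of the relevant R2DBC plugin configuration might look like this (the connection settings are placeholders and the exact keys may differ between plugin versions):

akka.persistence.journal.plugin = "akka.persistence.r2dbc.journal"

akka.persistence.r2dbc.connection-factory {
  driver = "postgres"
  host = "localhost"
  port = 5432
  database = "shopping_cart"
  user = "postgres"
  password = "postgres"
}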
We see Projections over gRPC as a first step towards Akka Edge, although it can also be useful in more traditional environments than the edge. We have exciting plans for Akka Edge, and this feature will be enhanced with ways to dynamically select which events to replicate to consumers.
Another idea is to use this as a network-friendly replication mechanism for active-active entities with Akka's Replicated Event Sourcing.