I've taken a look at a few demos, including the Chirper demo app: https://github.com/lagom/lagom-java-sbt-chirper-example
Adding a chirp and retrieving a live stream of chirps are handled by the same service. This seems to be common practice:
import akka.NotUsed;
import akka.stream.javadsl.Source;
import com.lightbend.lagom.javadsl.api.Descriptor;
import com.lightbend.lagom.javadsl.api.Service;
import com.lightbend.lagom.javadsl.api.ServiceCall;

import static com.lightbend.lagom.javadsl.api.Service.named;
import static com.lightbend.lagom.javadsl.api.Service.namedCall;
import static com.lightbend.lagom.javadsl.api.Service.pathCall;

public interface ChirpService extends Service {

  ServiceCall<Chirp, NotUsed> addChirp(String userId);
  ServiceCall<LiveChirpsRequest, Source<Chirp, ?>> getLiveChirps();
  ServiceCall<HistoricalChirpsRequest, Source<Chirp, ?>> getHistoricalChirps();

  @Override
  default Descriptor descriptor() {
    // @formatter:off
    return named("chirpservice").withCalls(
        pathCall("/api/chirps/live/:userId", this::addChirp),
        namedCall("/api/chirps/live", this::getLiveChirps),
        namedCall("/api/chirps/history", this::getHistoricalChirps)
      ).withAutoAcl(true);
    // @formatter:on
  }
}
My question revolves around the idea that you could publish the addChirp message to a message broker topic (e.g. Kafka) with the purpose of decoupling reads from writes. That is, the write returns success even when the read-side (the consumer) is temporarily unavailable: Kafka durably stores the chirp on disk, to be processed by the read-side once it is available again.
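In Lagom terms, I imagine that would look roughly like the sketch below, using Lagom's broker API (Topic, TopicProducer, withTopics and topic are Lagom's; the "chirps" topic id, ChirpEvent.TAG and the toChirp helper are just placeholders for illustration):

import akka.NotUsed;
import akka.japi.Pair;
import com.lightbend.lagom.javadsl.api.Descriptor;
import com.lightbend.lagom.javadsl.api.Service;
import com.lightbend.lagom.javadsl.api.ServiceCall;
import com.lightbend.lagom.javadsl.api.broker.Topic;
import com.lightbend.lagom.javadsl.broker.TopicProducer;

import static com.lightbend.lagom.javadsl.api.Service.named;
import static com.lightbend.lagom.javadsl.api.Service.pathCall;
import static com.lightbend.lagom.javadsl.api.Service.topic;

public interface ChirpService extends Service {

  ServiceCall<Chirp, NotUsed> addChirp(String userId);

  // The stream of chirps, published to the broker for downstream consumers.
  Topic<Chirp> chirpsTopic();

  @Override
  default Descriptor descriptor() {
    return named("chirpservice").withCalls(
        pathCall("/api/chirps/live/:userId", this::addChirp)
      ).withTopics(
        topic("chirps", this::chirpsTopic)
      ).withAutoAcl(true);
  }
}

// In the service implementation, the topic is fed from the persistent
// entity journal. Kafka retains the messages on disk, so the write-side
// keeps accepting chirps even while every consumer is down.
// (ChirpEvent.TAG and toChirp are hypothetical.)
@Override
public Topic<Chirp> chirpsTopic() {
  return TopicProducer.singleStreamWithOffset(offset ->
      persistentEntityRegistry
          .eventStream(ChirpEvent.TAG, offset)
          .map(pair -> Pair.create(toChirp(pair.first()), pair.second())));
}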
Wouldn't it be logical to separate the write-side from the read-side into separate services and run them on different ports altogether? Or does this approach have common pitfalls?
When writing read-sides in Lagom you have two options: an intra-service read-side (a ReadSideProcessor running inside the same service that owns the write-side) or a remote read-side (a separate service consuming the events from a broker topic).

An intra-service read-side gives you effectively-once semantics, because the processed offset is tracked alongside the projected data. The other advantage of intra-service read-sides is that the modelling stays behind closed doors: you can refactor your tables freely as long as the public REST endpoints offer the same API.
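As a rough sketch, an intra-service read-side is a ReadSideProcessor. Assuming a Cassandra-backed store, a hypothetical ChirpAdded event tagged with ChirpEvent.TAG, and made-up table and column names, it could look like this:

import akka.Done;
import com.datastax.driver.core.BoundStatement;
import com.lightbend.lagom.javadsl.persistence.AggregateEventTag;
import com.lightbend.lagom.javadsl.persistence.ReadSideProcessor;
import com.lightbend.lagom.javadsl.persistence.cassandra.CassandraReadSide;
import com.lightbend.lagom.javadsl.persistence.cassandra.CassandraSession;
import org.pcollections.PSequence;
import org.pcollections.TreePVector;

import javax.inject.Inject;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletionStage;

public class ChirpReadSideProcessor extends ReadSideProcessor<ChirpEvent> {

  private final CassandraReadSide readSide;
  private final CassandraSession session;

  @Inject
  public ChirpReadSideProcessor(CassandraReadSide readSide, CassandraSession session) {
    this.readSide = readSide;
    this.session = session;
  }

  @Override
  public ReadSideHandler<ChirpEvent> buildHandler() {
    // Lagom tracks the consumed offset next to the projected data, which is
    // what gives the intra-service read-side its effectively-once behaviour.
    return readSide.<ChirpEvent>builder("chirp_read_side_offset")
        .setGlobalPrepare(this::createTable)
        .setEventHandler(ChirpEvent.ChirpAdded.class, this::insertChirp)
        .build();
  }

  @Override
  public PSequence<AggregateEventTag<ChirpEvent>> aggregateTags() {
    return TreePVector.singleton(ChirpEvent.TAG);
  }

  private CompletionStage<Done> createTable() {
    return session.executeCreateTable(
        "CREATE TABLE IF NOT EXISTS chirps (userId text, message text, PRIMARY KEY (userId))");
  }

  private CompletionStage<List<BoundStatement>> insertChirp(ChirpEvent.ChirpAdded event) {
    return session
        .prepare("INSERT INTO chirps (userId, message) VALUES (?, ?)")
        .thenApply(ps -> Arrays.asList(ps.bind(event.getUserId(), event.getMessage())));
  }
}

The processor would be registered once in the service implementation via readSide.register(ChirpReadSideProcessor.class).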
With a remote read-side, on the other hand: (1) the write-side has to publish its events to the broker, (2) the broker's delivery guarantee is at-least-once (what you generally want) or at-most-once, so the end-to-end guarantees are no longer effectively-once, (3) the topic is accessible by other services (this isn't bad, it's just an extra consideration), and (4) the write-side and read-side live in different services, which is a bit unnatural.

There's a demo of a remote read-side in the online-auction-java demo app: the search-service is a remote read-side that consumes events from many topics, consolidating the information into a single Elasticsearch index. In that case a remote read-side makes a lot of sense because (a) we're using a specific storage technology (Elasticsearch) and (b) we're merging streams coming from two different upstream services.
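For contrast with the intra-service processor above, here's a minimal sketch of the consuming end of a remote read-side, assuming the ChirpService publishes a chirps topic as shown earlier and indexChirp is a stand-in for whatever write the read-side actually does:

import akka.Done;
import akka.stream.javadsl.Flow;

import javax.inject.Inject;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class ChirpConsumer {

  @Inject
  public ChirpConsumer(ChirpService chirpService) {
    // at-least-once: the offset is committed back to Kafka only after the
    // handler completes, so a crashed consumer resumes where it left off,
    // possibly re-processing the last few chirps (hence: keep handlers idempotent).
    chirpService.chirpsTopic()
        .subscribe()
        .atLeastOnce(Flow.<Chirp>create().mapAsync(1, this::indexChirp));
  }

  private CompletionStage<Done> indexChirp(Chirp chirp) {
    // Hypothetical: upsert the chirp into this service's own store.
    return CompletableFuture.completedFuture(Done.getInstance());
  }
}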
HTH,