This section describes how the most significant interaction flows between the leader and follower roles are designed, trying to stay brief and code-accurate.
When we create the server:
new Minka<String, Integer>("localhost:2181/minka");
Minka starts an embedded TCP server on port 5748 and communicates with other instances of the same host application to organize the distribution and balance of user entities.
When we integrate the library into our host application and call load(), it triggers the Minka Spring context with all the starting agents.
new Minka<String, Integer>("localhost:2181/minka")
.load();
So the bootstrap resolves the current shard's host address and port, and starts the broker that will be used for communication. It also starts a scheduler that controls the agents so they can run safely without interfering with each other's work. Then it starts both a follower and a leader process.
The Leader registers itself as a candidate with the ZooKeeper ensemble. If it wins the election it starts two agents: the proctor and the distributor; otherwise it remains dormant until the currently elected leader fails or shuts down.
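Minka's own candidacy code is internal, but the pattern is the classic ZooKeeper leader latch. The snippet below is only an illustrative sketch using Apache Curator's LeaderLatch, not Minka's actual implementation, and the latch path /minka-leader is an assumption; it just shows the win/dormant branches described above:
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.framework.recipes.leader.LeaderLatchListener;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Illustration only: same ensemble the Minka constructor points to.
CuratorFramework zk = CuratorFrameworkFactory.newClient(
        "localhost:2181/minka", new ExponentialBackoffRetry(1000, 3));
zk.start();

LeaderLatch candidacy = new LeaderLatch(zk, "/minka-leader");
candidacy.addListener(new LeaderLatchListener() {
    @Override public void isLeader() {
        // won the election: start the leader-only agents (proctor, distributor)
    }
    @Override public void notLeader() {
        // dormant: wait until the elected leader fails or shuts down
    }
});
candidacy.start();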
The Follower locates the current leader and, through the heartpump, sends heartbeats to it. It also starts a policies agent that releases duties in case the follower ceases to receive clearance from the leader.
This phase affects the supplier provided on the following event:
final Set<Duty<String>> myCapture = new HashSet<>();
server.onDutyReport(()-> myCapture );
The heartpump agent runs with a predefined frequency: it creates heartbeats containing the shard's identity, capacity, location, and the attached duties reported by the user's delegate, and sends them to the leader through the Broker.
At the leader, the bookkeeper function analyzes the current cluster state: whether there is a roadmap in progress, whether this is the first HB of a shard, or anything else of importance to account for. At first it adds a new Shard entity to the partition table.
The Proctor agent then checks the partition table's shards and calculates each shard's state from the distance in time between HBs and a predefined time-window range, ranking the shard up and moving it to the ONLINE state; after some periods in that state the shard is considered for distribution of duties.
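To picture what travels in that flow, here is a reader's sketch of roughly what a heartbeat carries and how a fixed-frequency pump could send it; the classes, fields and the 2-second period are assumptions for illustration, not Minka's internal types:
import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Reader's sketch: roughly the data described above, not Minka's real heartbeat class.
class HeartbeatSketch {
    String shardId;          // shard identity
    double capacity;         // reported capacity
    String brokerLocation;   // host:port of the embedded broker
    Set<String> dutyIds;     // duties currently reported by the user's delegate
}

class HeartpumpSketch {
    private final ScheduledExecutorService pump = Executors.newSingleThreadScheduledExecutor();

    // Send a heartbeat to the leader at a predefined frequency (the period is an assumption).
    void start(Runnable buildAndSendHeartbeat) {
        pump.scheduleAtFixedRate(buildAndSendHeartbeat, 0, 2, TimeUnit.SECONDS);
    }
}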
All shards start in the JOINING state in the Proctor's ranking and then go ONLINE. A communication interruption strong enough may cause the QUARANTINE state; if the shard shuts down orderly it goes to QUITTED, and if it ceases to send HBs or the Proctor ceases to see it, it goes to OFFLINE.
The last two states are unrecoverable.
If there are shards in a state other than ONLINE, the phase does not move ahead, but no shard is allowed to stay too long in a temporal state like JOINING or QUARANTINE.
When all shards have been ONLINE for a number of predefined periods, the phase moves to balancing. If there is a roadmap in progress, it goes on even if a shard is not ONLINE.
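The lifecycle above can be summarized as a small state machine; the enum below is only a reader's sketch of those states and of which ones can recover, not Minka's actual type:
// Reader's sketch of the shard lifecycle, not Minka's real enum.
enum ShardStateSketch {
    JOINING,     // initial state while the Proctor ranks the first heartbeats
    ONLINE,      // healthy; after some periods here the shard gets duties
    QUARANTINE,  // strong communication interruption; may recover to ONLINE
    QUITTED,     // orderly shutdown: unrecoverable
    OFFLINE;     // heartbeats stopped or the Proctor lost sight of it: unrecoverable

    boolean isRecoverable() {
        return this == JOINING || this == ONLINE || this == QUARANTINE;
    }
}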
The following events occur when Minka needs to know the user entities at the first distribution, which happens only once in a leader's lifetime, and again each time a new leader is elected.
ctx.onPalletLoad(()->...);
ctx.onDutyLoad(()->...);
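A slightly fuller sketch of those suppliers is shown below; the helpers loadPalletsFromConfig() and loadDutiesFromStorage() are placeholders for however the host application builds its Pallet and Duty instances (the exact factories vary by Minka version), the suppliers are assumed to return sets as onDutyReport does, and Pallet is assumed to be parameterized like Duty:
// Sketch only: the load*() helpers are placeholders for host-application code
// that builds Pallet<String> and Duty<String> instances.
final Set<Pallet<String>> pallets = loadPalletsFromConfig();
final Set<Duty<String>> duties = loadDutiesFromStorage();

// Called once per leader lifetime, at the first distribution (ctx is the Minka instance).
ctx.onPalletLoad(() -> pallets);
ctx.onDutyLoad(() -> duties);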
The distributor agent checks the cluster state and proceeds only if all shards are online.
It asks the partition master (usually the partition delegate instance) for pallets and duties, which were defined through Minka::[onPalletLoad(..)|onDutyLoad(..)]. This is the only moment when both domain entities are loaded and taken into the partition table's custody, by adding them as a CRUD operation, the same one that can be executed through MinkaClient::add().
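For comparison, here is a sketch of entering a duty through that same CRUD path at runtime; the way the client is obtained and the exact add() signature are assumptions about the API, not confirmed usage:
// Sketch only: obtaining the client and the add() signature are assumptions.
MinkaClient client = server.getClient();
client.add(someNewDuty);   // the same CRUD operation the distributor records at first load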
Then an arrangement function reads the duties in the Stage (already distributed) and the duties in the NextStage (CRUD ones), detects missing duties (distributed but recorded as absent by the Bookkeeper), and detects offline shards (and, consequently, dangling duties to be redistributed).
The Arranger traverses all pallets with a Balancer predefined by the user through the BalancingMetadata set at Pallet creation, handing it a NextTable composed of the Stage, the NextStage, and a Migrator that can be used to affect the distribution process.
All balancers provided by Minka check the NextTable's Shards and ShardEntities and operate the Migrator to override a shard's duties, transfer duties between shards, etc.
The Migrator validates that every operation is coherent and applies the changes to a Roadmap element that organizes them so the transportation process can be applied safely and smoothly.
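To give a feel for how a balancer drives the Migrator, here is a purely conceptual sketch; the interfaces and method names below (balance, override, transfer) are assumptions standing in for Minka's real Balancer, NextTable and Migrator contracts:
import java.util.Map;
import java.util.Set;

// Conceptual sketch only: these names are assumptions, not Minka's real signatures.
interface MigratorSketch<D> {
    void override(String shardId, Set<D> duties);              // replace a shard's duty set
    void transfer(String fromShard, String toShard, D duty);   // move a single duty
}

interface BalancerSketch<D> {
    // stage: duties already distributed per shard; nextStage: CRUD duties awaiting a shard.
    void balance(Map<String, Set<D>> stage, Set<D> nextStage, MigratorSketch<D> migrator);
}

// A trivial strategy: place every pending duty on whichever shard currently holds the fewest.
class FillUpBalancerSketch<D> implements BalancerSketch<D> {
    @Override
    public void balance(Map<String, Set<D>> stage, Set<D> nextStage, MigratorSketch<D> migrator) {
        for (D duty : nextStage) {
            String leastLoaded = null;
            for (Map.Entry<String, Set<D>> e : stage.entrySet()) {
                if (leastLoaded == null || e.getValue().size() < stage.get(leastLoaded).size()) {
                    leastLoaded = e.getKey();
                }
            }
            if (leastLoaded != null) {
                Set<D> target = stage.get(leastLoaded);
                target.add(duty);                        // keep the local view consistent
                migrator.override(leastLoaded, target);  // record the change for the Roadmap
            }
        }
    }
}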
This phase affects the consumers defined on the following events:
server.onDutyCapture((d)->...);
server.onPalletCapture((d)->...);
server.onDutyRelease((d)->...);
server.onPalletRelease((d)->...);
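As a sketch of what those consumers usually do in a host application: myWorkers is a placeholder for whatever machinery actually performs (or stops performing) the duties, and the single-argument shape simply follows the (d)-> form shown above:
// Sketch only: myWorkers stands in for host-application machinery.
server.onDutyCapture(d -> myWorkers.start(d));    // attachment: begin work for what was captured
server.onDutyRelease(d -> myWorkers.stop(d));     // detachment: stop it and free its resources
server.onPalletCapture(p -> myWorkers.prepare(p));
server.onPalletRelease(p -> myWorkers.cleanup(p));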
The distributor agent has the roadmap plan already built and starts driving it. Through the broker it sends all duty detachments to their shard locations.
At the follower, the partition manager invokes the user's partition delegate to honor the duty contract: release the duties, stop them, kill them, whatever that means for the host application.
So the heartpump continues to send heartbeats through the broker, but without those released duties.
At the leader, the bookkeeper notes the absence as part of a roadmap plan; after ensuring this is a constant situation, it updates the partition table to reflect the new reality and moves the roadmap pointer to the next phase: attachments. The same control flow applies to that step.
After both the detachment and attachment steps conclude, the distribution process starts again, following the proctor phase.