Project repository: https://github.com/gcristian/minka
<dependency>
  <groupId>io.tilt.minka</groupId>
  <artifactId>minka-server</artifactId>
  <version>0.2.1</version>
</dependency>
Minka provides a compact API at io.tilt.minka.api to do a few things:

- DutyBuilder / PalletBuilder: wrap and internalize user entities.
- MinkaClient: input CRUD duties and pallets, after the first balance.
- Minka: start the server and listen for events.

Minka is designed to embrace incidents, which leads to an event-driven architecture that its API also reflects.
These are the events the client must listen for and react to:
event type | required | when | what for |
---|---|---|---|
load | yes | balance phase* | enter the initial duties and pallets into Minka; only called at the first balancing phase of a newly elected leader |
capture, release | yes | distribution phase | honour the Inversion of Control contract: take or discard duties and pallets (start or stop user tasks) |
report | yes | proctor phase | return the passed and currently captured duties, held in permanent custody |
update, transfer | no | anytime | react to a duty or pallet's payload update, or to a transferred message, for a duty or pallet previously captured |

*see the phases in the architecture section.
This creates a Minka server shard, starting with one pallet using a fair-weight balancer and one duty weighing 200. When Minka yields control of the duty to us, we simply add it to the myDuties set.
In a real use case, all application instances must reuse the same routine and behave identically towards Minka.
final Pallet<String> p = PalletBuilder.<String>
builder("pal").with(Balancer.Strategy.FAIR_WEIGHT).build();
final Set<Duty<String>> myDuties = new TreeSet<>();
new Minka<String, String>("localhost:2181/minka/sampleApp")
.onPalletLoad(()-> newHashSet(p))
.onDutyLoad(()-> newHashSet(DutyBuilder.<String>
builder("aDuty", "pal").with(200d).build()))
.onDutyCapture(duties->myDuties.addAll(duties))
.onDutyRelease(duties->myDuties.removeAll(duties))
.onDutyReport(()->myDuties)
.setCapacity(p, 3000d)
.load();
// and we're distributed !
Every Minka class instance is a Shard, and can define which cluster it is a member of; by default the cluster is unnamed. If we plan to share the ZooKeeper cluster with other Minka-enabled applications, we must provide a custom name or use a ZooKeeper chroot folder.
Every JVM should host only one member. For testing only, we can specify different TCP ports under the same cluster name (unnamed is fine) or ZooKeeper chroot folder: we will then have a "standalone" cluster communicating through the loopback interface.
final Config custom = new Config();
// if we're sharing ZK
custom.getBootstrapConf().setServiceName("sampleApp");
// if we're shooting several servers in the same machine
custom.getBrokerConfig().setHostPort("localhost:5749");
new Minka<String, String>(custom);
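Building on the snippet above, a second shard on the same machine could be bootstrapped on a different broker port so both join the same cluster. This is only a sketch reusing the Config accessors shown above; the port value 5750 is an arbitrary choice, not prescribed by Minka:

```java
// Sketch: a second test shard on the same machine, assuming the Config
// accessors shown above; the port value is arbitrary.
final Config second = new Config();
second.getBootstrapConf().setServiceName("sampleApp");   // same cluster name
second.getBrokerConfig().setHostPort("localhost:5750");  // different TCP port
new Minka<String, String>(second);
// Both shards now communicate over the loopback interface as a standalone cluster.
```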
As the current version of Minka lacks a distributed-transactional storage, CRUD requests operate only within the current leader's lifecycle. On a leader reelection, all previous CRUD requests survive only if they are included in the suppliers provided to the newly elected leader, in the onPalletLoad and onDutyLoad events.
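One way to make CRUD-added duties survive a leader reelection is to keep them in a small local registry that the onDutyLoad supplier reads from. The sketch below is plain Java: the class name DutyRegistry is our own invention, not part of the Minka API, and the Minka wiring is only indicated in comments.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical helper (not part of Minka): records every duty id added or
// removed via the client, so a new leader's load event can re-supply them.
final class DutyRegistry {
    private final Set<String> dutyIds = ConcurrentHashMap.newKeySet();

    void recordAdd(String id)    { dutyIds.add(id); }     // call alongside client add
    void recordRemove(String id) { dutyIds.remove(id); }  // call alongside client remove

    // Snapshot for the onDutyLoad supplier of a newly elected leader.
    Set<String> snapshot() { return Set.copyOf(dutyIds); }
}
```

The onDutyLoad callback would then rebuild Duty instances from snapshot() with DutyBuilder, instead of returning only the hard-coded initial set.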
These are the CRUD requests that can be performed through the io.tilt.minka.api.MinkaClient class:
method | action |
---|---|
add | adds a new pallet or duty to the system, so a shard can capture and process it |
remove | removes an existing pallet or duty, so the shard that captured it releases it |
update | updates the pallet or duty payload at the shard holding it in custody |
transfer | sends a message to a pallet or duty, at the shard holding it in custody |
This adds a newly created duty to the system, on the same existing pallet, and removes the previous one.
MinkaClient<String, String> cli = server.getClient();
cli.add(DutyBuilder.<String>builder("other", "pal").build());
cli.remove(aDuty);
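For completeness, the update and transfer requests from the table above might be issued the same way. The exact signatures are not shown on this page, so treat the argument shapes below as assumptions:

```java
// Sketch only: argument shapes are assumed, not confirmed by this page.
// Replace the duty's payload/weight at the shard holding it:
cli.update(DutyBuilder.<String>builder("other", "pal").with(300d).build());
// Send a message to the shard currently holding the duty:
cli.transfer(aDuty, "a-message-payload");
```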
What happens if your application is not suited to this processing granularity?
*see the roadmap