Rick

Tuesday, January 6, 2015

High speed microservices



This article endeavors to explain high speed microservices. If you are unfamiliar with the term microservices, you may want to first read this blog post on microservices by Michael Brunton and, if you have more time on your hands, this one by James Lewis and Martin Fowler.

High speed microservices is a philosophy and set of patterns for building services that can readily back mobile and web applications at scale. It uses a scale-up-and-out model, rather than a pure scale out model, to do more with less hardware. A scale-up model uses in-memory operational data, efficient queue hand-off, and async calls to handle more calls on a single node.

In general, the cloud scale out model employs a sense of reckless abandon. If your app has performance issues, no problem: spin up 100 servers. Still not fast enough? Try 1,000 servers. This has a cost. The scale-up model does not replace a cloud scale out model per se; it just makes a cloud scale out model more cost effective. Do more with less.

This is not to say that there are no advantages to the cloud scale out model. It is to say that doing more with less hardware saves money, and you can still scale out as needed in the cloud.

The beauty of high-speed microservices is that they get back to OOP roots, where data and logic live together in a cohesive, understandable representation of the problem domain, and away from the separation of data and logic: data lives with the service logic that operates on it. Less time is spent dealing with cache coherency issues because the services own or lease their data (own it for a period of time). The code to do the same things tends to be smaller.

You can expect to write less code. You can expect the code you write to run faster. To a true developer and software engineer this is a boon. Algorithm speed matters again; it is no longer dust on the scale while you wait for a response from the database. This movement frees you to do more in less time and to have code that runs orders of magnitude faster than typical IO-bound, cloud lemming services.

There are many frameworks that you can use to build a high speed microservice system. Vertx, Akka, Kafka, Redis, Netty, Node.js, Go channels, Twisted, QBit, etc. are all great technology stacks to build these types of services. This article is not about any particular technology stack or programming language but a more abstract coverage of what it means to build these types of high speed services, devoid of language or technology stack preference.

The model described in this article is the inverse of how many applications are built. It is not uncommon to need only 3 to 20 servers for a service where a traditional system might need 100s or even 1,000s. Your EC2 bill could be cut to 1/10th the cost, for example. This is not just supposition but actual observation.

In this model, you typically add extra services for failover support, not to scale out per se. If you adopt these strategies, you will reduce the number of servers needed and your code base will be more coherent.

You may have heard: keep your services stateless. We are recommending the opposite: make your services own their operational data.

Attributes of High speed services

High speed services have the following attributes:
  • High speed services are in-memory services
  • High speed services do not block
  • High speed services own their data
  • Scale out involves sharding services
  • Reliability is achieved by replicating service stores


An in-memory service is a service that runs in-memory. An in-memory service is non-blocking. An in-memory service can load its data from a central data store.

An in-memory service can load the data it owns asynchronously and does not block; it continues to service other requests while the data is loading.

At first blush it appears that an in-memory service could achieve its in-memory nature by using a cache. This is not the case. An in-memory service can use caches, but an in-memory service owns its core data. Cached data comes only from other services that own their data.

Single writer rule: only one service at any point in time can edit a particular set of service data.

In-memory services either own their data or own their data for a period of time. Owning the data for a period of time is a lease model.

Think of it this way: data can only be written to by one service at any given point in time. Cache inconsistencies and cache control logic are the root of all evil. The best way to keep data in sync with caches is to never use caches, or to use them sparingly. It is better to use a service store that can keep up with your application, vending the data as needed in a lease model, or to create longer leases on service data to improve speed. More on leasing and service stores later.
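As a minimal sketch of the single writer rule: the service writes data directly only when it holds the lease; otherwise it faults the data in from the service store first. The ServiceStore interface and UserData class here are hypothetical, illustrative names, not any framework's API.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical types used to illustrate the single writer rule and the lease model.
interface ServiceStore {
    // Asynchronously lease a record; the store grants the lease to one node only.
    void lease(String userId, Consumer<UserData> callback);
}

class UserData {
    String email;
}

// Only the node holding the lease may write a given user's data, so no
// locks or cache invalidation logic are needed.
class UserService {

    private final ServiceStore store;
    private final Map<String, UserData> owned = new HashMap<>();

    UserService(ServiceStore store) {
        this.store = store;
    }

    void updateEmail(String userId, String email) {
        UserData user = owned.get(userId);
        if (user != null) {
            user.email = email;                 // we hold the lease: write directly
        } else {
            store.lease(userId, leased -> {     // fault the data in and take the lease
                owned.put(userId, leased);
                leased.email = email;
            });
        }
    }
}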

Avoid the following:
  • Caching (use sparingly)
  • Clustering
  • Blocking
  • Transactions
  • Databases for operational data


Embrace the following:

  • In-memory service data and data faulting
  • Sharding
  • Async callbacks
  • Replication / Batching / Remediations
  • Service Stores for operational data



Data faulting and data leasing seem a lot like caching. The key differences are ownership of the data and the single writer principle.

Imagine a mobile app backed by a set of services that contain user data. The first call to any service checks whether that user's data is already loaded in the service. If the user data is not loaded, the call from the mobile app is put into a queue and waits for the user data to be loaded asynchronously. The service continues to handle other calls, gets notified when the user data loads, and then executes the queued call against the loaded data.

Since we can get many requests to load user data in a given second, we do not load users one at a time. We load 100 or 1,000 users at a time, or we batch all load requests every 50 ms (or both). Loading the user data when it is needed is called data faulting. Loading 1,000 users at a time, or all users requested in the last 50 ms, or all users requested since the last load, is called request batching. Batching combines many requests into a single message to optimize IO throughput. Data faulting works the same way your OS loads disk segments into memory pages for virtual memory.
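Here is a minimal sketch of data faulting with size-based batching; a 50 ms timer would also call flushLoads(), but the timer wiring is omitted. The ServiceStore interface and UserData class are hypothetical, illustrative names.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical store interface for batched, async loads (illustrative only).
interface ServiceStore {
    void loadAll(List<String> userIds, Consumer<Map<String, UserData>> callback);
}

class UserData { }

// Calls for users not yet in memory are parked; loads are issued in one
// batch when 1,000 user ids accumulate (or when the 50 ms timer fires).
class FaultingService {

    private final Map<String, UserData> loaded = new HashMap<>();
    private final Map<String, List<Runnable>> pending = new HashMap<>();
    private final ServiceStore store;

    FaultingService(ServiceStore store) {
        this.store = store;
    }

    void handleCall(String userId, Runnable call) {
        if (loaded.containsKey(userId)) {
            call.run();                          // data in memory: execute immediately
        } else {                                 // data fault: park the call
            pending.computeIfAbsent(userId, id -> new ArrayList<>()).add(call);
            if (pending.size() >= 1_000) {
                flushLoads();                    // size-based batch trigger
            }
        }
    }

    void flushLoads() {
        if (pending.isEmpty()) return;
        store.loadAll(new ArrayList<>(pending.keySet()), users -> {
            users.forEach((id, user) -> {
                loaded.put(id, user);
                pending.remove(id).forEach(Runnable::run);  // replay parked calls
            });
        });
    }
}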


High speed services employ the following:

  • Timed/Size Batching
  • Callbacks
  • Call interception to enable data faulting from the service store
  • Data faulting for elasticity


Data ownership

The more data you can keep in-memory, the faster your services can run. Not all use cases and data fit this model, and some exceptions can be made. The more important principle is data ownership, which comes from the canonical definition of microservices.

In-memory is a means to an end, mostly to facilitate non-blocking calls. The more important point is that the service owns its data instead of just being a view into shared data. The key principles are the single writer principle and the avoidance of caches.

Let's say that some data is historical data, and historical data rarely gets edited, but it does get edited. In this scenario it might make sense to load the historical data from a database, update the database directly, and skip the service store altogether, since the usage is rare and unlikely to hamper the overall performance of the system.

If the size of the data is an issue, remember that you can shard the services, and you can also fault data into a service server in batches. These two vectors should allow most if not all of the operational data to be loaded into memory, enabling the single writer principle. Think of this as the Pareto principle: you don't need all of the data in-memory, just the set that gives you the SLA you need. All would be nice, but you might fault in only 20% of the data and still have a really fast system.

A lease can be half an hour, 8 hours, or some other period of time. Once the lease has expired (expiry could be based on the last time the service data was used), the data just waits in the service store.

Why Lease? Why not just own?

Why not just own the data outright? You can, if the service data is small enough. Leasing data provides a level of elasticity, which allows you to spin up more nodes. If you optimize and tune the data load from the service store to the service, then loading user data becomes trivial and very performant. The faster and more trivial the data-fault loading, the shorter you can lease the data and the more elastic your services are. In like manner, services can save data periodically to the service store, or even keep data in a fault-tolerant local store (store and forward) and update the service store in batches, to accommodate speed and throughput. Leasing data is not the same as getting data from a cache: in the lease model only one node can edit the data at any given time.
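Here is a minimal sketch of leasing with periodic batched saves. The ServiceStore interface, UserData class, and timer wiring are hypothetical; the point is that edits mark records dirty, a timer flushes them to the store in one batch, and expired leases drop out of memory.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical store interface with a bulk save (illustrative only).
interface ServiceStore {
    void saveAll(List<UserData> batch);   // async bulk write to the store
}

class UserData {
    String id;
    String email;
}

// A periodic timer (wiring omitted) calls flush(), which pushes all dirty
// records to the service store in one batch and releases unused leases.
class LeasedUserService {

    static final long LEASE_MILLIS = 30 * 60 * 1_000L;   // e.g. a 30 minute lease

    private final Map<String, UserData> owned = new HashMap<>();
    private final Map<String, Long> lastUsed = new HashMap<>();
    private final Set<String> dirty = new HashSet<>();
    private final ServiceStore store;

    LeasedUserService(ServiceStore store) {
        this.store = store;
    }

    void updateEmail(String userId, String email) {
        UserData user = owned.get(userId);    // assume already faulted in
        user.email = email;
        dirty.add(userId);
        lastUsed.put(userId, System.currentTimeMillis());
    }

    void flush() {
        List<UserData> batch = new ArrayList<>();
        for (String id : dirty) {
            batch.add(owned.get(id));
        }
        store.saveAll(batch);                 // one batched write instead of many
        dirty.clear();

        long now = System.currentTimeMillis();
        lastUsed.entrySet().removeIf(entry -> {
            if (now - entry.getValue() > LEASE_MILLIS) {
                owned.remove(entry.getKey()); // lease expired: the data now just
                return true;                  // waits in the service store
            }
            return false;
        });
    }
}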


Service Sharding / Service Routing

Elasticity is achieved through leasing and sharding. A service server node owns service data for a period of time, and all calls for a given user's data are made to that server. In front of a series of service servers sits a service router. A service router could be an F5 (network load balancer) that maintains server/user affinity through an HTTP header, or a more complex entity that knows more about the problem domain and knows how to route calls to other back end services.
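A minimal sketch of shard/affinity routing follows, assuming a hypothetical ServiceRouter class and a static list of servers; a real router (an F5, HAProxy, or a smarter routing tier) would add health checks and rebalancing.

import java.util.List;

// Every call for a given user id lands on the same service server,
// preserving the single writer rule for that user's data.
class ServiceRouter {

    private final List<String> serviceServers;   // e.g. "host:port" entries

    ServiceRouter(List<String> serviceServers) {
        this.serviceServers = serviceServers;
    }

    String routeFor(String userId) {
        // Mask off the sign bit so the shard index is never negative.
        int shard = (userId.hashCode() & 0x7fffffff) % serviceServers.size();
        return serviceServers.get(shard);
    }
}

With this mapping, routeFor("user42") returns the same host on every call, so that host stays the single writer for that user until the server list changes.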


Fault tolerance

The more important the data, the more replication and synchronization needs to be done, and the more resources are needed to ensure data safety.

If a service node goes down, a service router can select another service node to do that work. The service data will be loaded via async data faulting in batches. If the service was sending updates as changes occurred, then no state is lost except what was not yet sent to the service store. The more important the state/data, the more synchronization should be done when the data is modified. For example, the service store can send an async confirmation of a save, and the service can then enqueue a response to the client. The client or service tier could opt to add retry logic if it does not get a response from the server, as sketched below. You can also replicate calls to services, or create a local store and forward for important calls.
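A minimal sketch of such client-side retry logic, assuming a generic async call that returns a java.util.concurrent.Future; the RetryingCaller class is a hypothetical name, not part of any framework mentioned here.

import java.util.concurrent.Callable;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// If no response arrives within the timeout, the call is replayed
// (a router would typically pick another node here). Requires attempts >= 1.
class RetryingCaller {

    <T> T callWithRetry(Callable<Future<T>> asyncCall,
                        int attempts, long timeoutMillis) throws Exception {
        TimeoutException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return asyncCall.call().get(timeoutMillis, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                last = e;   // no response from this node: try again
            }
        }
        throw last;
    }
}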




Service Store

The primary store for a high speed service system is a service store. A service store can treat the service data as opaque. A service store is not a database, and it is not a cache either, though it may also keep the data in-memory. The primary function of a service store is vending data quickly to services that are faulting data in.

A service store also takes care of data replication for data safety and safe storage. A service store should be able to bulk save and bulk load data to/from a service and to/from replicas. A service store, like the service itself, should never block; responses are sent asynchronously. WebSockets or plain sockets are a great mechanism for sending responses from a service store to a number of services. JSON, or some form of binary JSON, is a good transport and storage format for a service store.
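Based on the description above, a service store's contract might look roughly like the following interface. This is an assumed sketch, not a real API; the method names are illustrative.

import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A sketch of a service store contract: opaque (e.g. JSON) records,
// bulk operations, async responses, and leases.
interface ServiceStore {

    // Bulk-load records for the given keys; the response arrives
    // asynchronously, e.g. over a WebSocket, as opaque JSON strings.
    void loadAll(List<String> keys, Consumer<Map<String, String>> jsonByKey);

    // Bulk-save opaque records; the store replicates each record to at
    // least one other store node behind this call.
    void saveAll(Map<String, String> jsonByKey);

    // Lease a record so the calling service becomes its single writer
    // for the given period of time.
    void lease(String key, long leaseMillis, Consumer<String> json);
}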

Service stores are elastic and typically sharded, but not as elastic as service servers. Service stores employ replication and synchronization to limit data loss. Service stores are special servers so that the rest of your application can be elastic and more fault tolerant. It is typical to over-provision service stores to allow for a particular span of growth. Adding new nodes and setting up replication is more deliberate than it is with services. The service store and the leasing model are what enable the services to be elastic.

By special servers, we mean that service store servers might use special hardware, like disk-level replication, and might employ additional monitoring. All service data saved to a service store should be saved on at least two servers; a certain level of replication is expected. Service stores may also keep a transaction log so that other processes can follow the log and update databases for querying and reporting.

In high-speed services, databases are only for reporting, long term storage, backup, etc. All operational data is kept in, and vended out of, the service stores, which maintain their own replication and backups for recovery. All modifications to data are done by services. Service stores typically use JSON or some other standard data format both for transaction logs and for storage into secondary databases.

A service store is the polar opposite of Big Data. A service store is just operational data. One could tail the transaction logs to create Big Data.

Active Objects / Threading model:



To minimize complex synchronization code that can become a bottleneck, one should employ some form of the Active Object pattern for stateful, high-speed services. One could use an event bus system like Vertx or Node.js, or an Actor system like Akka, Go channels, or Python Twisted, and build one's own Active Object service system.

The active object pattern separates method execution from method invocation for objects that each reside in their own thread of control (or in the case of QBit for a group of objects that are in the same thread of control).

The goal of Active Objects is to introduce concurrency by using asynchronous method invocation and a scheduler for handling requests. The scheduler could be a resumable thread draining a queue of method calls. This scheduler could also check whether the data needed for a call is already loaded, and fault it in from the service store into the running service before the call is made.

The Active Object pattern consists of six elements:


  1. A client proxy to provide an interface for clients. The client proxy can be local (local client proxy) or remote (remote client proxy).
  2. An interface which defines the method request on an active object.
  3. A queue of pending method requests from clients.
  4. A scheduler, which decides which request to execute next. It could, for example, delay invocation until service data is faulted in, reorder method calls based on priority, or manage several related services so that those services can make local non-enqueued calls to each other.
  5. The implementation of the active object methods. Contains your code.
  6. A service callback for the client to receive the result.
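Below is a minimal Active Object sketch in Java (illustrative only, not QBit's implementation); the comments map each piece to the numbered elements above. The ActiveCounter class and its methods are hypothetical names.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Invocation is separated from execution: proxy methods enqueue work, and a
// single scheduler thread drains the queue, so the state needs no locks.
class ActiveCounter {

    private int count;                               // (5) the implementation state
    private final BlockingQueue<Runnable> mailbox =
            new LinkedBlockingQueue<>();             // (3) pending method requests

    ActiveCounter() {
        Thread scheduler = new Thread(() -> {        // (4) the scheduler thread
            try {
                while (true) {
                    mailbox.take().run();            // drain one request at a time
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        scheduler.setDaemon(true);
        scheduler.start();
    }

    public void increment() {                        // (1)(2) proxy method: enqueues,
        mailbox.offer(() -> count++);                //        does not execute
    }

    public void get(Consumer<Integer> callback) {    // (6) result returned via a
        mailbox.offer(() -> callback.accept(count)); //     callback, asynchronously
    }
}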


Developing high-speed microservices (tools needed)

IN PROGRESS


Docker, Rocket, Vagrant, EC2, boto, Chef, Puppet, testing, perf testing. (TBD)




Service discovery and health

IN PROGRESS

Consul, etcd, Zookeeper, Nagios, Sensu, SmartStack, Serf, DNS, (TBD)


Similarities to plain microservices


IN PROGRESS

Data ownership, standalone process, container which has all parts needed, docker is the new war file, etc.

What makes high speed microservices different from plain microservices


IN PROGRESS 

Async, Non-blocking, more focus on data ownership and data faulting



Glossary:


Service Store: Sharded, fault tolerant, opaque storage of service data. The service store is what enables services to be elastic.
Service Server: A server that hosts one or more services.
High Speed Service: An in-memory, non-blocking service that owns its service data.
Database: Something that does reporting or long term backups for a high speed service system. A database never holds operational data (there is an uncommon exception to this rule, beyond the scope of this article).
Service Router: The first tier of servers, which decides which services to route calls to based on sharding rules that can be simple or complex. Simple rules can be handled by HAProxy or an F5; complex rules can be handled by a service routing tier.


Monday, January 5, 2015

Quick Start QBit programming

QBit is a microservice framework. Let's create a simple QBit example: a TODO service built with QBit and gradle. Later examples will show how to do the same with maven.
QBit is very fast. Although the programming model seems easy, there are some powerful things going on underneath the covers. QBit enables development of async services and in-memory services. We will cover more details in future tutorials; this one is just to break the ice.
The example we create will be available via REST/JSON. We will make it a standalone application using gradle. The example will be CURLable: you can access it from the command line utility called curl.

CURLable example

To query the size of the todo list:
curl localhost:8080/services/todo-service/todo/count
To add a new TODO item:
curl -X POST -H "Content-Type: application/json" -d '{"name":"xyz","description":"xyz"}' http://localhost:8080/services/todo-service/todo
To get a list of TODO items:
curl http://localhost:8080/services/todo-service/todo/

Gradle build file

To run the sample app easily and to generate executable artifacts we will use gradle.
Here is the gradle build file.
group = 'io.advantageous.qbit.examples'

apply plugin: 'idea'
apply plugin: 'java'
apply plugin: 'maven'
apply plugin: 'application'

version = '0.1-SNAPSHOT'


sourceCompatibility = JavaVersion.VERSION_1_8
targetCompatibility = JavaVersion.VERSION_1_8


sourceSets {
    main {
        java {
            srcDir 'src/main/java'
        }
        resources {
            srcDir 'src/main/resources'
        }
    }
}


mainClassName = "io.advantageous.qbit.examples.TodoMain"

repositories {
    mavenLocal()
    mavenCentral()
}



dependencies {
    compile group: 'io.advantageous.qbit', name: 'qbit-vertx', version: '0.5.2-SNAPSHOT'
    compile "org.slf4j:slf4j-api:[1.7,1.8)"
    compile 'ch.qos.logback:logback-classic:1.1.2'
    testCompile group: 'junit', name: 'junit', version: '4.10'
}




idea {
    project {
        jdkName = '1.8'
        languageLevel = '1.8'
    }
}
There are gradle plugins for IntelliJ and for Eclipse. You will need to install gradle; see the gradle documentation for installation instructions.

Java POJO for TODO

QBit is a simple, easy-to-use framework for building REST services. You might be surprised just how easy: QBit can turn most Java POJOs into JSON with no annotations.
The gradle file will be more complicated than our Java code. :)
Here is the TODO item for our example:
package io.advantageous.qbit.examples;

import java.util.Date;


public class TodoItem {


    private final String description;
    private final String name;
    private final Date due;

    public TodoItem(final String description, final String name, final Date due) {
        this.description = description;
        this.name = name;
        this.due = due;
    }

    public String getDescription() {
        return description;
    }

    public String getName() {
        return name;
    }

    public Date getDue() {
        return due;
    }
}

Java Service for TODO

The TODO service is defined as follows:
package io.advantageous.qbit.examples;


import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestMethod;

import java.util.ArrayList;
import java.util.List;


@RequestMapping("/todo-service")
public class TodoService {


    private List<TodoItem> todoItemList = new ArrayList<>();


    @RequestMapping("/todo/count")
    public int size() {

        return todoItemList.size();
    }

    @RequestMapping("/todo/")
    public List<TodoItem> list() {

        return todoItemList;
    }

    @RequestMapping(value = "/todo", method = RequestMethod.POST)
    public void add(TodoItem item) {

        todoItemList.add(item);
    }

}

Main method to run service

Notice the use of RequestMapping; it works in much the same way as the Spring MVC REST annotations and provides a subset of what Spring MVC provides.
The add method gets called when someone POSTs to the URI /todo.
To run this service, you need to start it up. You do this with a ServiceServer. Starting up a ServiceServer is easier than you might think. A service bundle can specify different threading models, so that all services in the bundle run in the same thread or in different threads.
QBit uses apartment model threading for services. It uses a very efficient queuing mechanism to limit the amount of hand-off between the IO threads and the service threads.
Main method
package io.advantageous.qbit.examples;

import io.advantageous.qbit.server.ServiceServer;
import io.advantageous.qbit.server.ServiceServerBuilder;

public class TodoMain {

    public static void main(String... args) {
        ServiceServer server = new ServiceServerBuilder().build();
        server.initServices(new TodoService());
        server.start();
    }

}
ServiceServerBuilder allows you to set up properties like the port and the NIC interface that you are binding your service to. It also has tweakable performance settings, which will make more sense to cover in an advanced tutorial.
Services are available over REST and WebSocket.
To run this service, you need gradle.
Gradle commands you might care about:
gradle idea
The above generates an IntelliJ IDEA project. There is also a gradle plugin for Eclipse.
gradle run
The above runs the example from gradle.
gradle distZip
unzip ./build/distributions/qbit-example-0.1-SNAPSHOT.zip 
qbit-example-0.1-SNAPSHOT/bin/qbit-example
Since we are using gradle we can easily distribute a zip file (or tar file) with all of the jar files we need to execute our service.
This concludes our first getting started tutorial. QBit is very fast, and although the programming model seems easy, there are some powerful things going on underneath the covers. QBit enables development of async services and in-memory services. We will cover more in future tutorials. Stay tuned.

Learn more about the goals of QBit and what we are trying to enable in the posts that follow.

Monday, December 29, 2014

Quick guide to programming services in QBit

QBit (quick guide)
QBit is a queuing library for services. It is similar to many other projects. QBit is just a library, not a platform. QBit has libraries to put a service behind a queue. You can use QBit queues directly, or you can create a service. A service, in the QBit world, is a Java class whose methods are executed via queues. QBit implements apartment model threading and is similar to the Actor model. QBit does not use the disruptor (yet, anyway); it uses regular Java queues. QBit can do 100 million ping pong calls per second. QBit also supports calling services via REST and WebSocket.
This post is more code example than verbiage. Auto flush has been added but is not explained well yet. This is more to whet one's appetite than a well thought out tutorial.

QBit Overview 1

  • library for services
  • library not a platform or framework
  • allows putting service behind a queue
  • services are only accessed by one thread
  • No thread sync is typically needed in services

QBit Overview 2

  • You can use QBit queues directly
  • or you can create a service
  • Embeddable (can work in Tomcat or Vertx or Spring Boot)
  • Service is a Java class whose methods are executed via queues

QBit Overview 3

  • implements apartment model threading and is similar to Actors
  • Does not use disruptor
  • Uses regular Java Queues
  • Fast 100 million ping pong calls per second

QBit Overview 4

  • Supports calling services via REST, and WebSocket
  • Uses batching to reduce thread hand off to queues
  • Items to be processed are collected and sent in batches not one at a time
  • Batching reduces thread sync time and accessing shared variables (volatile)

QBit queue example

        BasicQueue<Integer> queue = BasicQueue.create(1000);

/* In another thread */

        SendQueue<Integer> sendQueue = queue.sendQueue();
        sendQueue.send(index); /* send an item but sends them in batches */

        //Flush sends every so often (in timer or ...)
        sendQueue.flushSends();
        //Send and do an immediate flush
        sendQueue.sendAndFlush(code);

/* In another thread */
        ReceiveQueue<Integer> receiveQueue = queue.receiveQueue();
        Integer item = receiveQueue.take();

QBit Flush/Batch

  • There is automatic flush support at some layers
  • More is being added

QBit Service Example

Todo list.

Todo Item

 public class TodoItem {


    private final String description;
    private final String name;
    private final Date due;

    public TodoItem(final String description, final String name,
                    final Date due) {
        this.description = description;
        this.name = name;
        this.due = due;
    }

    public String getDescription() { return description; }

    public String getName() { return name; }

    public Date getDue() { return due; }
 }

Todo Service Class

@RequestMapping("/todo-manager")
public class TodoService {

    private final TodoRepository todoRepository =
               new ListTodoRepository();

    @RequestMapping("/todo/list")
    public List<TodoItem> list() {

        return todoRepository.list();
    }

    @RequestMapping(value = "/todo",
                  method = RequestMethod.POST)
    public void add(TodoItem item) {

        todoRepository.add(item);
    }
}

Todo Service Class

  • Exposes service under URI /todo-manager
  • exposes method list under /todo-manager/todo/list
  • exposes add under /todo-manager/todo

Server code

public class TodoServerMain {

    public static void main(String... args) {
        ServiceServer server =
                  new ServiceServerBuilder().build();
        server.initServices(new TodoService());
        server.start();

    }
}

ServiceServer Builder

public class ServiceServerBuilder {

    private String host = "localhost";
    private int port = 8080;
    private boolean manageQueues = true;
    private int pollTime = 100;
    private int requestBatchSize = 10;
    private int flushInterval = 100;
    private String uri = "/services";


    public ServiceServer build() {...

ServiceServer Builder

  • ServiceServer Builder builds a service server.
  • flushInterval is how often you want it to flush queue batches
  • requestBatchSize is how large you would like the batch to the queue
  • uri is the root URI
  • pollTime is a low-level setting for how long you would like it to park between queue polls
  • More params will be exposed. (pipelining, HTTP compression, websocket buffer size)

Client Code REST POST Todo Items

        TodoItem todoItem = new TodoItem("Go to work",
                "Get on ACE train and go to Cupertino",
                new Date());

        final String addTodoURL =
                "http://" + host + ":" + port + "/services/todo-manager/todo";

        final String readTodoListURL
                = "http://" + host + ":" + port + "/services/todo-manager/todo/list";

        //HTTP POST
        HTTP.postJSON(addTodoURL, Boon.toJson(todoItem));

        todoItem = new TodoItem("Call Jefe", "Call Jefe", new Date());

        //HTTP POST
        HTTP.postJSON(addTodoURL, Boon.toJson(todoItem));

REST Client Code read TODO items

        //HTTP GET
        final String todoJsonList =
                HTTP.getJSON(readTodoListURL, null);

        final List<TodoItem> todoItems =
                Boon.fromJsonArray(todoJsonList, TodoItem.class);

        for (TodoItem todo : todoItems) {
            puts(todo.getName(), todo.getDescription(), todo.getDue());
        }

Websocket client

        Client client = new Client(host, port, "/services");
        TodoServiceClient todoService =
          client.createProxy(TodoServiceClient.class, "todo-manager");

        client.run();

        /* Add a new item. */
        todoService.add(new TodoItem("Buy Milk", ...);
        todoService.add(new TodoItem("Buy Hot dogs", ...);

        /* Read the items back. */
        todoService.list(todoItems -> { //LAMBDA EXPRESSION Java 8

            for (TodoItem item : todoItems) {
                puts (item.getDescription(), item.getName(), item.getDue());
            }
        });

Websocket client

  • Needs builder like ServiceServer.
  • ClientServiceBuilder will build ServiceClient
  • Creates proxy
  • Proxy allows async callbacks

Websocket Client Proxy Interface

public interface TodoServiceClient {

        void list(Callback<List<TodoItem>> handler);

        void add(TodoItem todoItem);


}

Callback

public interface Callback<T> extends Consumer<T> {

    default void onError(Throwable error) {

        LoggerFactory.getLogger(Callback.class)
                .error(error.getMessage(), error);
    }
}


QBit designed to be pluggable

  • Could be used with Spring Boot or Spring MVC
  • Can be used in Tomcat
  • Can be used in Vertx
  • Can be run standalone
  • Can be run without WebSocket/REST

QBit works with any class, no annotations needed

        SomeInterface myService = new SomeInterface() ...

        final Factory factory  = QBit.factory();
        final ServiceBundle bundle = factory.createServiceBundle("/root");


        bundle.addService(myService);


        final SomeInterface myServiceProxy =
              bundle.createLocalProxy(SomeInterface.class, "myService");

        myServiceProxy.method2("hi", 5);

QBit's series of factories, interfaces, and builders allows pluggability


public interface Factory {

    JsonMapper createJsonMapper();

    HttpServer createHttpServer(...);

    HttpClient createHttpClient(...);

    ServiceServer createServiceServer(...);

Factory SPI

public class FactorySPI {

    public static Factory getFactory() { ... }
    public static void setFactory(Factory factory) { ... }
    public static HttpServerFactory getHttpServerFactory() { ... }
    public static void setHttpServerFactory(HttpServerFactory factory) { ... }
    public static void setHttpClientFactory(HttpClientFactory factory) { ... }
    public static HttpClientFactory getHttpClientFactory() { ... }
A discovery mechanism finds factories and implementations.
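In Java, this kind of discovery is commonly implemented with java.util.ServiceLoader; whether QBit uses exactly this mechanism is an assumption on our part, but the idea looks like:

        import java.util.ServiceLoader;

        // Load the first Factory implementation found on the classpath via
        // META-INF/services entries (assumes Factory is a registered SPI).
        Factory factory = ServiceLoader.load(Factory.class).iterator().next();
        FactorySPI.setFactory(factory);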

Complex REST mappings

    @RequestMapping("/boo/baz")
    class Foo {

        @RequestMapping("/some/uri/with-uri-params/{0}/{1}/")
        public void someMethod(String a, int b) {

            methodCalled = true;
            puts("called a", a, "b", b);
        }
    }

Internals

  • Service is a queue system for a service
  • ServiceBundle is a collection of Services
  • You can work with Service directly w/o a proxy

Example working with Service Directly (INTERNAL)

    public static class Adder {
        int add(int a, int b) { ... } //your implementation
        void queueIdle() { ... } //optional callback
        void queueEmpty() { ... } //optional callback
        void queueShutdown() { ... } //optional callback
        void queueLimit() { ... } //optional callback
    }

Using a Service (INTERNAL)

        Service service = Services.regularService("test", adder, 1000,
                       TimeUnit.MILLISECONDS, 10);
        SendQueue<MethodCall<Object>> requests = service.requests();
        ReceiveQueue<Response<Object>> responses = service.responses();

        requests.send(MethodCallImpl.method("add", Lists.list(1, 2)));

        requests.sendAndFlush(MethodCallImpl.methodWithArgs("add", 4, 5));

        Response<Object> response = responses.take();
        Object o = response.body();

Batching method calls (INTERNAL)

        Service service = Services.regularService("test", adder, ...);
        SendQueue<MethodCall<Object>> requests = service.requests();
        ReceiveQueue<Response<Object>> responses = service.responses();

        List<MethodCall<Object>> methods = new ArrayList<>();

        for (int index = 0; index < 1000; index++) {
            methods.add(MethodCallImpl.method("add", Lists.list(1, 2)));
        }

        requests.sendBatch(methods);

Using JSON From Service (INTERNAL)

        Adder adder = new Adder();
        Service service = Services.jsonService("test", adder, ...;

        ReceiveQueue<Response<Object>> responses = service.responses();
        SendQueue<MethodCall<Object>> requests = service.requests();



        requests.send(MethodCallImpl.method("add", "[1,2]"));

        requests.send(MethodCallImpl.method("add", "[4,5]"));
        requests.flushSends();

Using JSON from Service Bundle (Internal)


    ServiceBundle serviceBundle = QBit.factory().createServiceBundle("/services");
    serviceBundle.addService(new TodoService());

    Todo todoItem = new Todo("call mom", "give mom a call",
                new Date());

    MethodCall<Object> addMethod = QBit.factory()
                .createMethodCallByAddress("/services/todo-manager/add", "client1",
                todoItem, null);

    serviceBundle.call(addMethod);

    MethodCall<Object> listMethod = QBit.factory()
                .createMethodCallByAddress("/services/todo-manager/list", "client1",
                null, null);
    serviceBundle.call(listMethod);
    serviceBundle.flushSends();
    //Handle returns
    ReceiveQueue<Response<Object>> responses = serviceBundle.responses().receiveQueue();
    Response<Object> response = responses.take();

HTTP Client fast Async part of QBIT

                    final HttpClient client = new HttpClientBuilder().setPort(port)
                            .setHost(host)
                            .setPoolSize(poolSize).setRequestBatchSize(batchSize).
                                    setPollTime(pollTime).build();
                    client.run();


                    client.sendHttpRequest(perfRequest);

                    client.flush();

                    client.stop();

HTTP Request Builder

        final HttpRequestBuilder httpRequestBuilder = new HttpRequestBuilder();

        final HttpRequest perfRequest = httpRequestBuilder
                                        .setContentType("application/json")
                                        .setMethod("GET").setUri("/perf/")
                                        .setResponse((code, mimeType, body) -> {
                                            if (code != 200 || !body.equals("\"ok\"")) {
                                                errorCount.increment();
                                                return;
                                            }

                                            receivedCount.increment();


                                        })
                                        .build();

        client.sendHttpRequest(perfRequest);

HTTP Client Builder

public class HttpClientBuilder {


    private String host = "localhost";
    private int port = 8080;
    private int poolSize = 5;
    private int pollTime = 10;
    private int requestBatchSize = 10;
    private int timeOutInMilliseconds=3000;
    private boolean autoFlush = true;

    public HttpClient build(){...}

HTTP Server

        final HttpServer server = new HttpServerBuilder()
                                    .setPort(port)
                                    .setHost(host)
                                    .setPollTime(pollTime)
                                    .setRequestBatchSize(batchSize)
                                    .setFlushInterval(flushRate)

                                    .setHttpRequestConsumer(request -> {

                                        if (request.getUri().equals("/perf/")) {
                                            request.getResponse()
                                            .response(200, "application/json",
                                            "\"ok\"");
                                        }
                                    }).build();


        server.start();

HTTP Server

  • Implementations in Vertx and Netty
  • Faster than Tomcat and Jetty (on benchmark tests I wrote)
  • Faster than Vertx alone on some tests

HTTP Server Builder

public class HttpServerBuilder {

    private String host = "localhost";
    private int port = 8080;
    private boolean manageQueues = true;
    private int pollTime = 100;
    private int requestBatchSize = 10;
    private int flushInterval = 100;

    public HttpServer build(){...}

Using callbacks 1

    public static interface MyServiceInterfaceForClient {

        void method1();

        void method2(String hi, int amount);

        void method3(Callback<String> handler, String hi, int amount);
    }

Using callbacks 2

        @RequestMapping("myService")
        class MyServiceClass implements SomeInterface {
            @Override
            public void method1() {

            }

            @Override
            public void method2(String hi, int amount) {

            }

            @Override
            public String method3(String hi, int amount) {
                return "Hi" + hi + " " + amount;
            }
        }

Using callbacks 3

       SomeInterface myService = new MyServiceClass();


        final Factory factory  = QBit.factory();
        final ServiceBundle bundle = factory.createServiceBundle("/root");


        bundle.addService(myService);
        bundle.startReturnHandlerProcessor();



        final MyServiceInterfaceForClient myServiceProxy = bundle.createLocalProxy(
                MyServiceInterfaceForClient.class,
                "myService");

Using callbacks 4

       Callback<String> returnHandler = new Callback<String>() {
            @Override
            public void accept(String returnValue) {

                puts("We got", returnValue);

                ok = "Hihi 5".equals(returnValue);

            }
        };
        myServiceProxy.method3(returnHandler, "hi", 5);
        bundle.flushSends();





