This section introduces OSGi™ Promises and Asynchronous Services. The
fractal example demonstrates these concepts along with the Service Fabric's asynchronous load balancer service. The
fractal example also provides the opportunity to demonstrate System Versioning, and the Service Fabric's update and rollback capabilities.
The source code reviewed in the next section is available at https://github.com/paremus/async-examples
Introducing OSGi™ Promises and Asynchronous Services
OSGi™ Enterprise R6 includes the Promise interface, which can be implemented on any version of Java. Promises:
- Abstract the user from calls with high latency.
- Provide a simple and consistent mechanism for error handling.
- Represent units of work, and can be flexibly sequenced using Monadic programming techniques (see https://en.wikipedia.org/wiki/Monad_(functional_programming) ).
When properly organised and sequenced, Promises can be used to control program flow without explicit branching, making code more readable and business logic easier to identify (see https://www.infoq.com/interviews/tim-ward-osgi-promises).
As will now be demonstrated, OSGi™ Promises are simple to use and offer the same powerful, composable functional behaviours that you might expect from other reactive asynchronous frameworks, e.g. RxJava or Akka. However, unlike these alternatives, OSGi™ Promises are part of an increasingly rich ecosystem of OSGi specifications and implementations.
While the Java Future interface went some way toward addressing this generic need, futures lack the ability to register callbacks that notify the program when an asynchronous result is available. Java 8 improved support for Promise-like behaviours via the CompletableFuture type, but its API is complex and difficult to use for many simple use cases.
Obtaining a Promise
Creating a Promise is very simple. The Promise API includes a default implementation that can be created using the
Deferred type. This can be used to create, and later resolve (i.e. set the value of) or fail (i.e. set an exception for) a Promise.
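For illustration, a minimal sketch of this pattern might look as follows (the service field and its calculateDifficultSum method are assumptions introduced for the example, not part of the Promise API):

```java
import org.osgi.util.promise.Deferred;
import org.osgi.util.promise.Promise;

public Promise<Long> calculateDifficultSumAsync(final long n) {
    // Create a Deferred, which supplies the Promise handed back to the caller
    final Deferred<Long> deferred = new Deferred<>();
    new Thread(() -> {
        try {
            // The high-latency call we actually care about
            deferred.resolve(service.calculateDifficultSum(n));
        } catch (Exception e) {
            // Manual error handling: fail the Promise with the exception
            deferred.fail(e);
        }
    }).start();
    return deferred.getPromise();
}
```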
Whilst the above code is simple, it has a few disadvantages:
- The code creating the promise is responsible for its own thread management, and there is no reuse of the thread.
- Error management is manual, hence the ugly try/catch block.
- The code is quite verbose, hiding the fact that what we really care about is the call to calculateDifficultSum.
- Every client has to reproduce this code pattern if they want to use the service asynchronously.
Ideally, we want a way to return a Promise directly from the service without the client having to worry about how this happens. This abstraction is provided by the Async Service.
The Async Service
The Async Service is another specification in OSGi Enterprise R6: see http://www.china.osgiusers.org/wiki/uploads/Main/JavaOne2014-BuildingSystemsWithAsynchronousMicroservices.pdf. Async Services provide a mechanism for transforming any OSGi service call into an asynchronous call represented by a Promise. This is accomplished through the use of an Asynchronous Mediator, which behaves a little like a `mock` that you might use in testing. The client code creates a mediator object, records a method call using the mediator, and then starts the execution using the Async Service. This is very similar to the create/use/verify pattern used by test mocks.
Using Asynchronous Services, the updated calculateDifficultSum now has the following form:
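A sketch of what this might look like (the DifficultService type and the injection of the Async service are assumptions for the example):

```java
import org.osgi.service.async.Async;
import org.osgi.util.promise.Promise;

// The Async service, typically injected by Declarative Services
Async async;

public Promise<Long> calculateDifficultSumAsync(final long n) {
    // 1. Create a mediator for the target service
    DifficultService mediated = async.mediate(service, DifficultService.class);
    // 2. Record the call on the mediator, then start asynchronous execution
    return async.call(mediated.calculateDifficultSum(n));
}
```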
This is significantly more compact than using the
Deferred object, avoids thread lifecycle management, and prevents mistakes in error handling. The Async Service also has the significant advantage that it transparently delegates to services that implement the
AsyncDelegate interface. In this case the service is given the opportunity to optimise the asynchronous call. For example if the service is remote, and the distribution provider supports it, then the invocation request can be sent without blocking for the return value, returning a
Promise directly instead.
These techniques are now demonstrated in the Fractal example.
The Fractal Viewer
The fractal viewer is a simple REST application, expressed in the fractal system as a system.part. The viewer has an AngularJS front-end and a pair of JAX-RS services: one service is used to populate the front-end form, the other is used to render fractal data on the server and return the data to the client. The fractal viewer allows different fractal equations to be plugged in using the OSGi™ Whiteboard Pattern. Equation services are then offered to the client for rendering requests. A similar pattern is used to provide different colour schemes for the rendering.
Fractal Render implementation
In a synchronous implementation the viewer might generate sections of the image and deliver them as Server Sent Events. This improves the responsiveness of the user interface. However, the viewer generates and transmits each section of the fractal image in sequence using the remote workers.
This synchronous code looks as follows:
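The original code is not reproduced here, but a representative sketch of such a synchronous loop might look like this (the worker service, ImageSection and RenderedImage types, the toJson helper, and the JAX-RS SSE eventSink/sse objects are illustrative assumptions):

```java
// Each section is rendered in turn; the loop blocks on every remote call
for (ImageSection section : sections) {
    // Synchronous remote invocation - waits for the rendered data
    RenderedImage data = worker.render(section);
    // Deliver the completed section to the browser as a Server Sent Event
    eventSink.send(sse.newEvent(toJson(data)));
}
// Signal that the image is complete
eventSink.send(sse.newEvent("complete"));
```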
As shown, this code makes no attempt to reduce the overall waiting time, but it could do so by running the executions in parallel. However, managing parallel executions by hand can be difficult and error-prone.
This becomes much simpler when using OSGi™ promises and asynchronous primitives.
Updating the renderer
As shown, the above renderer code can be made asynchronous very easily:
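A hedged sketch of the asynchronous version, using the same assumed names as the synchronous sketch plus a FractalWorker service interface:

```java
// Mediate the worker so that each render call returns immediately
FractalWorker mediated = async.mediate(worker, FractalWorker.class);

for (ImageSection section : sections) {
    // Asynchronous remote invocation - returns a Promise immediately
    Promise<RenderedImage> promise = async.call(mediated.render(section));
    // When the section resolves, deliver it as a Server Sent Event
    promise.then(p -> {
        eventSink.send(sse.newEvent(toJson(p.getValue())));
        return null;
    });
}
// Problem: nothing stops this event being sent before any rendering finishes!
eventSink.send(sse.newEvent("complete"));
```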
This isn't quite enough, however. Now that the rendering is asynchronous there's nothing to stop our code from sending the final event before any rendering has actually finished! To address this we make use of two things.
Firstly, the then() method returns a Promise. This Promise is a "chained" promise, whose value depends upon the result of its parent and on the result of the Success callback. The chained Promise only resolves when the Success callback has completed and the Promise that the callback returns has resolved (or immediately afterwards if the callback returns null).
Secondly we can make use of the Promises utility class to create a "latch" Promise. The latch Promise resolves when all of the Promises used to create it have resolved.
The resulting code now looks like this:
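An illustrative sketch of the latch pattern, again using the assumed names from the earlier sketches; the latch itself is created with the Promises utility class:

```java
import java.util.ArrayList;
import java.util.List;
import org.osgi.util.promise.Promise;
import org.osgi.util.promise.Promises;

List<Promise<Void>> sent = new ArrayList<>();

FractalWorker mediated = async.mediate(worker, FractalWorker.class);
for (ImageSection section : sections) {
    Promise<RenderedImage> promise = async.call(mediated.render(section));
    // The chained Promise resolves once this section has been delivered
    sent.add(promise.then(p -> {
        eventSink.send(sse.newEvent(toJson(p.getValue())));
        return null;
    }));
}
// The latch resolves only when every section has been rendered and sent
Promises.all(sent).then(p -> {
    eventSink.send(sse.newEvent("complete"));
    return null;
});
```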
The synchronous and asynchronous implementations of the renderer are included in the system.part. To see the performance difference between these implementations when running the Fractal example below, try checking the `Run Asynchronously` checkbox and then clicking `Display Fractal` or zooming in.
The Fabric's Replication Handler & Asynchronous Load Balancer
The performance of the application, particularly under load, will be limited by local CPU resource and the threads available to the Async Service. So in addition to the use of Promises and Asynchronous services we need to:
1. Move the computational load to remote workers.
2. Scale out those computational workers.
3. Load balance across that population.
Of these, (1) and (2) are achieved in the same way as with the distributed Hello examples in the introductory tutorial. Specifically:
- The rendering API is exported as a remote service (`property name="service.exported.interfaces" value="com.paremus.demo.fractal.api.Equation"`).
- A fixed replication handler is included in the system.part to allow manual scaling of the number of worker instances.
As shown, the
fixed replication handler has a default of zero instances.
Having scaled out multiple workers we need some mechanism to allow the viewer to load-balance across them. Rather than complicating the implementation of the viewer with load-balancer logic, we can instead compose the Service Fabric's runtime load-balancer into the system.
As shown in the following extract, the load-balancer is configured for each exported interface.
The Fractal System
The completed fractal system example, including runtime load-balancing and worker scale-out behaviours, is shown.
Running the Fractal Example
Let's run the examples!
Having created a Fabric, import the following two systems:
These systems are identical with the exception that the calculation engine in version
1.0.1 deliberately introduces algorithmic noise.
As shown, once imported, the revisions of the system (i.e. 1.0.0 and 1.0.1) are displayed under the same shared system name. Deploying version 1.0.0 of the fractal system results in the viewer and a single instance of a worker system.part being hosted by different fibres.
As shown, version 1.0.0 is now highlighted in blue on the main system page, indicating that this is the version of com.paremus.example.fractal-system that is currently deployed.
The resultant fractal URL is registered in Entire.
Clicking on the fractal URL will open the fractal example in a new browser window.
By default the fractal example is synchronous. This can be seen from the sequenced responses from the remote worker. To see the re-rendering process, either:
- Select a different equation.
- Use a different colour scheme.
- Or use a different version of the fractal system.
To make the example asynchronous, select the `Run Asynchronously` option and re-run. The throughput of the fractal system is now seen to be significantly higher; the worker engine sends the results back to the viewer as soon as the calculations have completed, and these are immediately displayed by the viewer.
System Update & Roll Back
Now return to the fractal system view, and deploy version 1.0.1 of the example. The first point to note is that version 1.0.0 is now marked as undeployed.
Looking at the
fractal example, we see that the
worker has now changed to the version that includes an algorithmic noise component. Go to the
fractal URL and select
Display Fractal to see the effect.
To back out this change, simply re-deploy version 1.0.0.
The fractal workers may be horizontally scaled across more fibres by changing the scaling value assigned to the replication handler associated with the workers; i.e. `com.paremus.example.fractal-system/workers.replication`. This property is located below the runtime topology view.
As shown, in response the workers are scaled across additional fibres.
Ensure that the load-balanced version of the equation is used and that `Run Asynchronously` is selected: the rendering throughput of the fractal application will be seen to increase by the same factor as the workers were horizontally scaled.
Try adding a few additional fibres (at least one!) to your Service Fabric, and then reconfigure the fractal system to deploy additional worker instances. You should see that adding progressively more workers makes rendering faster, and that removing them makes it slower again.