Automated provisioning of JMS resources in Java EE 7

JMS 2.0 (part of the Java EE 7 Platform) introduced lots of nice features. One of these was the ability to declare JMS resources for automatic deployment.

Pre Java EE 7

  • Inject Connection Factory using @Resource
  • Lookup Destination (Queue/Topic) using @Resource
  • Pull out the Session object and use it to create the Message, Message Producer and send the message

Most importantly, you had to make sure that the resources, i.e. the Connection Factory and the physical destinations, were configured in your application server in advance.
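For reference, the pre-Java EE 7 sending logic typically looked something like this (a minimal sketch; the JNDI names are made up and would have to match resources configured in your application server beforehand):

```java
import javax.annotation.Resource;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class LegacySender {

    @Resource(lookup = "jms/MyConnectionFactory") // pre-provisioned resource (hypothetical name)
    ConnectionFactory factory;

    @Resource(lookup = "jms/MyQueue") // pre-provisioned destination (hypothetical name)
    Queue queue;

    public void send(String payload) throws Exception {
        Connection connection = null;
        try {
            connection = factory.createConnection();
            // pull out the Session and use it to create the message and producer
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage(payload);
            producer.send(message);
        } finally {
            if (connection != null) {
                connection.close(); // also closes the Session and MessageProducer
            }
        }
    }
}
```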

In the Java EE 7 era ….

You can leverage JMS 2.0 goodies

  • Use injected JMS Context (in most of the cases) to ease the sending process with less boilerplate code
  • Most importantly, you can declaratively configure auto provisioning of JMS Resources using annotations or deployment descriptors
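As a quick taste of the simplified API, here is a sketch of sending a message with an injected JMSContext (the queue's JNDI name is hypothetical):

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Queue;

@Stateless
public class SimpleSender {

    @Inject
    JMSContext context; // container-managed; no Connection/Session handling needed

    @Resource(lookup = "jms/MyQueue") // hypothetical JNDI name
    Queue queue;

    public void send(String payload) {
        // one line instead of the old Connection/Session/MessageProducer dance
        context.createProducer().send(queue, payload);
    }
}
```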

Let’s look at the new JMS 2.0 annotations in action. You can also pick up this Maven project on Github and deploy it in your favourite IDE

@JMSConnectionFactoryDefinition, @JMSConnectionFactoryDefinitions

Used to declare one or more connection factories

@JMSDestinationDefinition, @JMSDestinationDefinitions

Used to declare one or more physical destinations (queues or topics)

Oh and you can also use XML

These can be a part of the web deployment descriptor (web.xml) or the EJB deployment descriptor (ejb-jar.xml)
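For illustration, a hypothetical web.xml fragment (Java EE 7 schema) might look like this — the names are examples, and the exact element names should be verified against the javaee_7 deployment descriptor schema:

```xml
<!-- sketch of a JMS destination definition in web.xml (Java EE 7) -->
<jms-destination>
    <name>java:app/jms/MyQueue</name>
    <interface-name>javax.jms.Queue</interface-name>
    <destination-name>myPhysicalQueue</destination-name>
</jms-destination>
```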

Possible variations

There are several ways to use this feature

  • Declare your JMS resources using a @Startup powered @Singleton EJB
  • You can also declare it on a Servlet or any CDI managed bean for that matter
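Putting it together, here is a sketch of a @Startup-powered @Singleton EJB declaring both a connection factory and a queue (all JNDI and physical names here are made up):

```java
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.jms.JMSConnectionFactoryDefinition;
import javax.jms.JMSDestinationDefinition;

// Hypothetical names; the container provisions both resources at deployment time
@JMSConnectionFactoryDefinition(name = "java:app/jms/MyConnectionFactory")
@JMSDestinationDefinition(
        name = "java:app/jms/MyQueue",
        interfaceName = "javax.jms.Queue",
        destinationName = "myPhysicalQueue")
@Singleton
@Startup
public class JMSResourceConfig {
    // no logic required - the annotations alone trigger auto provisioning
}
```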

What’s the point of all this ?

The container/Java EE application server makes sure that the JMS artefacts are available to your application logic on-demand

  • It’s valuable in PaaS, microservices, dockerized and other environments which heavily leverage automated deployments
  • Good for automated testing
  • It’s one less item to think about and configure!

Additional resources


Posted in Java, Java EE

Java WebSocket API: difference b/w Endpoint and RemoteEndpoint

If you encounter the Endpoint and RemoteEndpoint artefacts from the Java WebSocket API for the first time, you might think they represent the same concept or you might even guess that they are hierarchical in nature. It is not the case.


Endpoint: the class

javax.websocket.Endpoint is a simple (abstract) class which represents a WebSocket endpoint itself – either a server or a client endpoint. The Java WebSocket API itself provides both annotation and programmatic APIs to develop and design endpoints. An annotation based endpoint can be developed with the help of the following annotations

  • @ServerEndpoint or @ClientEndpoint
  • @OnOpen
  • @OnMessage
  • @OnClose and
  • @OnError
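For illustration, a minimal annotated server endpoint might look like this (the /echo path is just an example):

```java
import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/echo") // hypothetical path
public class AnnotatedEchoEndpoint {

    @OnMessage
    public String onMessage(String message) {
        // a String return value is sent back to the peer automatically
        return "echo: " + message;
    }
}
```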

When you deploy an annotated WebSocket endpoint using Tyrus (the WebSocket Reference Implementation), it internally creates an instance of org.glassfish.tyrus.core.AnnotatedEndpoint, which extends Endpoint (and implements the abstract onOpen method)


In case you want to use the programmatic API, you would need to extend the Endpoint class and override the (abstract) onOpen method yourself.
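A sketch of the programmatic equivalent (an echo endpoint, for illustration):

```java
import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.MessageHandler;
import javax.websocket.Session;

public class ProgrammaticEchoEndpoint extends Endpoint {

    @Override
    public void onOpen(final Session session, EndpointConfig config) {
        // register a handler for whole text messages on this session
        session.addMessageHandler(new MessageHandler.Whole<String>() {
            @Override
            public void onMessage(String message) {
                session.getAsyncRemote().sendText("echo: " + message);
            }
        });
    }
}
```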

RemoteEndpoint: the interface

WebSocket is just a protocol with which two parties (client and server) communicate. javax.websocket.RemoteEndpoint is an abstraction which represents the entity at the other end. It is available in two avatars

  • Synchronous: RemoteEndpoint.Basic
  • Asynchronous: RemoteEndpoint.Async

A RemoteEndpoint instance is encapsulated within the javax.websocket.Session object and can be obtained using the getBasicRemote or getAsyncRemote methods (for sync and async operation respectively). Here is the Tyrus implementation – org.glassfish.tyrus.core.TyrusRemoteEndpoint
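A quick sketch contrasting the two (the method names are from the standard API; the surrounding class is just scaffolding):

```java
import java.io.IOException;
import java.util.concurrent.Future;
import javax.websocket.Session;

public class SendExamples {

    // Synchronous: blocks until the message has been written
    void sendSync(Session session) throws IOException {
        session.getBasicRemote().sendText("hello (blocking)");
    }

    // Asynchronous: returns immediately with a Future to track completion
    Future<Void> sendAsync(Session session) {
        return session.getAsyncRemote().sendText("hello (non-blocking)");
    }
}
```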


Posted in Java, Java EE

Implementing auto retry in Java EE applications

Initially, I wanted to call this blog – ‘Flexible timeouts with interceptor driven retry policies‘ – but then I thought it would be too ‘heavy’. This statement, along with the revised title should (hopefully) give you an idea of what this post might talk about ;-)

The trigger

This post is primarily driven by a comment/question I received on one of my earlier posts, which briefly discussed timeout mechanisms and how they can be used to define ‘concurrency policies’ for Stateful and Singleton EJBs.


The Problem

While timeouts are a good way to enforce concurrency policies and control resource allocation/usage in the EJB container, a problem arises when the timeouts are inconsistent and unpredictable. How do you configure your timeout policy then ?

Of course, there is no perfect solution. But one of the workarounds which popped into my mind was to ‘retry‘ the failed method (this might not be appropriate or possible for your given scenario, but can be applied if the use case permits). This is a good example of a ‘cross-cutting‘ concern or, in other words, an ‘aspect‘. The Java EE answer for this is – Interceptors. These are much better than the default ‘rinse-repeat-until-xyz with a try-catch block‘ approach because of

  • code reuse
  • flexibility

The gist (of the solution)

Here is the high level description (code available on Github)

  • Define a simple annotation which represents the ‘retry policy metadata’ e.g. number of retries


  • Define an interceptor with implementation to retry the target method – this would use the above mentioned ‘retry policy’ metadata and behave accordingly


  • Attach this interceptor to the required method (caller)


  • Optionally, use @InterceptorBinding
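Here is a rough sketch of what such an annotation and interceptor pair could look like (the names @RetryPolicy and RetryInterceptor are illustrative; the actual code is on Github):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

// hypothetical annotation carrying the retry policy metadata
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
@interface RetryPolicy {
    int retries() default 3;
}

// attach to a method via @Interceptors(RetryInterceptor.class), or via @InterceptorBinding
public class RetryInterceptor {

    @AroundInvoke
    public Object retry(InvocationContext ctx) throws Exception {
        RetryPolicy policy = ctx.getMethod().getAnnotation(RetryPolicy.class);
        int attempts = (policy != null) ? policy.retries() : 1;
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return ctx.proceed(); // invoke the target method
            } catch (Exception e) {
                last = e; // failed - retry until attempts are exhausted
            }
        }
        throw last;
    }
}
```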

The sample code

  • Uses a Singleton EJB to simulate a sample service and introduces latency via the obvious Thread.sleep() [which of course is forbidden inside a Java EE container]
  • Uses a JAX-RS resource which injects and calls the Singleton EJB and is a candidate for ‘retry’ as per a ‘policy’
  • Can be tested by deploying on any Java EE (6 or 7) compatible server and using Apache JMeter to simulate concurrent clients/requests (Invoke HTTP GET on http://serverip:port/FlexiTimeouts/test)

Without the retry (interceptor) configuration, the tests (for concurrent requests) will result in an HTTP timeout (408).


Once the retry interceptor is activated, there will be some latency because the task will be automatically retried when it fails. This of course will depend on the volume (of concurrent requests), and the threshold would need to be tuned accordingly – a higher threshold for a highly concurrent environment (usually, not ideally)

Additional thoughts

  • It is not mandatory to have the threshold or retry policy defined in the code. It can be externalised as well (to make things more flexible) e.g. use the @RetryPolicy to point to a file which contains required policy metadata
  • A Retry Threshold is not the only configurable attribute. You can have other criteria and use it in your Interceptor logic
  • One can expose statistics related to success/failure/retries. It’s better to do this in an async fashion (push it to JMX via an @Async EJB?) so that it does not hamper the Interceptor performance itself


Posted in Java, Java EE

Basics of scaling Java EE applications

To be honest, ‘scalability’ is an exhaustive topic and generally not well understood. More often than not, it’s assumed to be the same as High Availability. I have seen both novice programmers and ‘experienced’ architects suggest ‘clustering‘ as the solution for scalability and HA. There is actually nothing wrong with it, but the problem is that it is often done by googling rather than actually understanding the application itself ;-)

I do not claim to be an ‘expert’, just by writing this post ;-) It just (briefly) lays out some strategies for scaling Java EE applications in general.

The problem…

Scalability is not a standardized component within the Java EE Platform specification. The associated techniques are mostly vendor (application server) specific and often involve using more than one product (apart from the app server itself). That’s why architecting Java EE applications to be scalable can be a little tricky. There is no ‘cookbook’ to do the trick for you. One really needs to understand the application inside out.

Types of Scaling

I am sure it is not the first time you are reading this. Generally, scaling is classified into two broad categories – Scale Up, Scale Out

The first natural step towards scaling, is to scale up

  • Scaling Up: This involves adding more resources to your servers e.g. RAM, disk space, processors etc. It is useful in certain scenarios, but will turn out to be expensive after a particular point and you will discover that it’s better to resort to Scaling Out

  • Scaling Out: In this process, more machines or additional server instances/nodes are added. This is also called clustering because all the servers are supposed to work together in unison (as a Group or Cluster) and should be transparent to the client.

High Availability != Scalability

Yes! Just because a system is Highly Available (by having multiple server nodes to fail over to) does not mean it is scalable as well. HA just means that if the current processing node crashes, the request would be passed on or failed over to a different node in the cluster so that it can continue from where it started – that’s pretty much it! Scalability is the ability to improve specific characteristics of the system (e.g. number of users, throughput, performance) by increasing the available resources (RAM, processor etc.). Even if the failed request is passed on to another node, you cannot guarantee that the application will behave correctly in that scenario (read on to understand why)

Let’s look at some of the options and related discussions

Load Balance your scaled out cluster

Let’s assume that you have scaled up to your maximum capacity and now you have scaled out your system by having multiple nodes forming a cluster. Now what you would do is put a Load Balancer in front of your clustered infrastructure so that you can distribute the load among your cluster members. Load balancing is not covered in detail since I do not have too much insight except for the basics :-) But knowing this is good enough for this post


Is my application stateless or stateful ?

Ok, so now you have scaled out – is that enough ? Scaling out is fine if your application is stateless i.e. your application logic does not depend on existing server state to process a request e.g. a RESTful API back end over JAX-RS, a messaging-based application exposing remote EJBs as the entry point which uses JMS in the background etc.

What if you have an application which has components like HTTP session objects, Stateful EJBs, Session scoped beans (CDI, JSF) etc. ? These are specific to a client (to be more specific, the calling thread), store specific state and depend on that state being present in order to be able to execute the request e.g. an HTTP session object might store a user’s authentication state, shopping cart information etc.

In a scaled out or clustered application, subsequent requests might be served by any node in the cluster. How will another node handle the request without the state data which was created in the JVM of the instance to which the first request was passed?



Hello Sticky Sessions!

Sticky Session configuration can be done on the load balancer level to ensure that a request from a specific client/end user is always forwarded to the same instance/application server node i.e server affinity is maintained. Thus, we alleviate the problem of the required state not being present. But there is a catch here – what if that node crashes ? The state will be destroyed and the user will be forwarded to an instance where there is no existing state on which the server side request processing depends.


Enter Replicated Clustering

In order to resolve the above problem, you can configure your application server clustering mechanism to support replication for your stateful components. By doing this, you can ensure that your HTTP session data (and other stateful objects) are present on all the server instances. Thus the end user request can be forwarded to any server node now. Even if a server instance crashes or is unavailable, any other node in the cluster can handle the request. Now, your cluster is not an ordinary cluster – it’s a replicated cluster


Cluster replication is specific to your Java EE container/app server and its best to consult its related documentation on how to go about this. Generally, most application servers support clustering of Java EE components like stateful and stateless EJBs, HTTP sessions, JMS queues etc.
This creates another problem though – now each node in the application server holds session data, resulting in more JVM heap usage and hence more garbage collection. There is also processing power spent on the replication itself

External store for stateful components

This can be avoided by storing session data and stateful objects in another tier. You can do so using an RDBMS. Again, most application servers have inbuilt support for this.


If you notice, we have moved the storage from an in-memory tier to a persistent tier – at the end of the day, you might end up facing scalability issues because of the Database. I am not saying this will happen for sure, but depending upon your application, your DB might get overloaded and latency might creep in e.g. in case of a fail over scenario, think about recreating the entire user session state from the DB for use within another cluster instance – this can take time and affect end user experience during peak loads

Final frontier: Distributed In-Memory Cache

It is the final frontier – at least in my opinion, since it moves us back to the in-memory approach. You can’t get better than that! Products like Oracle Coherence, Hazelcast or any other distributed caching/in-memory grid product can be used to offload the stateful storage and replication/distribution – this is nothing but a Caching Tier. The good part is that most of these products support HTTP session storage as a default feature


This kind of architectural setup means that application server restarts do not affect existing user sessions – it’s always nice to patch your systems without downtime and end user outage (not as easy as it sounds but definitely an option!). In general, the idea is that the app tier and web session caching tier can work and scale independently and not interfere with each other.


Distributed vs Replicated caching

There is a huge difference between these two terms and it’s vital to understand the distinction in terms of your caching tier. Both have their pros and cons

  • Distributed: Members of the cache share data i.e. the data set is partitioned among cache cluster nodes (using a product specific algorithm)

  • Replicated: All cache nodes have ALL the data i.e. each cache server contains a copy of the entire data set.

Further reading (mostly Weblogic specific)

Before I sign off…

  • High/Extreme Scalability might not be a requirement for every Java EE application out there. But it will be definitely useful to factor that into your design if you are planning on building internet/public facing applications
  • Scalable design is a must for applications which want to leverage the Cloud Platforms (mostly PaaS) like automated elasticity (economically viable!) and HA
  • It’s not too hard to figure out that stateful applications are often more challenging to scale. Complete ‘statelessness’ might not be possible, but one should strive towards that

Feel free to share tips and techniques which you have used to scale your Java EE apps.


Posted in Java, Java EE

JAX-RS and JSON-P integration

This short post talks about support for JSON-P in JAX-RS 2.0


The JSON Processing API (JSON-P) was introduced in Java EE 7. It provides a standard API to work with JSON data and is quite similar to its XML counterpart – JAXP. JSON-B (JSON Binding) API is in the works for Java EE 8.

Support for JSON-P in JAX-RS 2.0

JAX-RS 2.0 (also a part of Java EE 7) has out-of-the-box support for JSON-P artifacts like JsonObject, JsonArray and JsonStructure i.e. every JAX-RS 2.0 compliant implementation will provide built in Entity Providers for these objects, making it seamless and easy to exchange JSON data in JAX-RS applications

Some examples

Sending JSON array from your JAX-RS resource methods
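Something along these lines (a hypothetical resource; the JsonArray return value is handled by the built-in entity provider):

```java
import javax.json.Json;
import javax.json.JsonArray;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("books") // hypothetical resource
public class BookResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public JsonArray all() {
        // serialized to a JSON array by the standard JSON-P entity provider
        return Json.createArrayBuilder()
                   .add("Java EE 7 Essentials")
                   .add("RESTful Java with JAX-RS 2.0")
                   .build();
    }
}
```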

Here is another example of how you can accept a JSON payload from the client
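A sketch of such a resource method (again, names are hypothetical):

```java
import javax.json.JsonObject;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("orders") // hypothetical resource
public class OrderResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response create(JsonObject payload) {
        // the built-in entity provider parses the request body into a JsonObject
        String item = payload.getString("item", "unknown");
        return Response.ok("received order for " + item).build();
    }
}
```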

These are pretty simple examples, but I hope you get the idea….

Few things to be noted

  • No need to write custom MessageBodyReader or MessageBodyWriter implementations. As mentioned previously, the JAX-RS implementation does it for you for free !

  • This feature is not the same as being able to use JAXB annotations on POJOs and exchange JSON versions of the payload (by specifying the application/json media type). This is not a standard feature yet, although I have experimented with this and observed that GlassFish 4.1 (Jersey) and Wildfly 8.x (RESTEasy) support this by default

Further reading


Posted in Java, Java EE

The ‘Java Caching’ Refcard is live on DZone !

Happy to announce the availability of the Java Caching Refcard on DZone :-)

The Refcard is not the holy grail of everything related to caching using Java. It primarily focuses on a couple of areas

  • JCache API foundations: The idea was to give a good enough overview of the JCache API in order to enable developers to get up and running with it quickly
  • Caching strategies: Discusses some of the common caching techniques and strategies


Actually, it went live on September 28 and managed to generate healthy interest [ suggested by ~6000 downloads so far – not bad for my first Refcard ;-) ]

I hope the readers are able to extract some value out of this. If you have not read it yet, please do try it out. As always, feedback would be appreciated.


Posted in Caching, Java

Native CDI Qualifiers: @Any and @Default

Let’s take a look at the out-of-the-box qualifiers in CDI

There are three qualifiers declared by the CDI specification – @Any, @Default, @New

  • @Any: Think of it as an omnipresent qualifier. It’s there even if it’s not ;-)

  • @Default: As the name suggests, this qualifier is treated as the default when no other qualifier has been specified. The only exception to this rule is when the @Named (javax.inject) qualifier is used as well

  • @New: Used to obtain a new instance of a bean on-demand. The new instance is scope independent. This has been deprecated since CDI 1.1

Here are some simple examples

Qualifiers at Bean (class) level

Qualifiers at Injection point
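For illustration, here is a sketch covering both levels (the Greeter types are made up):

```java
import javax.enterprise.inject.Any;
import javax.enterprise.inject.Default;
import javax.enterprise.inject.Instance;
import javax.inject.Inject;

interface Greeter {
    String greet(String name);
}

// @Any and @Default are both implicit here - spelling them out changes nothing
@Any
@Default
class ConsoleGreeter implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

class GreetingClient {

    @Inject // resolves to the @Default bean
    Greeter greeter;

    @Inject
    @Any // gives access to ALL Greeter beans, regardless of their qualifiers
    Instance<Greeter> allGreeters;

    void greetEveryone() {
        for (Greeter g : allGreeters) { // dynamic, run time lookup
            System.out.println(g.greet("duke"));
        }
    }
}
```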

What’s so special about @Any ?

As stated earlier, the @Any qualifier is omnipresent i.e. it is always there, no matter what. The interesting part is that if you explicitly mention this annotation, it opens up the following options

  • You have access to all possible implementations of a bean

  • It does not suppress the default bean (if any) or any of the explicit (qualified) implementations. You can still look them up dynamically (at run time)

That’s all for a quickie on default CDI qualifiers. You might want to check out one of my earlier posts on basics of custom qualifiers in CDI


Posted in Java, Java EE

Introductory NoSQL stuff

While some of you might be NoSQL gurus, there is often a lack of solid knowledge about NoSQL in general and some common myths as well. Specifically, topics like NoSQL applicability/use cases and its comparison (fair and unfair) against relational databases are often driven by incomplete knowledge. I do not claim to be an expert in the NoSQL domain, but I guess it might be useful to jot down what ever little I know about NoSQL (in general). Someone looking at NoSQL from the view point of a (relative) beginner can benefit from this post (maybe?). So let’s dive in.

What’s in a name ? NoSQL …..

It does not mean ‘No SQL’ (see how a white space can screw things up)! That’s far from the truth. It actually means ‘Not Only SQL’ ! See the difference ? Even NoSQL based technologies use some kind of (mostly proprietary) language to ‘query’ against their native store – it is not very different from a Structured Query Language then !

Ok, so it’s ‘Not Only SQL’. But….

What are its main characteristics ?

It’s hard to list down all the specific attributes of a NoSQL solution with precision. Some solutions might or might not support all of these properties, but by and large, these are the most common ones

No schema

This is perhaps the most fundamental attribute of all. NoSQL solutions do not have the notion of a schema (which is nothing but a bunch of metadata about your data containers – tables and columns)

Distributed by nature

Most of the NoSQL solutions support a distributed architecture where the data itself is partitioned and load balanced (entirely different terms!) across multiple instances


BASE

It is an abbreviation for ‘Basically Available, Soft state, Eventual consistency’. Well, I do not want to go into its detail [ because I do not completely understand it ! ;-) ]. But what I do know for a fact is that BASE (for NoSQL) vs ACID (for RDBMS) is often a major point of debate

Types of NoSQL solutions

These are the most common categories/types/variants of NoSQL solutions

  • Key-Value pair: Stores data as key-value pairs where the value has no fixed representation e.g. Redis, Oracle NoSQL
  • Document store: Stores documents (XML, JSON, BSON etc.) as values e.g. MongoDB, Couchbase
  • Graph: Stores information in a graph-like structure e.g. Neo4j, OrientDB
  • Column based: Stores data in columns rather than rows e.g. Apache Cassandra

…. or maybe a combination of one or more of the above !

Ok, so why should I choose a NoSQL solution over my good old RDBMS ?

Alright, so this is the right moment to talk about some differences. Hopefully I can get these approximately correct if not 100% accurate


Scalable

It’s easy (at least theoretically) to add additional nodes/instances of a NoSQL data store to meet the increasing demands of your application. This is made possible by the fact that NoSQL solutions are designed to work well in a distributed fashion.


Flexible

This is related to the ‘No Schema’ characteristic. NoSQL solutions are flexible in the sense that they either have no schema at all or they allow relatively unstructured data to be stored without tinkering with the administration side of things (e.g. evolving the schema in case of an RDBMS)

Highly Available

This might sound silly at first. You might say, anything (including an RDBMS) can be made redundant (highly available) by adding more instances. That’s true. With NoSQL solutions, it’s just much easier to do this since they are (generally) designed (from the ground up) with extreme scaling in mind, which automatically makes them highly available – if one node fails, your application does not halt. The data gets re-distributed (re-partitioned) among the remaining nodes and the show goes on.


Performant

The performance of a distributed NoSQL solution shines in problem domains involving large data sets since it can be scaled horizontally (by adding more nodes)

More suitable for the Cloud

In my opinion, Cloud computing (especially PaaS) services are about elastic scaling (cost effective management of resources where your instances increase/decrease based on policies which are further based on factors like load/volume/time etc.), easier setup & provisioning along with a smooth upgrade/patching process. NoSQL solutions fit the bill perfectly (I am pretty certain, at least from the scaling point of view)

Cost effective

Again, it’s all about horizontal scaling and not vertical scaling. Horizontal scaling means spinning up more instances (much cheaper) rather than upgrading the hardware of a single machine (which can get costly after a certain point)

Ok.. so ‘No Caveats’ with NoSQL ?

Absolutely not! As with any technology, there are pros and cons, even when its usage is a perfect fit for your use case

  • For RDBMS purists, Eventual Consistency of a NoSQL solution is not good enough. Lack of ACID properties is often cited as the top most drawback of NoSQL stores in specific use cases/domains.
  • Heterogeneous products and lack of standards: There has been an explosion of NoSQL solutions. Although many of the basic concepts and characteristics remain the same, learning NoSQL solutions from different vendors makes for a steep learning curve! This is because there is no specific standard/API around this technology yet (at least I have not seen one)
  • Relatively new: does not sound like a serious caveat, but it can take time for teams to ramp up with the technology as compared to RDBMS (which has been around for decades!)

When should I use a NoSQL solution?

I have no personal experience of implementing a NoSQL solution in production, but from a common sense perspective, this is what I think.
The best answer, as you all know, is ‘it depends’ ;-) Well, maybe not ? Whether you are thinking about NoSQL vs RDBMS or comparing various NoSQL offerings, you should look at your use case and then take things from there. If you need ACID properties, avoid NoSQL. If you have large data sets and the type of data is non-relational in nature, it’s better to leverage NoSQL and its scalability properties. As far as choosing from a graph, key-value, document or column based NoSQL store is concerned, the answer (or maybe the question) still remains the same – ‘what does your use case require ?’

Curious? Explore !


Posted in NoSQL

New in JMS 2.0 . . .

This post lists ALL of the new APIs (interfaces/classes/annotations etc.) introduced in JMS 2.0 (part of the Java EE 7 platform). These have been categorized as follows

  • API simplification
  • Ease of use
  • Exception Handling
  • Miscellaneous

Here is a quick summary along with some code snippets

API simplification


JMSContext

A simpler abstraction on top of the Connection and Session objects which eliminates the need for interacting with these classes/interfaces in order to send/receive messages.


@JMSConnectionFactory

Used during JMSContext injection to specify the JNDI name of the JMS ConnectionFactory

JMSProducer and JMSConsumer

As the name suggests, a JMSProducer and JMSConsumer encapsulate the process of sending JMS messages to and from destinations (topics and queues), respectively. Instances of these objects can be obtained from the JMSContext object and they are important from an API ease-of-use perspective. Here is a ‘fluent’ API example
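For example (the bean and JNDI name here are hypothetical):

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Queue;

@Stateless
public class OrderSender {

    @Inject
    JMSContext context; // container-managed

    @Resource(lookup = "java:app/jms/OrderQueue") // hypothetical JNDI name
    Queue queue;

    public void send(String payload) {
        // fluent API: obtain a JMSProducer and send, all in one chain
        context.createProducer()
               .setProperty("priority", "high")
               .send(queue, payload);
    }
}
```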


XAJMSContext

The transactional equivalent of the vanilla JMSContext object. The implementation of this interface provides support for JTA within JMS

Ease of use

These annotations reduce reliance on manual/administrative configuration and drive automated deployment of Java EE applications. They are perfect examples of ‘configuration as code’ and invaluable in Cloud (PaaS) deployment scenarios

JMSConnectionFactoryDefinition and JMSConnectionFactoryDefinitions

Specify the JNDI names of one or more JMS ConnectionFactory objects. These resources will be automatically provisioned at deployment time

JMSDestinationDefinition and JMSDestinationDefinitions

Specify the JNDI names of one or more JMS Destinations (queues/topics). These resources will be automatically provisioned at deployment time

Exception Handling

JMS 1.1 and earlier versions did not have a notion of unchecked exceptions. From JMS 2.0, JMSRuntimeException has been introduced to act as the base/parent from which all other unchecked exceptions extend. Here is a list of all the new exceptions introduced in JMS 2.0 (these are mostly unchecked versions of their checked counterparts)

  • JMSRuntimeException
  • IllegalStateRuntimeException
  • InvalidClientIDRuntimeException
  • InvalidDestinationRuntimeException
  • InvalidSelectorRuntimeException
  • JMSSecurityRuntimeException
  • MessageFormatRuntimeException
  • MessageNotWriteableRuntimeException
  • ResourceAllocationRuntimeException
  • TransactionInProgressRuntimeException
  • TransactionRolledBackRuntimeException



@JMSPasswordCredential

Used to secure access to the JMS provider before attempting any operations using an injected JMSContext object


@JMSSessionMode

Specifies the session mode to be used during JMSContext injection
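Putting the injection-related annotations together, a sketch (the JNDI name and credentials are placeholders):

```java
import javax.inject.Inject;
import javax.jms.JMSConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.JMSPasswordCredential;
import javax.jms.JMSSessionMode;

public class ConfiguredContextHolder {

    @Inject
    @JMSConnectionFactory("java:app/jms/MyConnectionFactory") // hypothetical JNDI name
    @JMSPasswordCredential(userName = "jmsuser", password = "secret") // placeholder credentials
    @JMSSessionMode(JMSContext.AUTO_ACKNOWLEDGE)
    JMSContext context;
}
```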

That’s it for the new stuff in JMS 2.0 from an API perspective.

Cheers !

Posted in Java EE

Random JCache stuff: multiple Providers and JMX beans

JCache (JSR 107) is the Java standard for Caching… enough said. No more introductory stuff.

This is a quick fire post which talks about

  • Multiple JCache provider configurations, and
  • Feature: JCache stats via JMX MBeans

Managing multiple JCache providers

In case you are dealing with a single JCache provider, javax.cache.Caching.getCachingProvider() returns an instance of the one and only CachingProvider on your classpath.
If you have multiple JCache implementations on your application class path, an attempt at using the above snippet to bootstrap your JCache provider will greet you with the following exception (which is surprisingly friendly !)

javax.cache.CacheException: Multiple CachingProviders have been configured when only a single CachingProvider is expected

Overloading to the rescue!

There are overloaded versions of the getCachingProvider method, one of which allows you to specify the fully qualified class name of a specific JCache provider implementation. The exact class name would be provided as a part of your JCache vendor documentation e.g. com.tangosol.coherence.jcache.CoherenceBasedCachingProvider and com.hazelcast.cache.HazelcastCachingProvider are the provider classes for Oracle Coherence and Hazelcast respectively.

This would work just fine:

CachingProvider coherenceJCacheProvider = Caching.getCachingProvider("com.tangosol.coherence.jcache.CoherenceBasedCachingProvider");

You can also grab the same class name from the META-INF/services/javax.cache.spi.CachingProvider file inside the JCache provider JAR
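To see which providers are on your classpath, you can enumerate them via the standard API (a small sketch; the output depends on the provider JARs present at run time):

```java
import javax.cache.Caching;
import javax.cache.spi.CachingProvider;

public class ListProviders {

    public static void main(String[] args) {
        // enumerates every CachingProvider registered via the SPI mechanism
        for (CachingProvider provider : Caching.getCachingProviders()) {
            System.out.println(provider.getClass().getName());
        }
    }
}
```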

JCache Provider SPI Configuration


JMX statistics

JCache offers configuration and run time performance statistics for free! This is driven by provider specific implementations.

  • Configuration MBean: make sure you enable this by calling setManagementEnabled(true) on the JCache MutableConfiguration object
  • Statistics MBean: make sure you enable this by calling setStatisticsEnabled(true) on the JCache MutableConfiguration object

Example snippet

MutableConfiguration config = new MutableConfiguration().setManagementEnabled(true).setStatisticsEnabled(true);

Introspect the Mbeans from JConsole or any equivalent client

JCache Configuration stats


JCache runtime performance stats


Nice, huh ?

Cheers! :-)

Posted in Java, Java EE, Java SE