Handling timeouts in async requests in JAX-RS

JAX-RS 2.0 provides support for the asynchronous programming paradigm, on both the client and the server end. This post highlights the timeout feature for asynchronous REST requests on the server side using the JAX-RS (2.0) API

Without diving into too many details, here is a quick overview. In order to execute a method in an asynchronous fashion, you just

  • need to specify an instance of the AsyncResponse interface as one of the method parameters
  • annotate it using the @Suspended annotation (JAX-RS will inject an instance of AsyncResponse for you whenever it detects this annotation)
  • need to invoke the request in a different thread – the recommended way to do this in Java EE 7 is to use a Managed Executor Service
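Putting the three steps together – a minimal sketch, assuming a hypothetical /reports resource (the class and method names are illustrative, not from a real API):

```java
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("reports")
public class ReportsResource {

    // container-managed thread pool (the default managed executor in Java EE 7)
    @Resource
    private ManagedExecutorService executor;

    @GET
    public void report(@Suspended final AsyncResponse asyncResponse) {
        // the method returns immediately; the connection stays suspended
        // until resume() is invoked from the worker thread
        executor.execute(() -> asyncResponse.resume(buildReport()));
    }

    private String buildReport() {
        // placeholder for the actual long-running work
        return "report-data";
    }
}
```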

Behind the scenes

The underlying I/O connection between the server and the client remains open. But there are scenarios where you would not want the client to wait for a response forever. In such cases, you can set a timeout (threshold)

The default behavior in case of a timeout is an HTTP 503 response. If you want to override this behavior, you can implement a TimeoutHandler and register it with your AsyncResponse. If you are using Java 8, you need not bother with a separate implementation class or even an anonymous inner class – you can just provide a lambda expression, since TimeoutHandler is a functional interface with a single abstract method
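Setting the threshold and overriding the default 503 might look like this sketch (the timeout value and fallback message are arbitrary assumptions); since TimeoutHandler has a single abstract method, a Java 8 lambda suffices:

```java
@GET
public void report(@Suspended final AsyncResponse asyncResponse) {
    // if nothing calls resume() within 5 seconds, the request times out
    asyncResponse.setTimeout(5, TimeUnit.SECONDS);

    // override the default HTTP 503 with a custom response (Java 8 lambda)
    asyncResponse.setTimeoutHandler(ar ->
            ar.resume(Response.status(Response.Status.SERVICE_UNAVAILABLE)
                              .entity("Request timed out - please try again")
                              .build()));

    // ... hand off the actual work to another thread as usual
}
```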

Cheers!

Posted in Java, Java EE

Book review: WildFly Configuration, Deployment, and Administration

This is a review of the book WildFly Configuration, Deployment, and Administration. Thanks to Packtpub for providing a review copy !


This book has been written by Christopher Ritchie. It covers the breadth and depth of WildFly 8 application server and talks about topics such as

  • Installation and configuration of WildFly, its subsystems, Enterprise services and containers as well as Undertow (web container)
  • Domain configuration, application deployment, high availability and security management
  • OpenShift cloud (PaaS) platform

Let’s look at the book contents in greater detail. The book consists of 11 chapters dealing with various facets of Wildfly.

Chapter 1: Installing Wildfly

This lesson is a gentle introduction to Wildfly. It’s aimed at helping you get up and running with an environment so that you can work on the server. It discusses basic topics and covers

  • Details around installation of Java and Wildfly application server
  • Basic administrative operations such as startup, shutdown and reboot
  • Set up of Eclipse (for Java EE) and JBoss Tools (Eclipse plugin)
  • Wildfly file system (server installation directory) along with kernel and server modules

For me, the highlights of the chapter were the sections around the Wildfly kernel and modules. They provide a great launching pad for upcoming topics!

Chapter 2: Configuring the Core WildFly Subsystems

This chapter provides an overview of Wildfly configuration, which is composed of several subsystems configured as part of standalone.xml or domain.xml (the configuration files)

  • Server subsystems such as Extensions, Profiles, System properties, Deployments etc
  • Thread pool system – discusses configuration revolving around thread factory and thread pools such as bounded queue thread pool, scheduled thread pool, queueless thread pool etc
  • Delves into the server logging configuration and discusses the different handlers of the logging subsystem – console, async, size-rotating, custom handlers and more

Chapter 3: Configuring Enterprise Services

This lesson discusses Wildfly subsystems corresponding to Java EE services and their respective configurations

  • JDBC driver and Datasource configuration
  • EJB container configuration including session, stateless, MDB and EJB timers related details
  • Configuring the JMS/Messaging system
  • JTA (transaction manager) configuration
  • Tuning and configuring container resources related to Concurrency utilities API – Managed thread factory, Managed Executor service and managed scheduled executor service

Chapter 4: The Undertow Web server

As the name indicates, this chapter introduces the basic concepts of Undertow (new web container implementation in Wildfly) as well as its configuration related artifacts.

  • Quick tour of the Undertow component architecture
  • Configuration of server listeners and hosts along with servlet container components
  • Walkthrough of creation and deployment of a Maven based Java EE 7 project along with components like JSF, EJB, CDI, JPA

Chapter 5: Configuring a Wildfly Domain

Chapter 5 is all about the concept of domains in Wildfly and the nuances related to their configuration. It concludes with a section which helps the reader configure a custom domain and apply the concepts introduced in the lesson

  • An overview of domains and related configuration in Wildfly and administrative basics such as start/stop
  • Configuration related to core facets of a domain – host.xml, domain.xml, domain controller etc
  • Tuning domain specific JVM options
  • Configuration of a test (custom) domain

Chapter 6: Application Structure and Deployment

The chapter discusses important topics such as application package structure of different Java EE components and how to deploy various flavors of Java EE applications on Wildfly

  • Review of application packaging options – JAR, EAR, WAR
  • Deployment of applications on Wildfly – standalone and domain mode using both command line and Admin console
  • The chapter ends with a solid section covering Java EE classloading requirements and how Wildfly handles them

Chapter 7: Using the Management Interfaces

Again, as evident from the name of the chapter itself, it's not surprising that this lesson covers the management options available in Wildfly

  • Usage of the command line interface for execution of various tasks
  • The Wildfly Web Console and the options it offers

Chapters 8,9: Clustering and Load-balancing

Chapters 8 and 9 cover Wildfly clustering, HA and load-balancing capabilities in detail.

  • Basic setup and configuration of Wildfly clusters
  • Infinispan subsystem configuration along with details of Hibernate cache setup
  • Clustering of JMS system, EJBs and JPA entities
  • Installation and configuration of Apache web server and modules like mod_jk, mod_proxy and mod_cluster
  • Using CLI for mod_cluster and web context management as well as troubleshooting tips

Chapter 10: Securing Wildfly

This chapter covers

  • Introduction to the Wildfly security subsystem and details of commonly used Login Modules – Database and LDAP
  • Securing Java EE components – Web tier, EJBs and web services
  • Protecting the Web Admin console along with configurations related to transport layer security

Chapter 11: WildFly, OpenShift and Cloud Computing

This lesson is dedicated to Cloud Computing as a technology and how OpenShift fits into the picture

  • Overview of Cloud Computing – basics, advantages, types, options
  • Introduction to OpenShift and setup process of the client related tools
  • Walkthrough of cartridge installation and the process of building and deploying a sample application to OpenShift
  • Log management and basic administrative tasks using CLI tools- start, stop, restart etc
  • Using OpenShift with Eclipse and instructions on how to scale your applications

Standout features

Some of the points which in my opinion were great

  • We often spend years working with containers (application servers, web servers etc) without really knowing what’s going on beneath. There is a reason behind this – the core concepts (application kernels, modules, container configuration, threading setup etc) are pretty complex and often not that well explained. This book does a great job at covering these topics and presenting the core of Wildfly in a simple manner
  • Intuitive and clear diagrams (lots of them!)
  • Enjoyed the chapter on OpenShift – provides a great platform for exploring PaaS capabilities and getting up and running with OpenShift

Conclusion

This book is a must have for someone looking to gain expertise on WildFly 8. Since WildFly 8 is Java EE 7 compliant, it also means that the reader will benefit a lot by picking up a lot of Java EE related fundamentals as well as cutting edge features of Java EE 7. Grab your copy from Packtpub !

Posted in Books, Java, Java EE

Approval specific web services in Oracle IDM

This is a quick post about the web service endpoints which are leveraged by OIM and SOA in the context of an approval-related scenario – basic stuff, but it can be useful for beginners.

Oracle IDM integrates with and leverages the SOA suite for approval-related features (SOA is quite rich, to be honest, and is utilized as the backbone for the Web Services connector as well). SOA is not just there for namesake – the SOA suite does in fact rely on the concept of loosely coupled and independent services.

The approval engine makes use of three such web services

  • Request web service: this is deployed on the OIM server
  • Request Callback web service: this is deployed on the SOA server
  • Provisioning Callback web service: this too is deployed on OIM and is used in the context of approvals for Disconnected application instances

But how/when are these (SOA) services leveraged ?

Consider an example of a basic approval process

  • OIM approval engine calls a SOA composite (from within an approval policy) in response to evaluation of a self service request. The internals of this call are out of scope of this post (maybe some other time!)
  • Operations within the SOA composite are executed, and here is where the Request Callback web service comes into play. The SOA composite calls the Request Callback web service and apprises it of the result of the composite's execution (approval/rejection)
  • The Request Callback web service relays the result back to the approval/request engine within OIM, which then proceeds accordingly
Request Callback Web Service

So what is the Request web service all about ?

This is a general-purpose web service available OOTB in OIM (all you need to do is deploy it). It exposes information within OIM such as users, the catalog, organizations etc. You can leverage it within a SOA composite (just a few clicks!) to make your life easier (it's not mandatory, but you will likely need it more often than not for dynamic decision making)

Provisioning Callback web service

This is used by the OOTB SOA composite (for disconnected applications) to relay the approval decision back to the OIM provisioning engine so that it can mark the task as completed; the disconnected instance then shows up as Provisioned (this of course is the OOTB behavior, which can be customized if needed)

Provisioning Callback Web Service

Note: the snapshots presented above are nothing but the BPEL composites as seen in JDeveloper

Until next time…
Cheers !

Posted in Oracle Identity Governance, Oracle Identity Manager

Quick peek at JAX-RS request to method matching

In this post, let’s look at HTTP request to resource method matching in JAX-RS. It is one of the most fundamental features of JAX-RS. Developers using the JAX-RS API are generally not exposed to (and do not really need to know) the nitty-gritty of the matching process – rest assured that the JAX-RS runtime churns through its algorithms quietly in the background as our RESTful clients keep those HTTP requests coming!

Just in case the term request to resource method matching is new to you – it’s nothing but the process via which the JAX-RS provider dispatches an HTTP request to a particular method of one of your resource classes (decorated with @Path). Hats off to the JAX-RS spec document for explaining this in great detail (we’ll just cover the tip of the iceberg in this post though!)

Primary criteria

What are the factors taken into consideration during the request matching process ?

  • HTTP request URI
  • HTTP request method (GET, PUT, POST, DELETE etc)
  • Media type of the HTTP request
  • Media type of requested response

High level steps

A rough diagram should help. Before we look at that, here is the example scenario

  • Two resource classes – Books.java, Movies.java
  • Resource methods paths in Books.java – /books/, /books/{id} (URI path parameter), /books?{isbn} (URI query parameter)
  • HTTP request URI – /books?isbn=xyz
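To make the scenario concrete, here is a minimal sketch of what Books.java might look like – the method names are assumptions for illustration:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.QueryParam;

@Path("books")
public class Books {

    // matches GET /books – note that query parameters (?isbn=xyz)
    // play no part in URI matching; they are injected after the match
    @GET
    public String byIsbn(@QueryParam("isbn") String isbn) {
        return "book with ISBN " + isbn;
    }

    // matches GET /books/{id}
    @GET
    @Path("{id}")
    public String byId(@PathParam("id") String id) {
        return "book " + id;
    }
}
```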

Who will win ?

JAX-RS request to method matching process

Break down of what’s going on

  • Narrow down the possible matching candidates to a set of resource classes

This is done by matching the HTTP request URI with the value of the @Path annotation on the resource classes

  • From the set of resource classes in previous step, find a set of methods which are possible matching candidates (algorithm is applied to the filtered set of resource classes)
  • Boil down to the exact method which can serve the HTTP request

The HTTP request verb is compared against the HTTP method specific annotations (@GET, @POST etc), the request media type specified by the Content-Type header is compared against the media type specified in the @Consumes annotation and the response media type specified by the Accept header is compared against the media type specified in the @Produces annotation
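In code, the criteria from that final step map to annotations on the resource method – a hedged sketch (resource and method names are illustrative):

```java
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("books")
public class Books {

    @POST                                  // compared against the HTTP request verb
    @Path("{id}")
    @Consumes(MediaType.APPLICATION_JSON)  // compared against the Content-Type header
    @Produces(MediaType.APPLICATION_JSON)  // compared against the Accept header
    public Response update(@PathParam("id") String id, String body) {
        return Response.ok(body).build();
    }
}
```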

I would highly recommend looking at the Jersey server-side logic – the implementation classes in the org.glassfish.jersey.server.internal.routing package – to get a deeper understanding of how the matching is actually carried out.

Time to dig in….?

Happy hacking !

Posted in Java, Java EE

Simplifying JAX-RS caching with CDI

This post explains (via a simple example) how you can use CDI Producers to make it a little easier to leverage cache control semantics in your RESTful services

The Cache-Control header was added in HTTP 1.1 as a much needed improvement over the Expires header available in HTTP 1.0. RESTful web services can make use of this header in order to scale their applications and make them more efficient e.g. if you can cache a response of a previous request, then you obviously need not make the same request to the server again if you are certain of the fact that your cached data is not stale!

How does JAX-RS help ?

JAX-RS has had support for the Cache-Control header since its initial (1.0) version. The CacheControl class represents the real world Cache-Control HTTP header and provides the ability to configure the header via simple setter methods. More on the CacheControl class in the JAX-RS 2.0 javadocs

 


So how do I use the CacheControl class?

Just return a Response object with an instance of the CacheControl class attached to it.

Although this is relatively convenient for a single method, repeatedly creating and returning CacheControl objects can get irritating across multiple methods
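For instance, a single method using the manual approach might look like this sketch (the resource path and max-age value are arbitrary assumptions):

```java
import java.util.Date;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.core.Response;

@Path("time")
public class TimeResource {

    @GET
    public Response now() {
        CacheControl cc = new CacheControl();
        cc.setMaxAge(20);     // response may be cached for 20 seconds
        cc.setPrivate(true);  // only in the client's private cache

        return Response.ok(new Date().toString())
                       .cacheControl(cc)  // emits the Cache-Control header
                       .build();
    }
}
```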

CDI Producers to the rescue!

CDI Producers can help inject instances of classes which are not technically beans (as per the strict definition), or of classes over which you do not have control as far as decorating them with scopes and qualifiers is concerned.

The idea is to

  • Have a custom annotation (@CacheControlConfig) to define default values for Cache-Control header and allow for flexibility in case you want to override it

  • Just use a CDI Producer to create an instance of the CacheControl class by using the InjectionPoint object (injected with pleasure by CDI !) depending upon the annotation parameters

  • Just inject the CacheControl instance in your REST resource class and use it in your methods
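Put together, the producer might look like the sketch below. The member names of @CacheControlConfig are assumptions based on the description above, not a fixed API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.enterprise.inject.Produces;
import javax.enterprise.inject.spi.InjectionPoint;
import javax.ws.rs.core.CacheControl;

public class CacheControlFactory {

    @Produces
    public CacheControl get(InjectionPoint ip) {
        // read the (hypothetical) annotation off the injection point
        CacheControlConfig config =
                ip.getAnnotated().getAnnotation(CacheControlConfig.class);
        CacheControl cc = new CacheControl();
        if (config != null) {
            cc.setMaxAge(config.maxAge());
            cc.setPrivate(config.isPrivate());
        }
        return cc;
    }

    // hypothetical annotation holding the Cache-Control defaults
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.FIELD, ElementType.PARAMETER})
    public @interface CacheControlConfig {
        int maxAge() default 0;
        boolean isPrivate() default true;
    }
}
```

The resource class then simply declares `@Inject @CacheControlConfig(maxAge = 20) CacheControl cc;` and uses `cc` when building its responses.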

Additional thoughts

  • In this case, the scope of the produced CacheControl instance is @Dependent i.e. it will live and die with the class which injected it. The JAX-RS resource itself is request scoped (by default) since the JAX-RS container creates a new instance for each client request; hence a new instance of the injected CacheControl will be created along with each HTTP request
  • You can also introduce CDI qualifiers to further narrow the scopes and account for corner cases
  • You might think that the same can be achieved using a JAX-RS filter. That is correct. But you would need to set the Cache-Control header manually (within a mutable MultivaluedMap) and the logic will not be flexible enough to account for different Cache-Control configurations for different scenarios

Results of the experiment

Use NetBeans IDE to play with this example (recommended)


  • A GET Request to the same URL will not result in an invocation of your server side REST service. The browser will return the cached value.


Although the code is simple, if you are feeling lazy, you can grab the (maven) project from here and play around

Have fun!

Posted in Java, Java EE

Valid CDI scopes for Session (EJB) beans

CDI enriches the EJB specification (Session beans to be specific) by providing contextual life cycle management. Session beans are not ‘contextual’ instances in general.

If you are comfortable with CDI in general, the idea of ‘being contextual’ should be pretty clear.

Here are the valid permutations and combinations of EJB session beans and corresponding CDI scopes (Application, Session or Request)

  • Stateless beans can only belong to the @Dependent scope i.e. you can either use the @Dependent pseudo-scope explicitly or just go with the @Stateless annotation, in which case the CDI container will use @Dependent by default (convention).

The CDI container will not let you get away with any other annotation – the end result would be a deployment failure

  • With Singleton beans, @ApplicationScoped is the only valid CDI scope (@Dependent is the default in case you do not use any other explicit CDI scope)

Again, any other scope annotation and the CDI god will crush your WAR/EAR !

  • Stateful EJBs can have any scope – no restrictions whatsoever! (although I do not see too much value in using @ApplicationScoped for Stateful beans – but that’s just me! Feel free to chime in in case you think otherwise)
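The valid combinations above, as a sketch (the bean names are made up):

```java
import javax.ejb.Singleton;
import javax.ejb.Stateful;
import javax.ejb.Stateless;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.Dependent;
import javax.enterprise.context.SessionScoped;

@Stateless
@Dependent          // optional – the only scope a stateless bean may declare
class AuditService { }

@Singleton
@ApplicationScoped  // the only non-default scope allowed here (@Dependent is the default)
class ConfigHolder { }

@Stateful
@SessionScoped      // stateful beans may use any scope (passivating scopes need Serializable)
class ShoppingCart implements java.io.Serializable { }
```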

Stay safe !
Cheers ;-)

Posted in Java, Java EE

14 February is drawing closer… again !!!

Once in a blue moon, I choose to post non-technical ramblings on this blog

14 February is dawning upon us… yet again! And this time, I (like many of you guys) am spoilt for choices. It’s Friday after all! So lighten up and help me figure this one out ;-)

Posted in Uncategorized

Integrating CDI and WebSockets

I thought of experimenting with a simple Java EE 7 prototype application involving JAX-RS (REST), WebSockets and CDI.

Note: Don’t want this to be a spoiler – but this post mainly talks about an issue I faced while trying to use WebSockets and REST with CDI as the ‘glue’ (in a Java EE app). The integration did not materialize, but a few lessons were learnt nonetheless :-)

The idea was to use a REST end point as a ‘feed’ for a web socket end point which would in turn ‘push’ data to all connected clients

  • JAX-RS end point which receives data (possibly in real time) from other sources as an input to the web socket end point
  • Use CDI Events as the glue between the JAX-RS and WebSocket end points and ‘fire’ the payload

  • Use a CDI Observer method in the WebSocket endpoint implementation to push data to connected clients
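Here is a sketch of what I was attempting – the endpoint paths and the String payload type are illustrative assumptions:

```java
import javax.enterprise.event.Event;
import javax.enterprise.event.Observes;
import javax.inject.Inject;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import javax.ws.rs.POST;
import javax.ws.rs.Path;

// JAX-RS 'feed' – fires a CDI event for every POSTed payload
@Path("feed")
class FeedResource {

    @Inject
    private Event<String> payloadEvent;

    @POST
    public void push(String payload) {
        payloadEvent.fire(payload);
    }
}

// WebSocket endpoint – observes the event and pushes to its peer
@ServerEndpoint("/live")
class LiveEndpoint {

    private Session session;  // cached per connected peer

    @OnOpen
    public void onOpen(Session session) {
        this.session = session;
    }

    public void onPayload(@Observes String payload) {
        // as the post goes on to explain, 'session' turns out to be null
        // here, because CDI delivers the event to a brand new instance
        session.getAsyncRemote().sendText(payload);
    }
}
```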

Of course, finer details like performance, async communication etc have not been considered at this point in time. This is more of an experiment.

But is this even possible ?

Here are the steps which I executed


  • Fired a HTTP POST request on the REST end point using Postman


 

Boom! A NullPointerException in the Observer method – I waited for a few seconds and then reality hit me!


 

Root cause (from what I understand)

  • Behavior of WebSocket end points

WebSocket end points are similar to JAX-RS resource classes in the sense that there is one instance of a web socket endpoint class per connected client (at least by default). This is clearly mentioned in the WebSocket specification. As soon as a client (peer) connects, a unique instance is created, and one can safely cache the web socket Session object (the representation of the peer) as an instance variable. IMO, this is a simple and clean programming model


  • But the CDI container had other plans !

As soon as the REST end point fires a CDI event (in response to a POST request), the CDI container creates a different instance of the WebSocket endpoint (the CDI Observer in this case). Why? Because CDI beans are contextual in nature. The application does not control the instances of CDI beans – it just uses them (via @Inject). It’s up to the container to create and destroy bean instances and ensure that an appropriate instance is available to beans executing in the same context. How does the container figure out the context? Via scopes – Application, Session, Request etc…..

(again, clearly mentioned in the CDI specification)


So, the gist of the matter is that there is NO instance of the WebSocket endpoint in the current context – hence a new instance is created by CDI in order to deliver the message. This of course means that the instance variable would point to null – hence the NPE (duh!)

So the question is . . .

Which CDI scope is to be used for a WebSocket end point ??? I tried @ApplicationScoped, @SessionScoped and @RequestScoped without much luck – still a new instance and a NPE

Any other options ??

  • Defining a Set of Sessions as a static variable will do the trick

But that IMO is just a hack, and it’s not feasible in case one needs to handle client-specific state (which can only be held in instance variables) in the observer method – that state is bound to remain uninitialized

  • Server-Sent Events? But at the end of the day, SSE != WebSocket. In case the use case demands server-side push ‘only’, one can opt for it. SSE is not a Java EE standard yet – Java EE 8 might make this possible

Solution ?

I am not an expert – but I guess it’s up to the WebSocket spec to provide more clarity on how to leverage it with CDI. Given that CDI is an indispensable part of the Java EE spec, it’s extremely important that it integrates seamlessly with other specifications – especially HTML5-centric specs such as JAX-RS, WebSocket etc

This post by Bruno Borges links to similar issues related to JMS, CDI and WebSocket and how they integrate with each other.

Did I miss something obvious? Do you have any inputs/solutions? Please feel free to chime in ! :-)

The sample code is available on GitHub (in case you want to take a look). I tried this on GlassFish 4.1 and Wildfly 8.2.0

That’s all for now I guess…. :-)

Cheers!

Posted in Java, Java EE

Sneak peek into the JCache API (JSR 107)

This post covers the JCache API at a high level and provides a teaser – just enough for you to (hopefully) start itching about it ;-)

In this post ….

  • JCache overview
  • JCache API, implementations
  • Supported (Java) platforms for JCache API
  • Quick look at Oracle Coherence
  • Fun stuff – Project Headlands (RESTified JCache by Adam Bien) , JCache related talks at Java One 2014, links to resources for learning more about JCache

What is JCache?

JCache (JSR 107) is a standard caching API for Java. It provides an API for applications to create and work with an in-memory cache of objects. The benefits are obvious – one does not need to concentrate on the finer details of implementing caching, and time is better spent on the core business logic of the application.

JCache components

The specification itself is very compact and surprisingly intuitive. The API defines high level components (interfaces) some of which are listed below

  • Caching Provider – used to control Caching Managers; an application can work with several of them
  • Cache Manager – deals with create, read, destroy operations on a Cache
  • Cache – stores entries (the actual data) and exposes CRUD interfaces to deal with the entries
  • Entry – abstraction on top of a key-value pair akin to a java.util.Map
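The hierarchy maps directly onto code. A minimal sketch using the standard javax.cache API (the cache name and types are arbitrary):

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class JCacheDemo {
    public static void main(String[] args) {
        // CachingProvider resolved from the classpath
        CachingProvider provider = Caching.getCachingProvider();
        CacheManager manager = provider.getCacheManager();

        // Cache created via the manager, with a simple typed configuration
        MutableConfiguration<String, String> config =
                new MutableConfiguration<String, String>()
                        .setTypes(String.class, String.class);
        Cache<String, String> users = manager.createCache("users", config);

        // Entries – key-value pairs, akin to a java.util.Map
        users.put("jdoe", "John Doe");
        System.out.println(users.get("jdoe"));
        users.remove("jdoe");
    }
}
```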
Hierarchy of JCache API components

JCache Implementations

JCache defines the interfaces, which of course are implemented by different vendors a.k.a. Providers.

From the application point of view, all that’s required is the implementation to be present in the classpath. The API also provides a way to further fine tune the properties specific to your provider via standard mechanisms.

You should be able to track the list of JCache reference implementations from the JCP website link

JCache provider detection

  • JCache provider detection happens automatically when you only have a single JCache provider on the class path
  • When multiple providers are present, you can also select one explicitly
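For example, with more than one provider on the classpath, the API lets you pick one by its fully qualified class name (the provider class name below is a made-up example):

```java
import javax.cache.Caching;
import javax.cache.spi.CachingProvider;

public class ProviderSelection {
    public static void main(String[] args) {
        // explicit selection by provider class name (illustrative name)
        CachingProvider chosen =
                Caching.getCachingProvider("com.example.cache.MyCachingProvider");

        // or enumerate every provider discovered on the classpath
        for (CachingProvider p : Caching.getCachingProviders()) {
            System.out.println(p.getClass().getName());
        }
    }
}
```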

Java Platform support

  • Compliant with Java SE 6 and above
  • Does not define any details in terms of Java EE integration. This does not mean that it cannot be used in a Java EE environment – it’s just not standardized yet.
  • Could not be plugged into Java EE 7 as a tried and tested standard
  • Candidate for Java EE 8

Project Headlands: Java EE and JCache in tandem

  • By none other than Adam Bien himself !
  • Java EE 7, Java SE 8 and JCache in action
  • Exposes the JCache API via JAX-RS (REST)
  • Uses Hazelcast as the JCache provider
  • Highly recommended !

Oracle Coherence

This post deals with high-level stuff w.r.t. JCache in general. However, a few lines about Oracle Coherence will help put things in perspective


  • Oracle Coherence is a part of Oracle’s Cloud Application Foundation stack
  • It is primarily an in-memory data grid solution
  • Geared towards making applications more scalable in general
  • What’s important to know is that from version 12.1.3 onwards, Oracle Coherence includes a reference implementation for JCache (more in the next section)

JCache support in Oracle Coherence

  • Support for JCache implies that applications can now use a standard API to access the capabilities of Oracle Coherence
  • This is made possible by Coherence simply providing an abstraction over its existing interfaces (NamedCache etc). The application deals with a standard interface (the JCache API), and calls to that API are delegated to the existing Coherence core library implementation
  • Support for JCache API also means that one does not need to use Coherence specific APIs in the application resulting in vendor neutral code which equals portability
    How ironic – supporting a standard API and always keeping your competitors in the hunt ;-) But hey! That’s what healthy competition and quality software is all about !
  • Talking of healthy competition – Oracle Coherence does support a host of other features in addition to the standard JCache related capabilities.
  • The Oracle Coherence distribution contains all the libraries for working with the JCache implementation


  • The service definition file in the coherence-jcache.jar qualifies it as a valid JCache provider implementation


 

Curious about Oracle Coherence ?

JCache at Java One 2014

A couple of great talks revolved around JCache at Java One 2014

Hope this was fun :-)

Cheers !

Posted in Java, Java EE, Java SE

Stress testing the OIM web (UI) layer

The default configuration in Oracle IDM reserves 20 threads dedicated to serving front-end (UI) requests. This basically means that the application server has a pool of 20 threads which it can utilize to serve users who are accessing OIM via the web console (/identity or /sysadmin).

In case of Weblogic, this is how it is configured


What typically happens is

  • User accesses the OIM URL e.g. http://oimhost:14000/identity
  • Browser sends a simple (HTTP) GET request with some added HTTP request headers and other info of course
  • The application server (e.g. Weblogic) picks up a thread from the pool and uses it to process the request
  • OIM responds back and the browser renders the login page and the user is delighted .. well most of the time! ;-)
  • After the request is served, the thread on the application server is returned to the pool (remember that pool of 20 threads I just mentioned) so that it can be reused by another request

I just wanted to play around with this and executed some simple tests via JMeter

Note: This is merely a front end/UI stress testing – not related to business logic

Steps

Well, there are a few configurations you need to set up in JMeter – they are pretty much standard and have nothing to do with OIM specifically

  • Set up a Thread Group (represents users)
  • Configure HTTP requests e.g. configure the OIM URL, context path, port (again – pretty basic)
  • Configure Result viewer – tree or table mode. This is for real time tracking of results

The JMeter configuration (.jmx) file is available for your reference – just import it in JMeter and you should be able to figure out the exact configuration parameters and tweak them if interested


Testing parameters

I tried testing with various permutations and combinations by changing the Number of Threads and Ramp-Up Period attributes in the Thread Group setup within JMeter

Number of Threads – equivalent to the number of users you want to simulate
Ramp-Up Period (seconds) – equivalent to the time period/range during which you want JMeter to trigger all the requests

e.g. Number of Threads = 100 and Ramp-Up Period = 20 seconds basically means simulating a scenario where 100 users are accessing your application (OIM in this case) over a period of 20 seconds.

  • Attempt 1: Number of Threads = 100 and Ramp-Up Period = 20 seconds
  • Attempt 2: Number of Threads = 200 and Ramp-Up Period = 20 seconds
  • Attempt 3: Number of Threads = 500 and Ramp-Up Period = 20 seconds
  • Attempt 4: Number of Threads = 1000 and Ramp-Up Period = 20 seconds
  • Attempt 5: Number of Threads = 2000 and Ramp-Up Period = 20 seconds

 


What I was expecting

To be honest, I expected some delay/latency when 2000 threads (potential users) were fired in a space of 20 seconds. Looks like that did not happen.

Actual Result

All in all, the response was quite healthy.

  • Green results i.e. HTTP 200 (OK) response
  • Low latency and load times


To be noted

  • This was executed in a personal test VM (running OIM 11g R2 PS2) and hence there was not much load on the system
  • Can’t expect much latency when the server I am connecting to is just a guest VM ;-)

Still, this was fun, and it would be interesting to execute the same test on a server which has processes running in the back end e.g. the scheduler, some access request processes etc.

If the out of the box configuration of 20 threads does not work for your environment, you can change it using the Weblogic Admin Console – rinse and repeat :-)

Until then.. Cheers !

Posted in Oracle Identity Governance, Oracle Identity Manager