Oracle IDM 11g R2 PS3: What’s new ??

Self Service and End User empowerment

Simplified Self Service UI

Better look and feel. More intuitive. Modern skin/widgets


Guided Access Catalog

Easier for end users. Displays steps to be performed much like e-commerce web sites

 


Front end

New skin
Changes to how UI customizations are to be made
A few Design Console operations have been deprecated


Core

Concept of a Home Organization

A user will be automatically added to an organization based on the Home Organization Policy, which is simply a set of rules driven by user attributes. OIM ships with default (OOTB) rules, and you can build custom rules as well

 


Introduction of custom Admin Roles (in a simplified avatar)

These are more dynamic in nature than the static Admin Roles of previous versions. The legacy Admin Roles are still available for backward compatibility but are NOT recommended

 


No dependency on APM and OES

You need not deploy the OES component to tweak fine-grained authorization

Temporal Grants for New and Existing Access

Users can specify start and end dates while requesting access. These can be overridden by authoritative users, which helps manage access more seamlessly

Self Service Capability Policy

Rules within this policy determine what operations a user can perform on his/her own profile. This is also driven by user attributes

 


Role Lifecycle Management

When working with roles (adding members etc.), a user can see related statistics presented graphically, which can help him/her make a more informed decision about the action being executed on the role

 


OIM Role categories are NOT recommended going forward; use of the Catalog category attribute is advised instead.
Enhanced Password Policy Management

Enables common password policies for OIM and OAM, and offers more flexibility in defining challenge questions (system or user defined)

 


SoD replaced by Identity Audit capability

Needs to be explicitly enabled


Process forms are no longer required and hence are not supported
The Form Upgrade and FVC utilities have been dropped
Attestation is no longer supported

Reporting and Auditing

Lightweight Audit Engine

A brand new audit engine has been introduced in PS3. It is synchronous in nature (unlike the previous engine, which depends on JMS) and pushes data into a single AUDIT_EVENT table. The new audit engine also supports new entities

BI Publisher exposed via OIM

You can run some of the Identity Audit reports from the OIM console itself


Approval layer

Introduction of Workflow Policies

Workflow Policies have replaced Approval Policies in PS3. However, in upgraded scenarios, Approval policies will continue to work

 


Running OIM without workflows (disabled state)

A system property can be toggled to disable SOA altogether. The capability can be re-enabled, but the caveat is that this is NOT recommended/supported

 


Request Catalog

  • The out of the box search form in Catalog can be replaced by a custom form (taskflow) and configured with the help of a system property called Catalog Advanced Search Taskflow


  • It's possible to add more attributes to the catalog search form (via UI customization, of course)

Displaying additional information for catalog entities

Displaying additional information for App Instance, Role and Entitlement (post checkout) can be driven with the help of customized taskflows, which can be configured using the system properties Catalog Additional Application Details Task Flow, Catalog Additional Role Details Task Flow and Catalog Additional Entitlement Details Task Flow respectively.

request-catalog

 

Integration layer

REST services based on the SCIM (System for Cross-domain Identity Management) protocol

Finally! A standards-based REST interface on top of OIM. It supports limited operations as of now, but it's a good start
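As a quick illustration, here is a minimal sketch of searching for a user over the SCIM endpoint using plain Java. The base path (/idaas/im/scim/v1), host, port and credentials shown here are assumptions for illustration only – check your deployment for the actual values; the /Users resource and the filter syntax come from the SCIM standard itself.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Base64;

public class ScimUserSearch {

    public static void main(String[] args) throws Exception {
        // Standard SCIM filter expression, URL encoded for the query string
        String filter = URLEncoder.encode("userName eq \"jdoe\"", "UTF-8");

        // Host, port and base path are deployment specific (assumed values here)
        URL url = new URL("http://oimhost:14000/idaas/im/scim/v1/Users?filter=" + filter);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/scim+json");

        // Basic authentication with an OIM account (placeholder credentials)
        String credentials = Base64.getEncoder()
                .encodeToString("xelsysadm:password".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + credentials);

        // Dump the raw SCIM JSON response
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            reader.lines().forEach(System.out::println);
        }
    }
}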

 


Remote Manager usage is NOT recommended anymore and has been removed from a documentation standpoint (it might be deprecated in future releases)

SPML support dropped
Callback Service support dropped
Simplified SSO Integration (without OAM)

Use basic HTTP (web) servers and integrate SSO with OIM on the basis of HTTP headers
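To make the pattern concrete (this is not OIM's internal code), here is a rough sketch of the header-assertion idea such an integration relies on: the front-end web server authenticates the user and forwards an identity header, which the downstream application trusts. The filter class and the header name OIM_REMOTE_USER are placeholders I made up – the actual header is whatever your web server/agent is configured to inject.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Illustrative only: trusting an identity header injected by the front-end web server
public class HeaderSsoFilter implements Filter {

    // Placeholder name; use whatever header your web server asserts
    private static final String IDENTITY_HEADER = "OIM_REMOTE_USER";

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        String assertedUser = httpRequest.getHeader(IDENTITY_HEADER);
        if (assertedUser == null || assertedUser.isEmpty()) {
            // No asserted identity: a real deployment would redirect to the authenticating web server
            throw new ServletException("Missing SSO identity header");
        }
        // The asserted login would be used to establish the application session here
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}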

Diagnostics

Orchestration Engine MBean

This is a nice addition that helps probe Orchestration kernel (engine) related information (it's actually a standard JMX MBean implementation). It's accessible via Enterprise Manager and exposes operations like pushing orchestration info to a file, finding event handlers, finding events per process etc. It also aids in debugging orchestration process failures
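Since it is a standard MBean, you can also get at it programmatically over JMX. The sketch below is a generic JMX query against the WebLogic runtime MBean server – the t3 URL, credentials and the oracle.iam:* ObjectName pattern are assumptions (browse the MBean tree in Enterprise Manager or JConsole for the exact name), and the WebLogic client library must be on the classpath for the t3 protocol.

import java.util.Hashtable;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class OrchestrationMBeanProbe {

    public static void main(String[] args) throws Exception {
        // t3 connection to the OIM managed server's runtime MBean server (values are examples)
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:t3://oimhost:14000/jndi/weblogic.management.mbeanservers.runtime");

        Hashtable<String, String> env = new Hashtable<>();
        env.put(javax.naming.Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(javax.naming.Context.SECURITY_CREDENTIALS, "password");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            // Query pattern is an assumption; narrow it down once you know the exact ObjectName
            Set<ObjectName> names = connection.queryNames(new ObjectName("oracle.iam:*"), null);
            names.forEach(System.out::println);
        } finally {
            connector.close();
        }
    }
}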


Enjoy !!!

Posted in Oracle Identity Governance, Oracle Identity Manager | Tagged , , , , | Leave a comment

Taking Ozark for a test drive…

Hey… my first screencast.. aka video blog ;-) Although it's 20 minutes long, I am pretty sure that actually writing a blog post would have taken longer. Feels good… pretty efficient, huh!

So what’s this about ?

Trying to experiment with Ozark, the Reference Implementation for MVC 1.0, which is a candidate for inclusion in the upcoming Java EE 8 platform release.

The code is available on my github account (just in case!)

Enjoy!

Posted in Java, Java EE | Tagged , , , | 1 Comment

True IDaaS . . . .

Pondering over the current state of IDaaS……

So what’s IDaaS to begin with ??

This post is not dedicated to defining IDaaS in depth. There is loads of other material you can read if you are just interested in general theory. But I'll cover it briefly, just to set the tone…. I am not a sales rep, so let me keep this short and simple (just like an ideal program!) – IDaaS (Identity as a Service) is just a way to provide Identity Management solutions via a SaaS (Software as a Service) model. All your on-premise IDAM setup will be hosted in the cloud – that's basically it. Of course it's not black and white – there are pros and cons – but hey, I am not Gartner or Forrester, so I am not going to deep dive into that stuff

What’s the perception of IDaaS today ?

For example, if I take an IDAM product from vendor X and deploy it with an IaaS (Infrastructure as a Service) provider Y (or even in my own private cloud – it doesn't matter), does that make it an IDaaS solution?

NO. I don't think so. But this is what I have been hearing… this is how people imagine IDaaS – an IDAM product hosted in the cloud. From a customer perspective, it might be a big relief. Agreed! No more infrastructure management cost and overhead, plus other good stuff like better pricing models etc….

But how does all this solve the problem from a technical standpoint??

Behind the scenes (from a technical implementation perspective) one still needs to go through the same set of processes – install, configure, deploy, provide HA, upgrade, migrate …. and the list just goes on …..
So, from what I see, everything is still the same; it's just executed on some remote machine on the internet rather than on the customer's premises. In fact there are other things like exposing the customer's internal systems to the cloud – of course they are going to be skeptical about this! One would need to resort to alternate solutions (none of which really makes sense to me.. I am not a networking wizard). We are introducing another variable. I was better off on-premise!

So, is that really what we are looking for? Is that how we leverage the cloud as we know it today, in 2015?

Here is what I think…. [ and of course I am not perfect ;-) ]

True IDaaS cannot be realized without PaaS. This also implies that the IDAM product should be cloud and PaaS compatible (to an extent at least)

An IDAM SaaS product on top of PaaS = IDaaS. Yes. Think about things from a hardcore technical point of view – ease of deployment and installation, flexible configuration options, automatic scaling based on load (elastic), monitoring, developer friendly cloud tools, integration of continuous deployment and build tools and much more….

Let me explain it with an example. Think of Oracle Identity Manager implementation in a true IDaaS format.

  • Automated install: Ideally, I should be able to provision a cloud ready instance of Oracle IDM by using a simple GUI or a remote CLI rather than downloading 10 installer packages and hopping from machine to machine (on the cloud !!!)
  • Highly Available.. out of the box: I should be able to choose how many instances need to be provisioned based on HA requirements, rather than going through a 2-month process for scaling out an instance.
  • On-demand scalability: I should be able to define policies based on which there should be automatic provisioning of additional instances based on load (think about OAM in cloud catering to millions of authentication requests in a day). I should be able to scale down on demand as well based on usage spikes (cost savings for the customer)
  • Simpler upgrades: Upgrading to the latest version should be simpler than what it is now. IMO upgrades are generally quite tricky but there should be some components which offer one click (ok maybe 3-5 clicks) upgrade
  • Monitoring: I should be able to monitor my IDAM components like Application Server, Database, LDAP directories etc
  • Developer friendly: My development team should be able to leverage cloud-ready dev tools (like IDEs etc.) as well as automatic build tools (avoiding manual intervention in deployment)

I am sure there are things I am missing.. but I hope you get the point.

PaaS is not a magic pill

Let's not fool ourselves into thinking that way. Deep down, a lot depends on the core technology stack on top of which the IDAM product is implemented, since that is what the PaaS offering will be closely tied to! For Oracle IDM, it's WebLogic (as well as WebSphere). A lot depends on this container (or application server, as we commonly call it). Hardcore SaaS products need to be multi-tenant – without a doubt. The same applies to IDaaS products…. Delving into how WebLogic or Java EE supports the cloud is another book in itself. So I'll stop here.

End of rant, and I live happily ever after :-)

Until next time….
Stay curious !

Posted in Cloud, Oracle Identity Governance, Oracle Identity Manager | Tagged , , , , | Leave a comment

Java EE in embedded and micro avatars

I was reading up on Payara in general and was pleased to see them release a Micro version – which essentially enables you to launch Payara from the command line [ java -jar payara-micro.jar ] without really setting up the entire application server. Basically, the payara-micro.jar IS your application server – it's just that it can now fit in your pocket! More details on the Payara blog

[Image: Payara Micro CLI options]

Payara also offers embedded versions, for both the full Java EE 7 profile and the Web Profile.

I was wondering about …..

The differences between the Payara Micro and Payara Embedded offerings?

Payara Micro can be run in both embedded mode as well as CLI [ fat JAR from the command line ] mode, but the embedded versions need to be invoked from within another Java class.
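For instance, here is a minimal sketch of bootstrapping an embedded instance from another Java class using the GlassFish embedded API (which Payara Embedded is built on). The HTTP port and WAR path are placeholders, and you obviously need the embedded JAR on the classpath.

import java.io.File;
import org.glassfish.embeddable.Deployer;
import org.glassfish.embeddable.GlassFish;
import org.glassfish.embeddable.GlassFishProperties;
import org.glassfish.embeddable.GlassFishRuntime;

public class EmbeddedBootstrap {

    public static void main(String[] args) throws Exception {
        // Configure the embedded runtime; the HTTP port is just an example
        GlassFishProperties properties = new GlassFishProperties();
        properties.setPort("http-listener", 8080);

        GlassFish glassfish = GlassFishRuntime.bootstrap().newGlassFish(properties);
        glassfish.start();

        // Deploy an existing WAR (path is a placeholder)
        Deployer deployer = glassfish.getDeployer();
        deployer.deploy(new File("/home/abhi/MyJavaEE7App/dist/MyJavaEE7App.war"));

        System.out.println("Deployed. Press ENTER to stop...");
        System.in.read();
        glassfish.dispose();
    }
}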


I think the micro version is cool, but the embedded version also allows for some flexibility in terms of being able to bootstrap some configuration (and several other use cases, maybe?). Payara Micro is supposed to implement the Java EE Web Profile with some additional functionality on top of it (as per the blog post). From what I observed, it offers the Concurrency Utilities and the Java EE Batch API as well (these are not required as part of the Java EE Web Profile spec). Are there other differences? When should I use Micro over the embedded Java EE Web Profile version? Not quite sure


What positives can one extract out of these embedded and micro Java EE avatars ?

I know microservices are all the rage today, but I am not knowledgeable enough to comment on them. Think of it this way – you have been itching to use Java EE 7 and found it to be a perfect fit for that project at your workplace. But as usual, the sticking point is getting hold of a compliant application server (the runtime/container) – you might not be allowed to use that fancy piece of technology called Java EE 7 yet! Your ideas crash [true story ;-)].

I think that's about to change now. If you want to build that app where you can leverage all the Java EE goodies, from EJB, CDI and REST to fancy stuff like WebSockets and SSE, well, just keep calm and build your WAR! Don't worry about the container – you need not procure an application server and convince the entire management/architecture team etc. Now, the compliant runtime/container is just a simple JAR. It's more about creating the functionality and making it available for consumers rather than debating about app servers, compatibility, certification matrices etc.

Cloud…? what about cloud !

Not that tough! Imagine this – if you wanted to deploy a Java EE 7 application in the cloud, you would need a PaaS provider with support for a Java EE 7 container (e.g. OpenShift). That's fine. But do you realize that with a JAR as your application server, you do not really need to worry about a PaaS? Actually, all you need is IaaS (the infrastructure); e.g. a Linux box with adequate RAM, disk etc. should be enough to install Java and fire java -jar myappserver.jar … right?

Testing and rapid prototyping

This one is a no-brainer. Just like embedded EJB containers made it simpler to test EJBs in isolation, having a pocket-sized app server JAR should ease testing as well as rapid development/prototyping. Open up an IDE (preferably NetBeans), write your business logic, build your WAR and you are ready to rock

java -jar payara-micro.jar --deploy /home/abhi/Netbeans/MyJavaEE7App/dist/MyJavaEE7App.war

Cons ?

I am probably being optimistic right now. There will definitely be issues, caveats and cons when it comes to the embedded/micro Java EE approach [I am sure the experts are at it right now!], but hey, I guess I am too excited to think about them. I will discover some when I play around a little more :-)

Not to forget Wildfly Swarm !

Wildfly Swarm is another step towards harnessing the power of Wildfly application server from the comfort of a fat JAR. You can learn more from this blog post by Arun Gupta

Until next time…
Cheers!

Posted in Cloud, Java, Java EE | Tagged , , , , , , , | 2 Comments

I know you love purging the OIM cache ;-)

I know you love PurgeCache.sh – even if you don’t, aren’t you curious about what it does ?

Oracle IDM uses OSCache from the OpenSymphony project for in-memory caching of objects, in order to avoid repetitive calls to the database and improve performance (of course!). Even if you are not familiar with caching in general, I am pretty sure that as someone working on OIM, you would have executed PurgeCache.sh at some point in your career – so there it is! If you have ever purged OIM's cache, you have indirectly used OSCache.. yay!

How is it implemented ?

  • OIM uses a facade/wrapper over the core OSCache caching APIs
  • XLCacheProvider is essentially the generic interface, which is implemented by a class called OSCacheProvider (this is OIM specific). You should be able to see an entry for this class in oim-config.xml (caching categories config section). Its fully qualified name is oracle.iam.platform.utils.cache.OSCacheProvider
  • This class implements the contract put forth in the XLCacheProvider interface and leverages internal OSCache APIs
  • It caters to operations like adding to a cache, removing an entry from a cache, purging the entire cache etc. It also supports the notion of cache categories or groups. Sounds familiar? The category is something you provide as an input to the PurgeCache script, e.g. MetaData, User, Catalog, LookupValues etc. Please note that these are constant values and need to be provided as-is

What categories of objects does OIM cache ?

Well, there is lots – from adapters to application instance details, resource bundles etc. Actually, the list is pretty long ;-)

How does OIM use this Cache ?

Pretty straightforward, actually. The caching logic is implemented within the core server business logic itself, and items from different categories (mentioned above) are explicitly pushed into the cache by calling the high-level APIs, e.g. lookup-related calls, user search details, MDS data etc. (just the tip of the iceberg)

How much control/visibility do we have over the cache ?

From what I know, not much apart from disabling/enabling the cache per category and configuring things like expiry time etc (all via oim-config.xml) and of course purging it ;-)

From what I have observed, we cannot

  • introspect the cache
  • validate its contents
  • confirm whether our favorite PurgeCache is in fact working ;-)

Why? Simply because it does not expose the internal interfaces of the OSCache API to us (figuring out how and why is left to you as homework), and as of now I am not aware of how to hook into an in-memory OSCache instance (maybe it's possible?)

So that brings me to another question

Should we plug in our own caching implementation ?

Sounds risky, doesn't it? Well, that's why I haven't heard of people doing it. But it should definitely be theoretically possible

  • Provide a custom implementation of XLCacheProvider interface
  • Package it as a JAR into APP-INF/lib folder within oim.ear (OIM_HOME/server/apps)
  • Change the provider attribute in the cacheConfig tag within oim-config.xml to reflect your custom implementation

Some more thoughts

  • If I decide to play with this, I'll certainly opt for the JCache API [JSR 107] in order to implement it – at least that is a standard API! (see the sketch after this list)
  • Maybe even expose cache metrics as read-only attributes over a RESTful interface? I think this should be useful (from a geeko-meter perspective!)
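Purely as a thought experiment, this is roughly what a JCache (JSR 107) based cache interaction looks like. The category name and entries below are made up, and wiring this into OIM's XLCacheProvider contract is deliberately left out since that interface is internal.

import java.util.concurrent.TimeUnit;
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

public class JCacheDemo {

    public static void main(String[] args) {
        // Picks up whichever JSR 107 provider is on the classpath (Ehcache, Hazelcast, ...)
        CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();

        // A category-style cache with a 30 minute expiry, similar in spirit to an OIM cache category
        MutableConfiguration<String, String> config = new MutableConfiguration<String, String>()
                .setTypes(String.class, String.class)
                .setStatisticsEnabled(true) // metrics could later be exposed over JMX/REST
                .setExpiryPolicyFactory(
                        CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 30)));

        Cache<String, String> lookupCache = cacheManager.createCache("LookupValues", config);

        lookupCache.put("Lookup.Users.Role", "Full-Time");
        System.out.println(lookupCache.get("Lookup.Users.Role"));
    }
}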

What do you think ?? :-)

Until next time… Hack away!

Posted in Java, Oracle Identity Manager | Tagged , , | Leave a comment

Packt celebrates International Day Against DRM !


To demonstrate their continuing support for Day Against DRM, Packt is offering all its DRM-free content at $10 for 24 hours only on May 6th – with more than 3000 eBooks and 100 Videos available on www.packtpub.com.

Hurry up since this is a special 24 hour flash sale where all eBooks and Videos will be $10 until tomorrow.

Cheers !

Posted in Books | Tagged , , , | Leave a comment

Using @Context in JAX-RS [ part 1 ]

JAX-RS provides the @Context annotation to inject a variety of resources into your RESTful services. Some of the most commonly injected components are HTTP headers and HTTP URI related information. Here is a complete list (in no specific order)

  • HTTP headers
  • HTTP URI details
  • Security Context
  • Resource Context
  • Request
  • Configuration
  • Application
  • Providers

Let's look at these one by one with the help of examples

HTTP headers

Although HTTP headers can be injected using the @HeaderParam annotation, JAX-RS also provides the facility of injecting an instance of the HttpHeaders interface (as an instance variable or method parameter). This is useful when you want to iterate over all possible headers rather than injecting a specific header value by name
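A minimal sketch (the resource name and path are mine): HttpHeaders injected as an instance variable, then used to iterate over every request header.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.MultivaluedMap;

@Path("headers")
public class HeaderInspectionResource {

    // Injected by the JAX-RS runtime for every request
    @Context
    private HttpHeaders httpHeaders;

    @GET
    public String dumpHeaders() {
        StringBuilder result = new StringBuilder();
        // All request headers as a MultivaluedMap (header name -> list of values)
        MultivaluedMap<String, String> headers = httpHeaders.getRequestHeaders();
        headers.forEach((name, values) -> result.append(name).append(" : ").append(values).append("\n"));
        return result.toString();
    }
}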

HTTP URI details

UriInfo is another interface whose instance can be injected by JAX-RS (as an instance variable or method parameter). Use this instance to fetch additional details related to the request URI and its parameters (query, path)
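Again a small sketch (names are mine), this time with UriInfo injected as a method parameter to inspect the path and query parameters of the request.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.UriInfo;

@Path("books/{isbn}")
public class BookResource {

    @GET
    public String describeRequest(@Context UriInfo uriInfo) {
        // UriInfo gives access to the absolute path, path parameters and query parameters
        return "Absolute path : " + uriInfo.getAbsolutePath() + "\n"
                + "Path parameters : " + uriInfo.getPathParameters() + "\n"
                + "Query parameters : " + uriInfo.getQueryParameters();
    }
}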

Providers

An instance of the Providers interface can be injected using @Context. One needs to be aware of the fact that this is only valid within an existing provider. A Providers instance enables the current Provider to search for other registered providers in the current JAX-RS container.

Note: Please do not get confused between Provider and Providers.

Provider

  • A JAX-RS Provider is a generic term for any class which supplements/extends the JAX-RS features by implementing standard interfaces exposed by the JAX-RS specification
  • It is annotated using the @Provider annotation for automatic discovery by the run time
  • Examples of JAX-RS providers are – Message Body Reader, Message Body Writer, Exception Mapper and Context Providers.

Providers

Refers to the (injectable) javax.ws.rs.ext.Providers interface which was discussed in this subsection
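As an illustration (the mapper and the delegation scenario are just an example I made up), here is a provider – an ExceptionMapper – that injects Providers and uses it to look up another registered provider at runtime.

import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;
import javax.ws.rs.ext.Providers;

// A provider that uses the injected Providers instance to discover other registered providers
@Provider
public class GenericExceptionMapper implements ExceptionMapper<RuntimeException> {

    @Context
    private Providers providers;

    @Override
    public Response toResponse(RuntimeException exception) {
        // If a more specific mapper is registered for IllegalArgumentException, delegate to it
        if (exception instanceof IllegalArgumentException) {
            ExceptionMapper<IllegalArgumentException> specificMapper =
                    providers.getExceptionMapper(IllegalArgumentException.class);
            // Guard against getting this very mapper back (it also covers RuntimeException)
            if (specificMapper != null && !specificMapper.equals(this)) {
                return specificMapper.toResponse((IllegalArgumentException) exception);
            }
        }
        return Response.serverError().entity(exception.getMessage()).build();
    }
}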

Security Context

Inject an instance of the javax.ws.rs.core.SecurityContext interface (as an instance variable or method parameter) if you want to gain more insight into the identity of the entity invoking your RESTful service. This interface exposes the following information (a short example follows the list)

  • Instance of java.security.Principal representing the caller
  • Whether or not the user is a part of a specific role
  • Which authentication scheme is being used (BASIC/FORM/DIGEST/CERT)
  • Whether or not the request was invoked over HTTPS
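A minimal sketch (the resource and role names are mine) pulling all of the above out of an injected SecurityContext:

import java.security.Principal;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.SecurityContext;

@Path("whoami")
public class WhoAmIResource {

    @GET
    public String describeCaller(@Context SecurityContext securityContext) {
        // Principal representing the authenticated caller (null if unauthenticated)
        Principal caller = securityContext.getUserPrincipal();
        return "Caller : " + (caller != null ? caller.getName() : "anonymous") + "\n"
                + "Is admin? " + securityContext.isUserInRole("admin") + "\n"
                + "Auth scheme : " + securityContext.getAuthenticationScheme() + "\n"
                + "Over HTTPS? " + securityContext.isSecure();
    }
}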

That's all for this part. The rest of the injectables will be covered in the next iteration.

Until then.. Cheers!

Posted in Java, Java EE | Tagged , , , | 5 Comments

Timeout policies for EJBs : how do they help…?

EJB 3.1 introduced timeout related annotations as a part of its API.

  • @AccessTimeout
  • @StatefulTimeout

Let's quickly look at what they are and why they are important

@AccessTimeout

Specifies the time period after which a queued request (waiting for another thread to complete) times out.

When your session bean instances are bombarded with concurrent requests, the EJB container ensures sanity by serializing these calls i.e. blocking other threads until the current thread finishes execution. You can refine this behavior further by using this annotation.

Which beans can leverage this annotation ?

This is applicable for

  • Stateful (@Stateful) beans and
  • Singleton beans (@Singleton) configured with container managed concurrency option (ConcurrencyManagementType.CONTAINER)

Why is it important ?

Since the EJB container serializes concurrent requests, having this annotation ensures that the potential (waiting) threads are not kept blocked forever and helps define a concurrency policy.

Where can I put this annotation?

  • On a class – applies globally to all the methods
  • On a particular method only
  • On a particular method to override the settings of the class level annotation

How to use it ?

You can use the value and unit elements of this annotation to define its behavior

Here are a few options

  • @AccessTimeout(0) – this means that your method does not support concurrent access at all, and the client would end up getting a javax.ejb.ConcurrentAccessException
  • @AccessTimeout(-1) – your method will block indefinitely (I don't think that's a good idea!)
  • @AccessTimeout(5000) – a queued request will wait up to 5000 ms (5 seconds) for its chance to execute before timing out
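A short sketch (the bean and method names are mine) of a container-managed singleton with a class-level default and a method-level override:

import java.util.concurrent.TimeUnit;
import javax.ejb.AccessTimeout;
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

// Container-managed concurrency, so @AccessTimeout applies
@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
// Class-level default: callers queued behind the write lock give up after 5 seconds
@AccessTimeout(value = 5, unit = TimeUnit.SECONDS)
public class InventoryBean {

    private int stockCount;

    @Lock(LockType.WRITE)
    public void updateStock(int delta) {
        stockCount += delta;
    }

    // Method-level override of the class-level setting
    @Lock(LockType.READ)
    @AccessTimeout(value = 500, unit = TimeUnit.MILLISECONDS)
    public int getStockCount() {
        return stockCount;
    }
}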

Few things to note

  • Default value for the unit element is java.util.concurrent.TimeUnit.MILLISECONDS
  • a timeout value of less than -1 is invalid

@StatefulTimeout

Defines the threshold limit for eviction of idle stateful session beans i.e. the ones which have not received client requests for a specific interval

Why is it important ?

Imagine you have a stateful session bean handling a user registration workflow. The user is inactive for a certain time interval (probably doing other stuff). How long would you want your stateful session bean to stay active in memory? Configuring this annotation can help prevent inactive bean instances from hogging the main memory.

Where can I put this annotation?

Unlike @AccessTimeout, this annotation is applied at the class (bean) level only

How to use it ?

You can use the value and unit elements of this annotation to define its behavior

Here are a few options

  • @StatefulTimeout(0) – this means that your bean instance becomes eligible for removal by the container as soon as it is idle
  • @StatefulTimeout(-1) – your bean will never time out (man, that's stubborn!)
  • @StatefulTimeout(15000) – the bean instance will wait for 15000 ms (15 seconds) for client requests before it becomes a candidate for eviction
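A quick sketch (the bean name and timeout value are mine) of a stateful bean that is evicted after 15 idle minutes, with an explicit @Remove method to end the conversation early:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import javax.ejb.Remove;
import javax.ejb.Stateful;
import javax.ejb.StatefulTimeout;

@Stateful
// Idle instances become eligible for removal after 15 minutes without a client call
@StatefulTimeout(value = 15, unit = TimeUnit.MINUTES)
public class RegistrationWorkflowBean {

    private final List<String> completedSteps = new ArrayList<>();

    public void completeStep(String step) {
        completedSteps.add(step);
    }

    // Explicitly ends the conversation instead of waiting for the timeout
    @Remove
    public List<String> finish() {
        return completedSteps;
    }
}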

Few things to note

  • Default value for the unit element is java.util.concurrent.TimeUnit.MILLISECONDS
  • a timeout value of less than -1 is invalid

Cheers !

Posted in Java, Java EE | Tagged , , , , | Leave a comment

Handling time outs in Async requests in JAX-RS

JAX-RS 2.0 provides support for the asynchronous programming paradigm, on the client as well as the server end. This post highlights the time-out feature when executing asynchronous REST requests on the server side using the JAX-RS (2.0) API

Without diving into too many details, here is a quick overview. In order to execute a method in an asynchronous fashion, you just

  • need to specify an instance of AsyncResponse interface as one of the method parameters
  • annotate it using the @Suspended annotation (JAX-RS will inject an instance of AsyncResponse for you whenever it detects this annotation)
  • need to invoke the request in a different thread – the recommended way to do this in Java EE 7 is to use a ManagedExecutorService

Behind the scenes ??

The underlying I/O connection between the server and the client remains open. But there are scenarios where you would not want the client to wait for a response forever. In such cases, you can allocate a time out (threshold)

The default behavior in case of a time out is an HTTP 503 response. If you want to override this behavior, you can implement a TimeoutHandler and register it with your AsyncResponse. If you are using Java 8, you need not bother with a separate implementation class or even an anonymous inner class – you can just provide a lambda expression, since TimeoutHandler is a functional interface with a single abstract method
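Putting it together, here is a small sketch (the resource name and the 5 second threshold are mine) that sets a timeout, overrides the default 503 with a lambda-based TimeoutHandler and hands the work off to a managed executor:

import java.util.concurrent.TimeUnit;
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.core.Response;

@Path("reports")
public class ReportResource {

    // Default Java EE 7 managed executor
    @Resource
    private ManagedExecutorService executor;

    @GET
    public void generateReport(@Suspended AsyncResponse asyncResponse) {
        // Give up after 5 seconds instead of holding the connection open forever
        asyncResponse.setTimeout(5, TimeUnit.SECONDS);

        // Lambda works because TimeoutHandler has a single abstract method
        asyncResponse.setTimeoutHandler(ar -> ar.resume(
                Response.status(Response.Status.SERVICE_UNAVAILABLE)
                        .entity("Report generation timed out, please retry later")
                        .build()));

        executor.execute(() -> {
            String report = buildReport(); // potentially long-running work
            asyncResponse.resume(report);
        });
    }

    private String buildReport() {
        return "report-content";
    }
}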

Cheers!

Posted in Java, Java EE | Tagged , , , , , | Leave a comment

Book review: WildFly Configuration, Deployment, and Administration

This is a review of the book WildFly Configuration, Deployment, and Administration. Thanks to Packtpub for providing a review copy !


This book has been written by Christopher Ritchie. It covers the breadth and depth of WildFly 8 application server and talks about topics such as

  • Installation and configuration of WildFly, its subsystems, Enterprise services and containers as well as Undertow (web container)
  • Domain configuration, application deployment, high availability and security management
  • OpenShift cloud (PaaS) platform

Let’s look at the book contents in greater detail. The book consists of 11 chapters dealing with various facets of Wildfly.

Chapter 1: Installing Wildfly

This lesson is a gentle introduction to Wildfly. It’s aimed at helping you get up and running with an environment so that you can work on the server. It discusses basic topics and covers

  • Details around installation of Java and Wildfly application server
  • Basic administrative operations such as startup, shutdown and reboot
  • Set up of Eclipse (for Java EE) and JBoss Tools (Eclipse plugin)
  • Wildfly file system (server installation directory) along with kernel and server modules

For me, the highlights of the chapter were the sections around the Wildfly kernel and modules. They provide a great launching pad for upcoming topics!

Chapter 2: Configuring the Core WildFly Subsystems

This chapter provides an overview of the Wildfly configuration, which is composed of several subsystems configured as part of the standalone.xml or domain.xml configuration file

  • Server subsystems such as Extensions, Profiles, System properties, Deployments etc
  • Thread pool system – discusses configuration revolving around thread factory and thread pools such as bounded queue thread pool, scheduled thread pool, queueless thread pool etc
  • Delves into the server logging configuration and discusses different handlers of the logging subsystem – console, async, size-rotating, custom handlers and more

Chapter 3: Configuring Enterprise Services

This lesson discusses Wildfly subsystems corresponding to Java EE services and their respective configurations

  • JDBC driver and Datasource configuration
  • EJB container configuration including session, stateless, MDB and EJB timers related details
  • Configuring the JMS/Messaging system
  • JTA (transaction manager) configuration
  • Tuning and configuring container resources related to Concurrency utilities API – Managed thread factory, Managed Executor service and managed scheduled executor service

Chapter 4: The Undertow Web server

As the name indicates, this chapter introduces the basic concepts of Undertow (new web container implementation in Wildfly) as well as its configuration related artifacts.

  • Quick tour of the Undertow component architecture
  • Configuration of server listeners and hosts along with servlet container components
  • Walkthrough of creation and deployment of a Maven based Java EE 7 project along with components like JSF, EJB, CDI, JPA

Chapter 5: Configuring a Wildfly Domain

Chapter 5 is all about the concept of domains in Wildfly and the nuances related to their configuration. It concludes with a section which helps the reader configure a custom domain and apply the concepts introduced in the lesson

  • An overview of domains and related configuration in Wildfly and administrative basics such as start/stop
  • Configuration related to core facets of a domain – host.xml, domain.xml, domain controller etc
  • Tuning domain specific JVM options
  • Configuration of a test (custom) domain

Chapter 6: Application Structure and Deployment

The chapter discusses important topics such as application package structure of different Java EE components and how to deploy various flavors of Java EE applications on Wildfly

  • Review of application packaging options – JAR, EAR, WAR
  • Deployment of applications on Wildfly – standalone and domain mode using both command line and Admin console
  • The chapter ends with a solid section covering Java EE classloading requirements and how Wildfly handles them

Chapter 7: Using the Management Interfaces

Again, as evident from the name of the chapter itself, it's not surprising that this lesson covers the management options available in Wildfly

  • Usage of the command line interface for execution of various tasks
  • The Wildfly Web Console and the options it offers

Chapters 8,9: Clustering and Load-balancing

Chapters 8,9 cover Wildfly clustering, HA and load balancing capabilities in detail.

  • Basic setup and configuration of Wildfly clusters
  • Infinispan subsystem configuration along with details of Hibernate cache setup
  • Clustering of JMS system, EJBs and JPA entities
  • Installation and configuration of Apache web server and modules like mod_jk, mod_proxy and mod_cluster
  • Using CLI for mod_cluster and web context management as well as troubleshooting tips

Chapter 10: Securing Wildfly

This chapter covers

  • Introduction to the Wildfly security subsystem and details of commonly used Login Modules – Database and LDAP
  • Securing Java EE components – Web tier, EJBs and web services
  • Protecting the Web Admin console along with configurations related to transport layer security

Chapter 11: WildFly, OpenShift and Cloud Computing

This lesson is dedicated to cloud computing as a technology and to how OpenShift fits into the picture

  • Overview of Cloud Computing – basics, advantages, types, options
  • Introduction to OpenShift and setup process of the client related tools
  • Walkthrough of cartridge installation and the process of building and deploying a sample application to OpenShift
  • Log management and basic administrative tasks using CLI tools – start, stop, restart etc.
  • Using OpenShift with Eclipse and instructions on how to scale your applications

Standout features

Some of the points which in my opinion were great

  • We often spend years working with containers (application servers, web servers etc) without really knowing what’s going on beneath. There is a reason behind this – the core concepts (application kernels, modules, container configuration, threading setup etc) are pretty complex and often not that well explained. This book does a great job at covering these topics and presenting the core of Wildfly in a simple manner
  • Intuitive and clear diagrams (lots of them!)
  • Enjoyed the chapter on OpenShift – provides a great platform for exploring PaaS capabilities and getting up and running with OpenShift

Conclusion

This book is a must-have for someone looking to gain expertise on WildFly 8. Since WildFly 8 is Java EE 7 compliant, it also means that the reader will benefit by picking up a lot of Java EE fundamentals as well as cutting-edge features of Java EE 7. Grab your copy from Packtpub!

Posted in Books, Java, Java EE | Tagged , , , | Leave a comment