In the coming days I'll write a guide on how to install, configure, and use all the features described in this post.
There are a lot of new features in WLS 12.2.1 (my favorite is Multitenancy).
In this post I'll describe how easy it is to perform a "scale up" and increase the number of managed servers in a cluster when the traffic/throughput suddenly increases, so we can always guarantee the same quality of service with the same performance and no risk of managed servers crashing.
The next image describes a simple WebLogic Domain, with an Admin Server, a Dynamic Cluster composed of 'N' WLS Managed Servers, and a Load Balancer as the front end.
There are two consoles for the WLS Admin Server:
- console http://myserver:port/console
- em http://myserver:port/em (this one has been completely rewritten and has a different look & feel)
RESTful management has also been improved.
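As a taste of the RESTful management API, here is a minimal sketch in Python that builds an authenticated GET request for the server runtimes resource. The host, port, and credentials are placeholders, and the request is only constructed here, not sent, so nothing needs to be running:

```python
# Sketch: build (but don't send) a request to the 12.2.1 REST management API.
# Host, port, and credentials below are placeholders, not a real server.
import base64
import urllib.request

def build_server_runtimes_request(host, port, user, password):
    """Prepare a GET for the serverRuntimes resource of the management API."""
    url = ("http://%s:%d/management/weblogic/latest/domainRuntime/serverRuntimes"
           % (host, port))
    req = urllib.request.Request(url)
    # The REST API uses HTTP Basic authentication and returns JSON.
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/json")
    return req

req = build_server_runtimes_request("myserver", 7001, "weblogic", "welcome1")
print(req.full_url)
```

Sending the prepared request with `urllib.request.urlopen(req)` against a live Admin Server would return the runtime state of every managed server as JSON.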
OTD (Oracle Traffic Director) is now also available outside of Engineered Systems (like Exalogic).
OTD is a powerful load balancer with very good performance and low memory consumption. It runs on WLS and can be updated from the Admin Server of a 12.2.1 domain, so if you create new clusters or add new managed servers you do not need to upgrade/change the configuration in OTD.
For OTD management you can use only the em console.
In conclusion, if the number of managed servers in a dynamic cluster increases or decreases, you no longer need to update the configuration of the load balancer.
Scaling a cluster up or down used to be a manual operation, performed by an administrator as a result of monitoring decisions.
Now we can automate this operation by configuring a Diagnostic Module.
The diagnostic module "communicates" with the WLDF (WebLogic Diagnostic Framework) and knows the state of each Managed Server, including problems like insufficient memory, bad performance, and so on.
[if you are interested in WLDF, try its dashboard at http://myserver:myport/console/dashboard]
We can write different types of rules/policies in the diagnostic module, and the evaluation of a rule triggers an action like a Scale Up or Scale Down.
So, for example, we can create a rule that evaluates the "Idle Threads Count" and performs a Scale Up or Scale Down action.
It is also possible to create calendar-based policies.
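The idle-thread rule mentioned above boils down to a simple decision: if almost no threads are idle the cluster is under pressure, and if most threads are idle capacity can be reclaimed. Here is a minimal Python sketch of that logic; the thresholds and function names are illustrative, not actual WLDF policy syntax:

```python
# Sketch of the scale-up/down decision a WLDF policy on "Idle Threads Count"
# encodes. Watermarks are illustrative, not real WLDF configuration.

def scaling_action(idle_threads, total_threads,
                   low_watermark=0.10, high_watermark=0.50):
    """Return 'scaleUp', 'scaleDown', or None based on the idle-thread ratio."""
    idle_ratio = idle_threads / float(total_threads)
    if idle_ratio < low_watermark:    # almost no idle threads: under pressure
        return "scaleUp"
    if idle_ratio > high_watermark:   # mostly idle: capacity can be reclaimed
        return "scaleDown"
    return None                       # within the normal range: no action

print(scaling_action(2, 50))   # few idle threads -> "scaleUp"
print(scaling_action(40, 50))  # mostly idle -> "scaleDown"
```

In the real configuration the rule is written as a policy expression over WLDF metrics, and the resulting action adds or removes managed servers in the dynamic cluster.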
Now an administrator can prevent overloading database capacity on a scale-up event through the new data source "interceptor" feature.
In the Data Source Interceptor you can set the maximum number of connections allowed on a database, and the interceptor can block a Scale-Up request that would exceed that limit.
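The check the interceptor performs can be sketched in a few lines of Python. The numbers and the function name are illustrative; the real interceptor is configured in WebLogic, not coded by hand:

```python
# Sketch of the check a data source interceptor performs before a scale-up:
# reject the request if the new server's connection pool would push the
# database past its configured quota. All numbers here are illustrative.

def allow_scale_up(current_db_connections, connections_per_new_server,
                   db_connection_quota):
    """Return True only if the projected total stays within the DB quota."""
    projected = current_db_connections + connections_per_new_server
    return projected <= db_connection_quota

# 180 connections in use, each new managed server's pool needs 30 more,
# and the database is sized for at most 200 concurrent connections:
print(allow_scale_up(180, 30, 200))  # False -> the interceptor vetoes the scale-up
```

With a quota of 250 the same request would be allowed, so the database capacity, not the cluster size, becomes the hard limit on scaling.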
There is also a script interceptor that provides call-out hooks where you can supply custom shell scripts, or other executables, to be called when a scaling event happens on a cluster.
In this way, you can write, for example, a script to interact with 3rd-party virtual machine hypervisors to add virtual machines prior to scaling up, or remove/reassign virtual machines after scaling down.
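The flow around those call-out hooks can be sketched as follows. This is a Python illustration of the ordering (pre-hook, scale, post-hook), not the actual interceptor implementation; the hook commands here are placeholders for your own provisioning scripts:

```python
# Sketch of what a script interceptor does around a scaling event: run a
# pre-scale executable, proceed only if it succeeds, then run a post-scale
# hook. The hook commands are placeholders ("true" always succeeds).
import subprocess

def run_hook(cmd):
    """Run a call-out script; return True if it exited with status 0."""
    return subprocess.call(cmd, shell=True) == 0

def scale_up_with_hooks(do_scale_up,
                        pre_hook="true",    # e.g. a script that provisions a VM
                        post_hook="true"):  # e.g. a script that registers it
    if not run_hook(pre_hook):              # abort if provisioning failed
        return False
    do_scale_up()                           # the actual cluster scale-up
    run_hook(post_hook)
    return True

print(scale_up_with_hooks(lambda: print("scaling up...")))
```

Replacing `pre_hook` with a script that talks to your hypervisor gives exactly the "add virtual machines before scaling up" pattern described above.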
With Multitenancy you can divide a single JVM into separate zones named Domain Partitions.
Each Domain Partition can manage its own heap memory, CPU, and open files.
Each Domain Partition has its own security realm, JNDI tree, web context root, and so on.
Each Domain Partition has its own Resource Group, in which you can configure JDBC, JMS, and so on.
Each Domain Partition has its own Virtual Target that is transmitted to OTD, so for each new domain partition you do not need to reconfigure the Load Balancer.
An example URL for a Domain Partition is http://myserver:port/myVirtualTarget/myApplication
For example, if we have a cluster with two Managed Servers (M1 & M2), and we create two domain partitions (Dp1 & Dp2) and deploy App1 in Dp1 and App2 in Dp2, we can call the applications through their virtual targets, for example http://myserver:port/dp1Target/App1 and http://myserver:port/dp2Target/App2 (the virtual target names here are just examples).
Obviously you can use the load balancer :)
The advantages of Domain Partitions are many; for example:
- you can export a domain partition from the console as a zip file; then you can import and replicate the domain partition anywhere in a simple way.
- if there are problems with a domain partition, for example because it is consuming a lot of memory (its own heap), you can stop that partition without stopping the entire JVM, so the other partitions are not affected.
- if you stop a domain partition on one managed server, the application continues to work in the same domain partition on the other managed servers.
- you can deploy the same application in the same JVM (same WLS managed server) in two different domain partitions; for example, if you have a Managed Server M1 and you create two Domain Partitions Dp1 and Dp2, you can have two independent copies of the same application on M1, one reachable through Dp1's virtual target and one through Dp2's.
This is useful, for example, if you want to reproduce an error in a specific environment, or if you want to separate different types of users per application (remember that each Domain Partition can have its own security realm) or use different databases (each domain partition can have its own database or pluggable database).
After a scale-up action, we find a new WLS managed server with the same configuration as the others in the same cluster. This server is automatically started and activated, and OTD sends new traffic to it.
There are also Resource Consumption Management (RCM) policies for domain partitions.
You can set the amount of CPU, heap, or open files that each domain partition in each JVM can consume.
The result of evaluating the policies set in RCM, for each JVM, can be a Notify event or an action on the domain partition like Slow, Fail, or Shutdown.
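The way an RCM policy maps a partition's resource usage to one of those outcomes can be sketched like this. The trigger levels mirror the actions listed above, but the thresholds and names are illustrative, not real RCM configuration:

```python
# Sketch of how an RCM policy maps a partition's resource usage to an action.
# Each trigger pairs a threshold with an action; the highest threshold the
# current usage has crossed wins. Thresholds below are illustrative.

def rcm_action(usage, triggers):
    """Return the action of the highest trigger crossed by usage, or None."""
    action = None
    for threshold, name in sorted(triggers):  # evaluate from lowest threshold up
        if usage >= threshold:
            action = name                     # keep the most severe crossed trigger
    return action

# Hypothetical heap triggers (in MB) for one domain partition:
heap_triggers = [(1024, "notify"), (1536, "slow"), (2048, "shutdown")]
print(rcm_action(1600, heap_triggers))  # -> "slow"
print(rcm_action(900, heap_triggers))   # -> None
```

In practice you would define one such set of triggers per resource (CPU, heap, open files) and per partition, and the JVM enforces them without the partitions being aware of each other.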
I'll close this post with a question for all the people who are building clusters or similar architectures with Docker:
Even though WLS is certified on Docker, don't you think that Domain Partitions are better?