Friday, December 25, 2015

How to create a heap dump of your Java application

The heap is the area of memory where the JVM keeps all your runtime objects. The JVM creates a dedicated space for the heap at startup, whose initial size can be controlled via the JVM option -Xms<size>, eg: -Xms100m (this allocates 100 MB for the heap). The JVM is capable of increasing and decreasing the size of the heap [1] based on demand, and another JVM option sets the maximum size of the heap: -Xmx<size>, eg: -Xmx6g (this allows the heap to grow up to 6 GB)

The JVM automatically performs Garbage Collection (GC) when it detects that it is about to reach the heap size limit. But GC can only clean up objects that are eligible for collection. If the JVM cannot allocate the required memory even after GC, it will crash with: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

If your Java application crashes in production due to an issue like this, you can't just ignore the incident and restart your application. You have to analyze what caused the JVM to crash, and take the necessary actions to prevent it from happening again. This is where the JVM heap dump comes into play.

JVM heap dumps are disabled by default; you have to enable them explicitly by providing the following JVM option: -XX:+HeapDumpOnOutOfMemoryError

The sample code below creates multiple large arrays of chars and keeps the references in a list, which makes those large arrays ineligible for garbage collection.

package com.test;

import java.util.ArrayList;
import java.util.List;

public class TestClass {
    public static void main(String[] args) {
        List<Object> list = new ArrayList<Object>();
        for (int i = 0; i < 1000; i++) {
            list.add(new char[1000000]);
        }
    }
}

If you run the above code with the following command lines:

1. java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx3g com.test.TestClass

Result: The program runs and exits without any error. The heap starts at 10 MB and grows as needed. The program needs less than 3 GB of memory, so it completes without any error.

2. java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx1g com.test.TestClass

Result: The JVM crashes with an OutOfMemoryError, and a heap dump file (java_pid<pid>.hprof) is written to the working directory.
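The arithmetic behind these two results is easy to check: the program retains 1000 char arrays of 1,000,000 elements each, and a Java char is 2 bytes (array object headers and the ArrayList itself add a little more on top).

```java
// Back-of-the-envelope footprint of the retained arrays in the sample above.
public class HeapMath {
    public static void main(String[] args) {
        long bytes = 1000L * 1_000_000L * 2; // arrays x elements x bytes-per-char
        System.out.println(bytes / (1024 * 1024) + " MB"); // prints 1907 MB
    }
}
```

Roughly 2 GB is retained, which is why the -Xmx3g run completes while the -Xmx1g run crashes.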

What would be the result if we change the above code slightly, to remove each char array from the list right after adding it?

package com.test;

import java.util.ArrayList;
import java.util.List;

public class TestClass {
    public static void main(String[] args) {
        List<Object> list = new ArrayList<Object>();
        for (int i = 0; i < 1000; i++) {
            list.add(new char[1000000]);
            list.remove(0);
        }
    }
}

3. java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx10m com.test.TestClass

Result: This code runs without any issue, even with a heap of 10 MB, because each array becomes eligible for GC as soon as it is removed from the list.

1. There is no impact on your application if you enable heap dumps in the JVM. So, it is better to always enable -XX:+HeapDumpOnOutOfMemoryError in your applications.

2. You can create a heap dump of a running Java application with jmap, which comes with the JDK. Creating a heap dump of a running application causes the application to pause for a while, so it is not recommended on a production system (unless there is an extreme situation).
eg: jmap -dump:format=b,file=test-dump.hprof [PID]

3. The above sample codes are just for understanding the concept.
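For point 2 above, a typical session would look something like the following (the PID shown is illustrative; jps, which also ships with the JDK, lists the running JVMs):

jps -l
# 5982 com.test.TestClass

jmap -dump:format=b,file=test-dump.hprof 5982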



The following are a few other important flags that could be useful when generating heap dumps:

-XX:OnOutOfMemoryError="kill -9 %p" : runs a user-supplied command when the first out-of-memory error is thrown (here, killing the process; %p is replaced with the PID)
-XX:+ExitOnOutOfMemoryError : When you enable this option, the JVM exits on the first occurrence of an out-of-memory error. It can be used if you prefer restarting an instance of the JVM rather than handling out of memory errors [2].
-XX:+CrashOnOutOfMemoryError : If this option is enabled, when an out-of-memory error occurs, the JVM crashes and produces text and binary crash files (if core files are enabled) [2].
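Putting it together, a startup command for a production service might combine several of these flags (the sizes and the dump path are illustrative; -XX:HeapDumpPath is the companion flag that controls where the dump file is written):

java -Xms512m -Xmx2g \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/dumps \
     -XX:+ExitOnOutOfMemoryError \
     com.test.TestClass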


Wednesday, December 23, 2015

The core of the next-generation WSO2 Carbon platform : WSO2 Carbon Kernel 5.0.0

A complete revamp of the heart of all WSO2 products, WSO2 Carbon Kernel 5.0.0, was released on 21 Dec 2015.

Previous versions of the WSO2 Carbon Kernel (1.x.x to 4.x.x) shared a broadly similar architecture and were tightly coupled with relatively old technologies (Axiom, Axis2, SOAP, etc.). That is what made us re-think and re-architect everything from the ground up, and come up with WSO2 Carbon Kernel 5.0.0.

The new kernel is armed with the latest technologies and patterns. It provides the key functionality for server developers on top of the underlying OSGi runtime.

Key Features
  • Transport Management Framework
  • Logging Framework with Log4j 2.0 as the Backend
  • Carbon Startup Order Resolver
  • Dropins Support for OSGi Ready Bundles
  • Jar to Bundle Conversion Tool
  • Artifact Deployment Engine
  • Pluggable Runtime Support

You can download the product from [1], and find more information on [2].


Friday, December 18, 2015

Logging with SLF4J

The Simple Logging Facade for Java (SLF4J) serves as a simple facade or abstraction for various logging frameworks. It allows you to code against a single dependency, namely "slf4j-api.jar", and to plug in the desired logging framework at runtime.

It is very simple to use SLF4J logging in your application. You just need to create an SLF4J logger and invoke its methods.

Following is a sample code,

package com.test;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {
    private static final Logger logger = LoggerFactory.getLogger(Main.class);

    public static void main(String[] args) {
        logger.info("Testing 123");
    }
}

You have to add the following dependency to your pom.xml file.
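The SLF4J API artifact is published under these coordinates (the version shown is illustrative; use the latest available):

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.12</version>
</dependency>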


This is the bare minimum configuration you need to enable SLF4J logging. But if you run this code, you will get a warning similar to the one below.

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

This is because SLF4J can't find a binding on the class path. By default, if no binding is found, it binds to the no-operation (NOP) logger implementation.

Using java.util.logging

If you want to use the binding for java.util.logging in your code, you only need to add the following dependency to your pom.xml file.
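The java.util.logging binding is published as slf4j-jdk14 (the version is illustrative and should match your slf4j-api version):

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-jdk14</artifactId>
    <version>1.7.12</version>
</dependency>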


This will output the following,

INFO: Testing 123

Using Log4j

If you want to use the binding for log4j version 1.2 in your code, you only need to add the following dependency to your pom.xml file.
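The log4j 1.2 binding is published as slf4j-log4j12, which pulls in log4j itself transitively (the version is illustrative and should match your slf4j-api version):

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.12</version>
</dependency>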


Log4j needs at least one appender to log to. Hence, you have to provide the log4j configuration properties.

Create the file "log4j.properties" in the resources directory of your project and add the following to it.

# Set root logger level to DEBUG and its only appender to A1.
log4j.rootLogger=DEBUG, A1

# A1 is set to be a ConsoleAppender.
log4j.appender.A1=org.apache.log4j.ConsoleAppender

# A1 uses PatternLayout.
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

This will output the following,

0    [main] INFO  com.test.Main  - Testing 123


WSO2GREG : Upload files

WSO2 Governance Registry (WSO2GREG) is a metadata repository. It supports storing, cataloging, indexing, managing, and governing your enterprise metadata related to any kind of asset.

Uploading files directly into GREG is not recommended. The recommended approach is to upload the files to a file server and keep a link to the file as metadata.

But there may be situations where you really need to keep the files in GREG itself. Let's see how you can do that in GREG.

1. Download the file.rxt file from [1].
2. Create a new artifact type from the WSO2GREG management console. Extensions-> Configure->Artifact Types->Add new Artifact
3. Download the "file" extension from [2]
4. Copy the "file" directory to "[GREG_HOME]/repository/deployment/server/jaggeryapps/publisher/extensions/assets/"

This will add a new item, "file", to the menu, and you should be able to upload and download files in WSO2GREG.

If you need to make any associations to the "files" you upload here, you have to do the following to list the files in the associations view.

Let's assume you want to link these files to the "soapservice" artifact. Then do the following.

1. Edit the association types for "soapservice" in the [GREG_HOME]/repository/conf/governance.xml file to include the file details.

Following is a sample configuration for Association type="soapservice", once you add the "file" type.

<Association type="soapservice">


Sunday, December 13, 2015

WSO2GREG : Categorized view of your assets

Let's assume you have five services, namely Service1, Service2, Service3, Service4, and Service5. If you just create these services in WSO2GREG, all of them will be displayed as follows.

Sometimes it is necessary to group assets based on some criteria. Let's say Service1 and Service2 belong to DepartmentA, and Service3, Service4, and Service5 belong to DepartmentB.

If you need a categorized view based on the department, follow the steps below.

1. Add the following field to the "soapservice" artifact. You can edit it via the management console: Extensions->Configure->Artifact Types->soapservice->Edit

<field type="options">
  <name label="Category">Category</name>
  <values>
    <!-- list the categories you need; these values are from this example -->
    <value>DepartmentA</value>
    <value>DepartmentB</value>
  </values>
</field>

2. Download the asset.js file from the following location and copy it to [GREG_HOME]/repository/deployment/server/jaggeryapps/publisher/extensions/assets/soapservice (replacing the existing asset.js file)

3. Restart the server

4. You can now see that a new dropdown appears in front of the search box.

5. Go to each of those services and edit it to set the corresponding department in the "Category" field.

6. You can now view and search by the selected category.

Friday, December 11, 2015

WSO2GREG Publisher and Store

The new WSO2 Governance Registry (WSO2GREG) comes with two brand new applications:

1. Governance Center : Publisher
2. Governance Center : Store

These two will be the main UIs for dealing with assets (asset governance). WSO2 no longer encourages using the management console for artifact management. (You can use the management console to set up assets, lifecycles, users, etc., but not for artifact management.)

Governance Center : Publisher

Governance Center : Store

Wednesday, July 29, 2015

Binding a process to a CPU in Ubuntu

In this post I'm going to show you how to bind a process to a particular CPU in Ubuntu. Usually the OS manages processes and schedules threads; there is no guarantee about which CPU your process runs on, as the OS schedules it based on resource availability.

But there is a way to specify the CPU and bind your process to it.

taskset -cp <CPU ID | CPU IDs> <Process ID>

Following is a sample that demonstrates how you can do that.

1. Sample code that consumes 100% CPU (for demo purposes)

class Test {
    public static void main(String args[]) {
        int i = 0;
        while (true) {
            i++;
        }
    }
}

2. Compile and run the above simple program

javac Test.java
java Test

3. Use 'htop' to view the CPU usage

In the above screenshot you can see that the sample process is running on CPU 2. But it's not guaranteed that it will always remain on CPU 2; the OS might assign it to another CPU at some point.

4. Run the following command. It will bind process 5982 to the 5th CPU (CPU numbering starts at zero, hence the index 4 refers to the 5th CPU).

taskset -cp 4 5982

In the above screenshot you can see that the 100% CPU usage is now indicated on CPU 5.
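You can verify the binding at any time by querying the process's current affinity with the same tool (the PID is from the example above; the exact output wording may vary with your util-linux version):

taskset -cp 5982
# pid 5982's current affinity list: 4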

Monday, July 13, 2015

WSO2 BAM : How to change the scheduled time of a script in a toolbox

WSO2 Business Activity Monitor (WSO2BAM) [1] is a fully open source, complete solution for monitoring and storing large volumes of business-related activities, and for understanding business activities within SOA and cloud deployments.

WSO2 BAM comes with a predefined set of toolboxes.

A toolbox consists of several components:
1. Stream definitions
2. Analytics scripts
3. Visualizations components

None of the above 3 components is mandatory.
You can have a toolbox which has only stream definitions and analytics scripts, but no visualization components.

In WSO2 BAM, the toolbox always takes precedence. This means that if you manually change anything related to a component published via a toolbox, it will be overridden once the server is restarted.

If you update,
1. Schedule time
This will update the schedule time, but the newly updated value will only be effective until the next restart. It will not be persisted; once the server is restarted, the schedule time reverts to the original value from the toolbox.

2. Stream definition
If you change anything related to a stream definition, it might cause consistency issues. When the server is restarted, it will find that a stream definition already exists with the given name but with different configurations, so an error will be logged.

So it is highly discouraged to manually modify components deployed via a toolbox.

The recommended way to change anything associated with a toolbox, is to,
1. Unzip the toolbox.
2. Make the necessary changes.
3. Zip the files again.
4. Rename the file as <toolbox_name>.tbox
5. Redeploy the toolbox

So, if you need to change the scheduled time of the Service_Statistics_Monitoring toolbox,
get a copy of the Service_Statistics_Monitoring.tbox file residing in the [BAM_HOME]/repository/deployment/server/bam-toolbox directory.

Unzip the file, and open the file Service_Statistics_Monitoring/analytics/

Set the following configuration according to your requirement (the Quartz cron expression below runs the script every 20 minutes):
analyzers.scripts.script1.cron=0 0/20 * * * ?

Create a zip file and rename it to Service_Statistics_Monitoring.tbox

And redeploy the toolbox.
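The steps above can be sketched as shell commands (the name of the properties file under analytics/ depends on the toolbox contents, so it is left as a comment here):

unzip Service_Statistics_Monitoring.tbox -d Service_Statistics_Monitoring

# edit the analyzer properties file under Service_Statistics_Monitoring/analytics/
# (set analyzers.scripts.script1.cron to the schedule you need)

cd Service_Statistics_Monitoring
zip -r ../Service_Statistics_Monitoring.tbox .
cd ..

cp Service_Statistics_Monitoring.tbox [BAM_HOME]/repository/deployment/server/bam-toolbox/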

Now your change is embedded in the toolbox, and each time the toolbox is deployed it will have the modified value.


Tuesday, July 7, 2015

Publishing WSO2 APIM Statistics to WSO2 BAM

WSO2 API Manager (WSO2APIM) [1] is a fully open source, complete solution for creating, publishing and managing all aspects of an API and its lifecycle.

WSO2 Business Activity Monitor (WSO2BAM) [2] is a fully open source, complete solution for monitoring and storing large volumes of business-related activities, and for understanding business activities within SOA and cloud deployments.

Users can use these two products together, which collectively gives total control over the management and monitoring of APIs.

In this post I'm going to explain how APIM stat publishing and monitoring happens in WSO2APIM and WSO2BAM.

Configuring WSO2 APIM to publish statistics

You can find more information on setting up statistics publishing in [3]. Once you complete the configuration, it should look like the example below.


    <!-- Enable/Disable the API usage tracker. -->
    <BAMServerURL>tcp://<BAM host IP>:7614/</BAMServerURL>
    <!-- JNDI name of the data source to be used for getting BAM statistics. This data source should
        be defined in the master-datasources.xml file in conf/datasources directory. -->


    <description>The datasource used for getting statistics to API Manager</description>
    <definition type="RDBMS">
            <validationQuery>SELECT 1</validationQuery>

Configuring WSO2 BAM

You can find more information on setting up statistics publishing in [3].

Note that you only need to copy API_Manager_Analytics.tbox into the super tenant space. (No need to do any configuration in the tenant space.)

The above diagram illustrates how the stat data is published and eventually viewed through the APIM statistics view.

1. Statistics about APIs from all tenants are published to WSO2 BAM via a single data publisher.

2. API_Manager_Analytics.tbox has the stream definitions and Hive scripts needed to summarize statistics. These Hive scripts are executed periodically, and the summarized data is pushed into an RDBMS.

3. When you visit the statistics page in WSO2 APIM, it retrieves the summarized statistics from the RDBMS and shows them to you.

Note: If you need to view the statistics of an API deployed in a particular tenant, log in to WSO2 APIM as that tenant and view the statistics page.
(You don't need to do any additional configuration to support tenant-specific statistics.)


Monday, April 6, 2015

Error occurred while applying patches {org.wso2.carbon.server.extensions.PatchInstaller}

I have seen people complaining that WSO2 servers log the following error message at server startup.

[2015-04-06 15:48:57,572] ERROR {org.wso2.carbon.server.extensions.PatchInstaller} -  Error occurred while applying patches Destination '/home/jayanga/WSO2/wso2am-1.7.0/repository/components/plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.1.200.v20120522-1813' exists but is a directory
 at org.wso2.carbon.server.util.FileUtils.copyFile(
 at org.wso2.carbon.server.util.PatchUtils.copyNewPatches(
 at org.wso2.carbon.server.extensions.PatchInstaller.perform(
 at org.wso2.carbon.server.Main.invokeExtensions(
 at org.wso2.carbon.server.Main.main(
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
 at java.lang.reflect.Method.invoke(
 at org.wso2.carbon.bootstrap.Bootstrap.loadClass(
 at org.wso2.carbon.bootstrap.Bootstrap.main(

The main reason for such an error message is erroneous entries in the patch metadata files.

This might happen if you forcefully stop the server just after starting it.

When the server starts up, it copies the patch files. If the server is forcefully stopped at that time using [Ctrl+C], the patching process stops immediately and the patch metadata can get corrupted.

You can get rid of this issue by removing the corrupted patch-related metadata manually and restarting the server, so that the server applies all the patches from the beginning.

  1. Remove [CARBON_HOME]/repository/components/patches/.metadata
  2. Restart the server (do not interrupt it while it is starting up)
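The two steps can be sketched as follows (assuming a Linux installation; the startup script name may vary between WSO2 products):

rm -rf [CARBON_HOME]/repository/components/patches/.metadata
sh [CARBON_HOME]/bin/wso2server.sh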

Thursday, March 26, 2015

Everyday Git (Git commands you need in your everyday work)

Git [1] is one of the most popular version control systems. In this post I am going to show you how to work with GitHub [2]. On GitHub there are thousands of public repositories; if you are interested in a project, you can start working on it and contributing to it. The following are the steps and commands you will use while working with GitHub.

1. Forking a repository
This is done via the GitHub [2] web site.

2. Clone a new repository
git clone <url-of-your-fork>

3. Get updates from the remote repository (origin/master)
git pull origin master

4. Push the updates to the remote repository (origin/master)
git push origin master

5. Add updated files to staging
git add <file>

6. Commit the changes to your local repository
git commit -m "Modifications to" --signoff

7. Set the upstream repository
git remote add upstream <url-of-the-upstream-repository>

8. Fetch from upstream repository
git fetch upstream

9. Fetch from all the remote repositories
git fetch --all

10. Merge new changes from upstream repository for the master branch
git checkout master
git merge upstream/master

11. Merge new changes from upstream repository for the "otherbranch" branch
git checkout otherbranch
git merge upstream/otherbranch

12. View the history of commits
git log

13. If needed to discard some commits in the local repository
First find the commit ID to which you want to revert, then use the following command
git reset --hard #commitId

14. To tag a particular commit
git checkout #commitid
git tag -a v1.1.1 -m 'Tagging version v1.1.1'
git push origin --tags


WSO2 Carbon : Get notified just after the server start and just before server shutdown

WSO2 Carbon [1] is a 100% open source, integrated, and componentized middleware platform which enables you to develop your business and enterprise solutions rapidly. WSO2 Carbon is based on the OSGi framework [2]; it inherits modularity and dynamism from OSGi.

In this post I am going to show you how to get notified, when the server is starting up and when the server is about to shut down. 

In OSGi, the bundle startup sequence is random, so you can't rely on the bundle startup order.

There are real-world scenarios where you have dependencies among bundles, and hence need to perform some actions before other dependent bundles are deactivated at server shutdown.

Eg: Let's say you have to send messages to an external system. Your message-sending module uses your authentication module to authenticate each request before sending it to the external system, and it tries to send all buffered messages before the server shuts down.

The bundle unloading sequence in OSGi is also not guaranteed. So what would happen if your authentication bundle is deactivated before your message-sending bundle? In that case, the message-sending module can't send the remaining messages.

To support these types of scenarios, the WSO2 Carbon framework provides special OSGi services which can be used to detect server startup and server shutdown.

1. How to get notified of server startup

Implement the interface org.wso2.carbon.core.ServerStartupObserver [3], and register it as a service via the bundle context.

When the server is starting, you will receive notifications via completingServerStartup() and completedServerStartup()
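A minimal implementation could look like the following sketch (the class name CustomServerStartupObserver is illustrative; the interface and its two callbacks come from the Carbon kernel):

package com.test;

import org.wso2.carbon.core.ServerStartupObserver;

public class CustomServerStartupObserver implements ServerStartupObserver {

    @Override
    public void completingServerStartup() {
        // called while the server is completing its startup
    }

    @Override
    public void completedServerStartup() {
        // called once the server has fully started
    }
}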

2. How to get notified of server shutdown

Implement the interface org.wso2.carbon.core.ServerShutdownHandler [4], and register it as a service via the bundle context.

When the server is about to shut down, you will receive the notification via invoke()


protected void activate(ComponentContext componentContext) {
    try {
        componentContext.getBundleContext().registerService(
                ServerStartupObserver.class.getName(), new CustomServerStartupObserver(), null);
    } catch (Throwable e) {
        log.error("Failed to activate the bundle ", e);
    }
}