Tuesday, January 24, 2017

How to increase max connections in MySQL

When you try to open a large number of connections to a MySQL server, you might get the following error:

com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Data source rejected establishment of connection,  message from server: "Too many connections"

There are two settings governing the maximum number of connections that can be created:

  1. max_user_connections
  2. global.max_connections

1. max_user_connections

The max_user_connections is a user-level parameter that you can set for each user. To let a user create an unlimited number of connections, set this value to zero (0).

First, view the current max_user_connections:

SELECT max_user_connections FROM mysql.user WHERE user='my_user' AND host='localhost';

Then set it to zero:


GRANT USAGE ON *.* TO my_user@localhost MAX_USER_CONNECTIONS 0;
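
To verify that the change took effect, you can check the user's grants and re-run the query above ('my_user'@'localhost' is just the example account used above):

SHOW GRANTS FOR 'my_user'@'localhost';
SELECT max_user_connections FROM mysql.user WHERE user='my_user' AND host='localhost';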

2. global.max_connections

The global.max_connections is a server-wide limit and applies regardless of max_user_connections. Hence, just increasing max_user_connections is not enough; you have to increase max_connections as well.


SET @@global.max_connections = 1500;
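
Note that a value set with SET @@global lasts only until the server restarts. To make it permanent, you can also set it in the MySQL configuration file (a minimal sketch; the file location, e.g. /etc/mysql/my.cnf, depends on your installation):

[mysqld]
max_connections = 1500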


Reference:

[1] http://dba.stackexchange.com/questions/47131/how-to-get-rid-of-maximum-user-connections-error

[2] https://www.netadmintools.com/art573.html

Friday, January 20, 2017

Installing and Configuring NGINX in Ubuntu (for a Simple Setup)

In this post I am going to show you how to install NGINX and set it up for simple HTTP routing.

Below are the two easy steps to install NGINX on your Ubuntu system.


sudo apt-get update
sudo apt-get install nginx

Once you are done, go to a web browser and type "http://localhost" (if you installed NGINX on your local machine) or "http://[IP_ADDRESS]".

This will show you the default page hosted by NGINX:


Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Below are a few easy commands to stop, start, or restart NGINX:


sudo service nginx stop
sudo service nginx start
sudo service nginx restart


By now you have NGINX installed, up and running on your system.

Next, we will see how to configure NGINX to listen on a particular port and route the traffic to other endpoints.

Below is a sample configuration file you need to create. Let's first see what each of these configuration directives means.

"upstream" : represents a group of endpoints that you need to route you requests.

"upstream/server" : an endpoint that you need to route you requests.

"server" : represent the configurations for listing ports and routing locations

"server/listen" : this is the port that NGINX will listen to

"server/server_name" : the server name this machine (where you install the NGINX)

"server/location/proxy_pass" : the group name of the back end servers you need to route your requests to. 


upstream backends {
    server 192.168.58.118:8280;
    server 192.168.88.118:8280;
}

server {
    listen 8280;
    server_name 192.168.58.123;
    location / {
        proxy_pass http://backends;
    }
}

The above configuration instructs NGINX to route requests arriving at "192.168.58.123:8280" to either "192.168.58.118:8280" or "192.168.88.118:8280" in a round-robin manner.

1. To make that happen, create a file with the above configuration at "/etc/nginx/sites-available/mysite1". You can use any name you want; in this example I named it "mysite1".

2. Now enable this configuration by creating a symbolic link to the above file in the "/etc/nginx/sites-enabled/" directory:
/etc/nginx/sites-enabled/mysite1 -> /etc/nginx/sites-available/mysite1

3. Now the last step: restart NGINX for the new configuration to take effect (see the commands below).
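
Steps 2 and 3 boil down to something like the following commands (a sketch; the paths assume the default Ubuntu layout used above):

sudo ln -s /etc/nginx/sites-available/mysite1 /etc/nginx/sites-enabled/mysite1
sudo nginx -t        # check the configuration for syntax errors
sudo service nginx restart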

Once restarted, any request you send to "192.168.58.123:8280" will be load balanced to "192.168.58.118:8280" or "192.168.88.118:8280" in a round-robin manner.

Hope this helps you quickly set up NGINX for your simple routing requirements.

Friday, May 27, 2016

Deploying artifacts to WSO2 Servers using Admin Services

In this post I am going to show you how to deploy artifacts on WSO2 Enterprise Service Bus [1] and WSO2 Business Process Server [2] using Admin Services [3].

The usual practice for WSO2 artifact deployment is to enable DepSync [4] (Deployment Synchronization) and upload the artifacts via the management console of the master node. The master node then uploads the artifacts to the configured SVN repository and notifies the worker nodes about the new artifact via a cluster message. The worker nodes then download the new artifacts from the SVN repository and apply them.

In this approach you have to log in to the management console and deploy the artifacts manually.

With the increasing use of continuous integration tools, people are looking into automating this task. A simple solution is to configure a remote file copy into the relevant directory under [WSO2_SERVER_HOME]/repository/deployment/server, but this is a very low-level solution.

Following is how to use Admin Services to do the same in a much easier and more manageable manner.

NOTE: Usually all WSO2 servers accept deployables as .car files, but WSO2 BPS prefers .zip for deploying BPELs.
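
Admin services are SOAP services exposed on the server's management port (9443 by default) and secured with basic authentication. As a quick sanity check you can fetch an admin service's WSDL, for example as shown below (assuming the default admin credentials; note that admin service WSDLs stay hidden unless HideAdminServiceWSDLs is set to false in carbon.xml):

curl -k -u admin:admin "https://localhost:9443/services/ApplicationAdmin?wsdl"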

For ESB,
  1. Call 'deleteApplication' in the ApplicationAdmin service and delete the
    existing application
  2. Wait for 1 min.
  3. Call 'uploadApp' in the CarbonAppUploader service
  4. Wait for 1 min.
  5. Call 'getAppData' in ApplicationAdmin; if it returns the application data,
    continue. Else break
For BPS,
  1. Call 'listDeployedPackagesPaginated' in
    BPELPackageManagementService with page=0 and
    packageSearchString="Name_"
  2. Save the returned information
    <ns1:version>
    <ns1:name>HelloWorld2-1</ns1:name>
    <ns1:isLatest>true</ns1:isLatest>
    <ns1:processes/>
    </ns1:version>
  3. Use 'uploadService' in BPELUploader to upload the new BPEL zip
    file
  4. Call 'listDeployedPackagesPaginated' in
    BPELPackageManagementService again at 15-second intervals for 3 minutes.
  5. If the name has changed (due to a version upgrade, e.g.
    HelloWorld2-4), continue; the deployment succeeded.
  6. If the name does not change within 3 minutes, break; the deployment has
    issues and needs human intervention (a sketch of this polling loop is given below)
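
The polling described in steps 4 to 6 is easy to automate. Below is a minimal sketch of that retry loop in Java; BpsAdminClient, its listDeployedPackagesPaginated helper, and the package name handling are hypothetical placeholders standing in for whatever SOAP client stubs you generate for BPELPackageManagementService.

// Sketch of the BPS post-upload verification loop (steps 4-6 above).
// BpsAdminClient and its method are hypothetical placeholders for the
// generated SOAP client of BPELPackageManagementService.
public final class BpelDeploymentVerifier {

    private static final long POLL_INTERVAL_MS = 15_000;       // 15 seconds
    private static final long TIMEOUT_MS = 3 * 60 * 1000;      // 3 minutes

    /** Hypothetical client interface; in practice this wraps the generated SOAP stubs. */
    public interface BpsAdminClient {
        // returns the name of the latest deployed package matching the search string
        String listDeployedPackagesPaginated(int page, String packageSearchString);
    }

    public static boolean waitForVersionUpgrade(BpsAdminClient client,
                                                String packageSearchString,
                                                String previouslyDeployedName) throws InterruptedException {
        long deadline = System.currentTimeMillis() + TIMEOUT_MS;
        while (System.currentTimeMillis() < deadline) {
            String latestName = client.listDeployedPackagesPaginated(0, packageSearchString);
            if (latestName != null && !latestName.equals(previouslyDeployedName)) {
                return true;    // name changed, e.g. HelloWorld2-1 -> HelloWorld2-4: deployment succeeded
            }
            Thread.sleep(POLL_INTERVAL_MS);
        }
        return false;           // no change within 3 minutes: needs human intervention
    }
}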

[1] http://wso2.com/products/enterprise-service-bus/
[2] http://wso2.com/products/business-process-server/
[3] https://docs.wso2.com/display/BPS320/Calling+Admin+Services+from+Apps
[4] https://docs.wso2.com/display/CLUSTER420/SVN-Based+Deployment+Synchronizer

Sunday, March 6, 2016

How to write OSGi tests for C5 components

WSO2 C5 Carbon Kernel will be the heart of all the next-generation Carbon products. With Kernel version 5.0.0 we introduced PAX Exam based OSGi testing.

Now we are trying to ease the life of C5 component developers by providing a utility that takes care of most of the generic configuration needed in OSGi testing. This enables a C5 component developer to specify just a small number of dependencies and start writing PAX tests for C5 components.

You will have to depend on the following library, in addition to the other PAX dependencies:


<dependency>
    <groupId>org.wso2.carbon</groupId>
    <artifactId>carbon-kernel-osgi-test-util</artifactId>
    <version>5.1.0-SNAPSHOT</version>
    <scope>test</scope>
</dependency>


You can find a working sample on the following Git repo
https://github.com/jsdjayanga/c5-sample-osgi-test

The above loads the dependencies needed by default to test Carbon Kernel functionality. But as a component developer, you have to add your own component jars to the testing environment. This is done via the @Configuration annotation in your test class.

Let's assume you work on the bundle org.wso2.carbon.jndi:org.wso2.carbon.jndi; below is how you should specify your dependencies.


    @Configuration
    public Option[] createConfiguration() {
        List<Option> customOptions = new ArrayList<>();
        customOptions.add(mavenBundle().artifactId("org.wso2.carbon.jndi").groupId("org.wso2.carbon.jndi")
                .versionAsInProject());

        CarbonOSGiTestEnvConfigs configs = new CarbonOSGiTestEnvConfigs();
        configs.setCarbonHome("/home/jayanga/WSO2/Training/TestUser/target/carbon-home");
        return CarbonOSGiTestUtils.getAllPaxOptions(configs, customOptions);
    }
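
For completeness, a test class built around that configuration might look roughly like the following. This is only a sketch: the PAX Exam and JUnit annotations are standard, but the package name and the injected BundleContext assertion are just illustrative placeholders.

package org.wso2.carbon.jndi.osgi.test;    // hypothetical test package

import org.junit.Test;
import org.junit.runner.RunWith;
import org.ops4j.pax.exam.junit.PaxExam;
import org.ops4j.pax.exam.spi.reactors.ExamReactorStrategy;
import org.ops4j.pax.exam.spi.reactors.PerClass;
import org.osgi.framework.BundleContext;

import javax.inject.Inject;

import static org.junit.Assert.assertNotNull;

@RunWith(PaxExam.class)
@ExamReactorStrategy(PerClass.class)
public class JndiOSGiTest {

    @Inject
    private BundleContext bundleContext;

    // the createConfiguration() method shown above goes here

    @Test
    public void testBundleContextIsAvailable() {
        // if the container came up with the given options, a BundleContext is injected
        assertNotNull(bundleContext);
    }
}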


Once these are done, your test should ideally work :)

Friday, December 25, 2015

How to create a heap dump of your Java application

The heap in a JVM is the place where it keeps all your runtime objects. The JVM creates a dedicated space for the heap at startup, whose initial size can be controlled via the JVM option -Xms<size>, e.g. -Xms100m (this allocates 100 MB for the heap). The JVM is capable of increasing and decreasing the size of the heap [1] based on demand, and another option sets the maximum size of the heap: -Xmx<size>, e.g. -Xmx6g (this allows the heap to grow up to 6 GB).

The JVM automatically performs garbage collection (GC) when it detects that it is about to reach the heap size limit. But GC can only clean up objects that are eligible for collection. If the JVM can't allocate the required memory even after GC, it fails with "Exception in thread "main" java.lang.OutOfMemoryError: Java heap space".

If your Java application in production crashes due to an issue like this, you can't just ignore the incident and restart the application. You have to analyze what caused the failure and take the necessary actions to avoid it happening again. This is where the JVM heap dump comes into play.

JVM heap dumps are disabled by default; you have to enable them explicitly by providing the following JVM option: -XX:+HeapDumpOnOutOfMemoryError

The sample code below creates multiple large char arrays and keeps the references in a list, which makes those large arrays ineligible for garbage collection.

package com.test;

import java.util.ArrayList;
import java.util.List;

public class TestClass {
    public static void main(String[] args) {
        List<Object> list = new ArrayList<Object>();
        for (int i = 0; i < 1000; i++) {
            list.add(new char[1000000]);
        }
    }
}

If you run the above code with the following command lines:

1. java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx3g com.test.TestClass

Result: the program runs and exits without any error. The heap size starts at 10 MB and grows as needed. The code above needs less than 3 GB of memory (1000 arrays of 1,000,000 two-byte chars is roughly 2 GB), so it completes without any error.

2. java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx1g com.test.TestClass

Result: the JVM fails with an OutOfMemoryError and, since heap dumps are enabled, writes a .hprof heap dump file to the working directory.

If we change the above code a bit to remove the char array from the list right after adding it, what would be the result?


package com.test;

import java.util.ArrayList;
import java.util.List;

public class TestClass {
    public static void main(String[] args) {
        List<Object> list = new ArrayList<Object>();
        for (int i = 0; i < 1000; i++) {
            list.add(new char[1000000]);
            list.remove(0);
        }
    }
}

3. java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx10m com.test.TestClass

Result: this code runs without any issue even with a heap of 10 MB, because the removed arrays become eligible for garbage collection.

NOTE:
1. Enabling heap dumps has no impact on your application unless an OutOfMemoryError actually occurs. So, it is better to always enable -XX:+HeapDumpOnOutOfMemoryError in your applications.

2. You can create a heap dump of a running Java application using jmap, which comes with the JDK. Creating a heap dump of a running application causes the application to pause everything for a while, so it is not recommended on production systems (unless there is an extreme situation).
eg: jmap -dump:format=b,file=test-dump.hprof [PID]
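
To find the [PID] of the running Java process, you can use jps, which also ships with the JDK.
eg: jps -l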

3. The above sample code is just for understanding the concept.

[1] https://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/geninfo/diagnos/garbage_collect.html


Edit:

Following are a few other important flags that can be useful when generating heap dumps:

-XX:HeapDumpPath=/tmp/heaps : the directory (or file) the heap dump is written to
-XX:OnOutOfMemoryError="kill -9 %p" : with this you can run a command when an out-of-memory error is first thrown (%p is the JVM's process id)
-XX:+ExitOnOutOfMemoryError : when you enable this option, the JVM exits on the first occurrence of an out-of-memory error. It can be used if you prefer restarting an instance of the JVM rather than handling out-of-memory errors [2].
-XX:+CrashOnOutOfMemoryError : if this option is enabled, when an out-of-memory error occurs, the JVM crashes and produces text and binary crash files (if core files are enabled) [2].
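
Putting a few of these together, a typical command line for the sample class above might look like this (a sketch; the heap sizes and dump path are arbitrary):

java -Xms10m -Xmx1g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heaps com.test.TestClass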

[2] http://www.oracle.com/technetwork/java/javase/8u92-relnotes-2949471.html

Wednesday, December 23, 2015

The core of the next-generation WSO2 Carbon platform: WSO2 Carbon Kernel 5.0.0

A complete revamp of the heart of all WSO2 products, WSO2 Carbon Kernel 5.0.0, was released on 21 Dec 2015.

Previous versions of the WSO2 Carbon Kernel (1.x.x to 4.x.x) shared a similar architecture and were tightly coupled with relatively old technologies (Axiom, Axis2, SOAP, etc.). This is what made us re-think and re-architect everything from the ground up and come up with WSO2 Carbon Kernel 5.0.0.

The new Kernel is armed with the latest technologies and patterns. It provides the key functionality for server developers on top of the underlying OSGi runtime.

Key Features
  • Transport Management Framework
  • Logging Framework with Log4j 2.0 as the Backend
  • Carbon Startup Order Resolver
  • Dropins Support for OSGi Ready Bundles
  • Jar to Bundle Conversion Tool
  • Artifact Deployment Engine
  • Pluggable Runtime Support

You can download the product from [1], and find more information on [2].

[1] http://product-dist.wso2.com/products/carbon/5.0.0/wso2carbon-kernel-5.0.0.zip
[2] https://docs.wso2.com/display/Carbon500/WSO2+Carbon+Documentation

Friday, December 18, 2015

Logging with SLF4J

The Simple Logging Facade for Java (SLF4J) serves as a simple facade or abstraction for various logging frameworks. It allows you to code against a single dependency, namely "slf4j-api.jar", and to plug in the desired logging framework at runtime.

It is very simple to use SLF4J logging in your application. You just need to create an SLF4J logger and invoke its methods.

Following is some sample code:

package com.test;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {
    private static final Logger logger = LoggerFactory.getLogger(Main.class);

    public static void main(String[] args) {
        logger.info("Testing 123");
    }
}

You have to add the following dependency to your pom.xml file:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.13</version>
</dependency>

This is the bare minimum configuration you need to enable SLF4J logging. But if you run this code, you will get a warning similar to the one below.

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

This is because SLF4J can't find a binding on the classpath. By default, if no binding is found, it falls back to the no-operation (NOP) logger implementation.

Using java.util.logging

If you want to use the binding for java.util.logging in your code, you only need to add the following dependency to your pom.xml file:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-jdk14</artifactId>
    <version>1.7.13</version>
</dependency>

This will output the following:

INFO: Testing 123


Using Log4j

If you want to use the binding for log4j version 1.2 in your code, you only need to add the following dependency to your pom.xml file:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.13</version>
</dependency>

Log4j needs an appender to log to. Hence, you have to provide the log4j configuration.

Create a file named "log4j.properties" in the resources directory of your project and add the following to it:

# Set root logger level to DEBUG and its only appender to A1.
log4j.rootLogger=DEBUG, A1

# A1 is set to be a ConsoleAppender.
log4j.appender.A1=org.apache.log4j.ConsoleAppender

# A1 uses PatternLayout.
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

This will output the following:


0    [main] INFO  com.test.Main  - Testing 123


[1] http://www.slf4j.org/
[2] http://logging.apache.org/log4j/1.2/index.html