Saturday, January 9, 2021

Easily managing your build environments

Have you come across situations where your build tools suddenly stop working after you update the operating system? Do you want a simple way of setting up your build environment each time you change or format your machine?


One option is to use Docker-based build environments. In this approach, you create Docker images with the necessary build tools and dependencies. However, you still have to remember the image tags, mount your source code manually, and execute docker commands with multiple arguments each time. The approach works, but it takes some effort.
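For instance, without a wrapper, each build typically requires a command along these lines (the image name, tag, and mount paths here are illustrative, not taken from any real registry):

```shell
# A typical manual invocation: remember the tag, mount the source,
# and pass every argument yourself (image name and tag are illustrative)
docker run -it --rm \
    -v "$(pwd)":/workspace \
    -w /workspace \
    my-registry/java11-build:latest \
    bash
```

Typing this for every build, for every environment, is exactly the friction a wrapper script can remove.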


To address these issues, Condo [1] was developed. Condo is a simple bash script that wraps all the docker commands you need to manage your build environments.


You can install Condo by running the following command:


/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/jsdjayanga/condo/main/scripts/install.sh)"


Condo supports the following four simple commands.


List build environments

    condo list

Run a build environment. condo <build-env-name>

    condo devj11

Stop a build environment. condo stop <build-env-name>

    condo stop devj11

Clean a build environment. condo clean <build-env-name>

    condo clean devj11


The configuration file (~/.condo/condo.json) stores all the information needed to run the respective Docker images, so you can start an environment with just its name.

You can add as many build environments as you like, e.g., Java, Golang, Spring, or any custom Docker image of your choice. Furthermore, the "additional-arguments" section in the configuration lets you pass any Docker-compliant arguments to your image.
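The authoritative schema lives in the Condo repository; the snippet below is only a hypothetical sketch of what an entry could look like. Apart from "additional-arguments", which the post mentions, all field names here are assumptions:

```json
{
  "environments": [
    {
      "name": "devj11",
      "image": "openjdk:11",
      "additional-arguments": "-v /home/me/src:/workspace -w /workspace"
    }
  ]
}
```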

Try Condo yourself and share your experience!

[1] https://github.com/jsdjayanga/condo

Thursday, January 17, 2019

Building your own JDK distributions

Everyone started talking about OpenJDK [1] with the new licensing that Oracle introduced in January 2019. According to Oracle [2], public updates for Java SE 8 [3] will be available until the end of 2020 for individual, personal use, but not for business, commercial, or production use [4].

This led people to look into alternative JDK distributions. AdoptOpenJDK [5] and AWS Corretto [6] seem to be good options. However, at the time of writing, AWS Corretto is still in preview.

By the way, the aim of this post is not to talk about other OpenJDK distributions, but to show you how to build your own JDK distribution.

Let's get our hands dirty.
Hmmmmm, no, you don't have to get your hands dirty. I have made it super simple for you :). I have made a Docker image [7] which contains all the necessary tools and libraries. Just follow the steps below and you will have your own OpenJDK distribution.

Note: Install Docker first, if you haven't done so already.

1. Pull the docker image

docker pull jsdjayanga/ubuntu-openjdk8-build

2. Spin up a container with the above image

docker run -it jsdjayanga/ubuntu-openjdk8-build

3. Go to the source directory

cd jdk8u

4. Check out the tag

hg checkout jdk8u192-b12

5. Run configure. You can set your own name as the release user by setting --with-user-release-suffix

bash configure --enable-unlimited-crypto --with-user-release-suffix=jsdjayanga --with-build-number=b01 --with-milestone=192

6. Create images

make images

This process takes 30 minutes to an hour, after which you will have your own OpenJDK distribution in the following location.

/jdk8u/build/linux-x86_64-normal-server-release/images
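To sanity-check the result, you can run the java binary from the freshly built images directory. The j2sdk-image subdirectory name is the usual jdk8u build output; verify it against what your build actually produced:

```shell
# Print the version banner of the freshly built JDK
IMAGES=/jdk8u/build/linux-x86_64-normal-server-release/images
"$IMAGES/j2sdk-image/bin/java" -version
```

If configure was run with --with-user-release-suffix, your suffix should appear in the version string.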

Try this out and share your experience :)


[1] https://openjdk.java.net/
[2] https://www.oracle.com/index.html
[3] https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html 
[4] https://java.com/en/download/release_notice.jsp
[5] https://adoptopenjdk.net/
[6] https://aws.amazon.com/corretto/
[7] https://hub.docker.com/r/jsdjayanga/ubuntu-openjdk8-build

Sunday, August 5, 2018

WUM3 New Features

WSO2 had been looking into ways of improving its update delivery mechanism for several years. As a result, WSO2 Update Manager (WUM) [1], a groundbreaking tool that enables shipping updates proactively, was developed two years ago. The latest version, WUM 3.0, was released two weeks ago alongside WSO2Con US [4]. This article discusses the new features of WUM 3.0.

The novelty begins with the auto-migration mechanism: migrating from WUM 2.0 to WUM 3.0 is seamless. The first time you run WUM 3.0, it automatically detects the previous WUM installation ("Older WUM home(/Users/<yourname>/.wum-wso2) detected.") and asks whether you want to migrate. If you select "yes", a new WUM home is created at "/Users/<yourname>/.wum3" with all your previous product packs and updated packs. However, you also have the flexibility to start with a fresh WUM home by selecting "no" if you don't want to keep the previous packs.

Next comes the most awaited WUM feature: channels. A channel is simply a mechanism to deliver a filtered set of updates. To start with, we released two channels.

  • Channel "full": this channel delivers all the fixes, including bug fixes, security fixes, improvements, etc.
This channel is preferred by users who manage dynamic requirements and continuous development. Because it delivers all types of updates, it helps stabilize a rapidly evolving system by providing fixes proactively.

  • Channel "security": this channel delivers only security fixes.
As opposed to channel "full", channel "security" provides only the security updates. It suits users with static requirements who push changes to production infrequently. As the production system is stable and seldom changed, channel "security" pushes only security fixes into it.

Another new feature is the ability to create WUM-updated packs using a timestamp. This feature was initially developed to help WSO2 support build a WUM-updated product pack to a specific timestamp given by WSO2 clients. However, we decided to expose it to all WUM users, as it is useful for recreating a WUM-updated pack (to a particular timestamp) in case you accidentally delete one from your WUM home.

Try the new WUM 3.0 [2] with WSO2 products. Sign up for a Free Trial Subscription [3] to get access to WSO2 updates and all the benefits of a WSO2 Subscription.

[1] https://docs.wso2.com/display/WUM300/Introduction
[2] https://wso2.com/updates/wum
[3] https://wso2.com/subscription
[4] https://us18.wso2con.com/

Tuesday, January 24, 2017

How to increase max connections in MySQL

When you try to make a large number of connections to the MySQL server, sometimes you might get the following error.

com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Data source rejected establishment of connection,  message from server: "Too many connections"

There are two values governing the maximum number of connections one can create:

  1. max_user_connections
  2. global.max_connections

1. max_user_connections

max_user_connections is a user-level parameter that you can set for each user. To let a user create any number of connections, set this value to zero ('0').

First, view the current max_user_connections:

SELECT max_user_connections FROM mysql.user WHERE user='my_user' AND host='localhost';

Then set it to zero


GRANT USAGE ON *.* TO my_user@localhost MAX_USER_CONNECTIONS 0;

2. global.max_connections

global.max_connections is a global parameter and takes precedence over max_user_connections. Hence, just increasing max_user_connections is not enough; you have to increase max_connections as well.


set @@global.max_connections = 1500;
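Note that SET GLOBAL changes the value only for the running server; it resets on restart. To make the limit permanent, set it in the MySQL configuration file (the path varies by installation, e.g. /etc/mysql/my.cnf):

```ini
# Persist the new limit across server restarts
[mysqld]
max_connections = 1500
```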


Reference:

[1] http://dba.stackexchange.com/questions/47131/how-to-get-rid-of-maximum-user-connections-error

[2] https://www.netadmintools.com/art573.html

Friday, January 20, 2017

Installing and Configuring NGINX in ubuntu (for a Simple Setup)

In this post I am going to show you how to install NGINX and set it up for simple HTTP routing.

Below are the two easy steps to install NGINX on your Ubuntu system.


sudo apt-get update
sudo apt-get install nginx

Once you are done, go to any web browser and type "http://localhost" (if you installed on the local machine) or "http://[IP_ADDRESS]".

This will show you the default HTTP page hosted by NGINX:


Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Below are a few easy commands to stop, start, or restart NGINX.


sudo service nginx stop
sudo service nginx start
sudo service nginx restart


By now you have NGINX installed, up and running on your system.

Next, we will see how to configure NGINX to listen on a particular port and route the traffic to other endpoints.

Below is a sample configuration file you need to create. Let's first see what each of these configurations means.

"upstream" : represents a group of endpoints to which you need to route your requests.

"upstream/server" : an endpoint to which you need to route your requests.

"server" : represents the configuration for listening ports and routing locations.

"server/listen" : the port NGINX will listen on.

"server/server_name" : the server name of this machine (where you installed NGINX).

"server/location/proxy_pass" : the name of the group of backend servers to which you need to route your requests.


upstream backends {
    server 192.168.58.118:8280;
    server 192.168.88.118:8280;
}

server {
    listen 8280;
    server_name 192.168.58.123;
    location / {
        proxy_pass http://backends;
    }
}

The above configuration instructs NGINX to route requests coming into "192.168.58.123:8280" to either "192.168.58.118:8280" or "192.168.88.118:8280" in a round-robin manner.

1. To make that happen, you have to create a file with the above configuration at "/etc/nginx/sites-available/mysite1". You can use any name you want; in this example I named it "mysite1".

2. Now you have to enable this configuration by creating a symbolic link to the above file in the "/etc/nginx/sites-enabled/" directory:
/etc/nginx/sites-enabled/mysite1 -> /etc/nginx/sites-available/mysite1

3. Now the last step: restart NGINX for the new configuration to take effect.

Once restarted, any request you send to "192.168.58.123:8280" will be load balanced to "192.168.58.118:8280" or "192.168.88.118:8280" in a round-robin manner.
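Steps 2 and 3 boil down to the following commands. nginx -t is optional but worth running: it verifies the configuration syntax before you apply it:

```shell
# Enable the site, verify the configuration, and restart NGINX
sudo ln -s /etc/nginx/sites-available/mysite1 /etc/nginx/sites-enabled/mysite1
sudo nginx -t
sudo service nginx restart
```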

Hope this helps you quickly set up NGINX for your simple routing requirements.

Friday, May 27, 2016

Deploying artifacts to WSO2 Servers using Admin Services

In this post I am going to show you how to deploy artifacts to WSO2 Enterprise Service Bus [1] and WSO2 Business Process Server [2] using Admin Services [3].

The usual practice for WSO2 artifact deployment is to enable DepSync [4] (Deployment Synchronization) and upload the artifacts via the management console of the master node. The console then uploads the artifacts to the configured SVN repository and notifies the worker nodes about the new artifact via a cluster message. The worker nodes then download the new artifacts from the SVN repository and apply them.

In this approach you have to log in to the management console and deploy the artifacts manually.

With the increasing use of continuous integration tools, people are looking into the possibility of automating this task. There is a simple solution: configure a remote file copy into the relevant directory under [WSO2_SERVER_HOME]/repository/deployment/server. But this is a very low-level solution.

The following shows how to use Admin Services to do the same in a much easier and more manageable manner.

NOTE: Usually, WSO2 servers accept deployables as .car files, but WSO2 BPS prefers .zip for deploying BPELs.

For ESB,
  1. Call 'deleteApplication' in the ApplicationAdmin service to delete the
    existing application
  2. Wait for 1 min.
  3. Call 'uploadApp' in CarbonAppUploader service
  4. Wait for 1 min.
  5. Call 'getAppData' in ApplicationAdmin; if it returns application data,
    continue. Else, break.
 For BPS,
  1. Call 'listDeployedPackagesPaginated' in
    BPELPackageManagementService with page=0 and
    packageSearchString="Name_"
  2. Save the information
    <ns1:version>
    <ns1:name>HelloWorld2‐1</ns1:name>
    <ns1:isLatest>true</ns1:isLatest>
    <ns1:processes/>
    </ns1:version>
  3. Use the 'uploadService' in BPELUploader, to upload the new BPEL zip
    file
  4. Again call the 'listDeployedPackagesPaginated' in
    BPELPackageManagementService with 15 seconds intervals for 3mins.
  5. If the name has changed (due to a version upgrade, e.g.,
    HelloWorld2-4), continue; the deployment is successful.
  6. If the name doesn't change for 3 mins, break. The deployment has
    issues and needs human intervention.

[1] http://wso2.com/products/enterprise-service-bus/
[2] http://wso2.com/products/business-process-server/
[3] https://docs.wso2.com/display/BPS320/Calling+Admin+Services+from+Apps
[4] https://docs.wso2.com/display/CLUSTER420/SVN-Based+Deployment+Synchronizer

Sunday, March 6, 2016

How to write OSGi tests for C5 components

WSO2 C5 Carbon Kernel will be the heart of all the next generation Carbon products. With Kernel version 5.0.0 we introduced PAX OSGi testing.

Now we are trying to ease the life of C5 component developers by providing a utility that takes care of most of the generic configuration needed in OSGi testing. This enables a C5 component developer to specify just a small number of dependencies and start writing PAX tests for C5 components.

You will have to depend on the following library, in addition to the other PAX dependencies:


<dependency>
    <groupId>org.wso2.carbon</groupId>
    <artifactId>carbon-kernel-osgi-test-util</artifactId>
    <version>5.1.0-SNAPSHOT</version>
    <scope>test</scope>
</dependency>


You can find a working sample in the following Git repo:
https://github.com/jsdjayanga/c5-sample-osgi-test

The above loads the dependencies needed by default to test Carbon Kernel functionality. But as a component developer, you have to add your component's jars to the testing environment. This is done via the @Configuration annotation in your test class.

Let's assume you work on the bundle org.wso2.carbon.jndi:org.wso2.carbon.jndi; below is how you should specify your dependencies.


    @Configuration
    public Option[] createConfiguration() {
        List<Option> customOptions = new ArrayList<>();
        customOptions.add(mavenBundle().artifactId("org.wso2.carbon.jndi").groupId("org.wso2.carbon.jndi")
                .versionAsInProject());

        CarbonOSGiTestEnvConfigs configs = new CarbonOSGiTestEnvConfigs();
        configs.setCarbonHome("/home/jayanga/WSO2/Training/TestUser/target/carbon-home");
        return CarbonOSGiTestUtils.getAllPaxOptions(configs, customOptions);
    }


Once these are done, your test should ideally work :)