Articles related to real-life scenarios involving Java, framework and library configuration, and SaltStack. Topics include Core Java, Hibernate, Spring, MySQL, multithreading, Java NIO, jQuery, c3p0, connection pooling, Salt states, pillars, grains, etc.

Thursday, December 6, 2018

Amazon Web Services (AWS CodeDeploy): Auto Assignment of Elastic IP to blue/green deployment fleet replacement instances.


With a blue/green deployment, you provision a new set of instances on which CodeDeploy installs the latest version of your application. CodeDeploy then reroutes load balancer traffic from an existing set of instances running the previous version of your application to the new set of instances running the latest version. After traffic is rerouted to the new instances, the existing instances can be terminated. Blue/green deployments allow you to test the new application version before sending production traffic to it. If there is an issue with the newly deployed application version, you can roll back to the previous version faster than with in-place deployments. Additionally, the instances provisioned for the blue/green deployment will reflect the most up-to-date server configurations since they are new.
The problem with blue/green deployment is that once the fleet is replaced, any EIPs attached to the old instances have to be manually reattached to the new fleet. Here's how that can be automated.
Before we start you need to have the following information ready.
  • Note the CodePipeline and CodeDeploy ID and Name
  • The Elastic IP allocation IDs, e.g. eipalloc-0ff4f997b4cf990bd


Step 1: Create an IAM role with the following permissions, to be used with the Lambda function.

  • CodeDeploy
    • ListDeploymentInstances
    • GetDeploymentInstance
    • BatchGetDeploymentInstances
  • EC2
    • DescribeAddresses
    • AssociateAddress
    • DisassociateAddress
Here’s the policy document.
    "Version": "2012-10-17",
    "Statement": [
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
            "Resource": "*"
            "Effect": "Allow",
            "Action": [
            "Resource": "arn:aws:logs:*:*:*"

Step 2: Create a Lambda function with the following code and attach the IAM role from Step 1 to it.

Note: Make sure to change the eip_list to your own allocation IDs, in the order you want them assigned.
import json
import boto3
import re
import sys, traceback

def lambda_handler(event, context):
    # Replace with your own EIP allocation IDs, in the order they
    # should be attached to the green instances.
    eip_list = ["eipalloc-0ff4f997b4cf99dbd", "eipalloc-07e14e6c1ed2cazf4", "eipalloc-03af9573f88548zda"]
    try:
        if event.get('detail'):
            deployment_id = event.get('detail').get('deploymentId')
            application_name = event.get('detail').get('application')
            state = event.get('detail').get('state')
            notification_type = event.get('detail-type')
            if deployment_id and \
                    state == 'SUCCESS' and \
                    application_name == 'prodCodeDeploy' and \
                    notification_type == "CodeDeploy Deployment State-change Notification":
                client = boto3.client('codedeploy')
                ec2_client = boto3.client('ec2')
                response = client.list_deployment_instances(deploymentId=deployment_id)
                print('--------------list_deployment_instances response')
                print(response)
                if response.get('instancesList'):
                    instance_details = client.batch_get_deployment_instances(
                        deploymentId=deployment_id,
                        instanceIds=response['instancesList'])
                    index = 0
                    for instance in instance_details['instancesSummary']:
                        if instance['instanceType'] == 'Green':
                            # instanceId is an ARN-style string; extract the bare EC2 instance id
                            instance_id = re.findall(r'i-.[a-z0-9]*', instance['instanceId'])[0]
                            print("InstanceId=" + instance_id + " and EIP id=" + eip_list[index])
                            response = ec2_client.associate_address(
                                InstanceId=instance_id,
                                AllocationId=eip_list[index],
                                AllowReassociation=True)
                            print('--------------associate_address response')
                            print(response)
                            index += 1
        return {
            "statusCode": 200,
            "body": json.dumps('IP Reassociation Successful')
        }
    except Exception:
        traceback.print_exc(file=sys.stdout)
        return {
            "statusCode": 400,
            "body": json.dumps('IP Reassociation Failed')
        }
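CodeDeploy returns each instanceId as an ARN-style string, which is why the handler runs a regex over it before calling associate_address. A standalone sketch of that extraction (the ARN below is made up for illustration):

```python
import re

# Hypothetical ARN-style instanceId, similar in shape to what
# batch_get_deployment_instances returns for an EC2 instance.
arn = "arn:aws:ec2:us-east-1:123456789012:instance/i-0abc123def4567890"

# Same pattern the Lambda uses: "i-" followed by the id characters.
instance_id = re.findall(r'i-.[a-z0-9]*', arn)[0]
print(instance_id)  # i-0abc123def4567890
```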

Step 3: Create a CloudWatch EventsRule to trigger the lambda function.

  • Open the AWS Console and go to CloudWatch.
  • From left pane select Events -> Rules
  • In the Event Source, select Event Pattern
  • Service Name : Select CodeDeploy
  • Event Type : State Change
  • Select Specific Detail Type(s) and Select CodeDeploy Deployment State-change Notification below.
  • Select Specific State(s) and Select SUCCESS
  • Select Specific application and select your CodeDeploy application from the list.
  • Select Any Deployment Group
  • The JSON event pattern should look like this (state and application are filled in per the selections above).
{
  "source": [
    "aws.codedeploy"
  ],
  "detail-type": [
    "CodeDeploy Deployment State-change Notification"
  ],
  "detail": {
    "state": [
      "SUCCESS"
    ],
    "application": [
      "<CodeDeploy App Name>"
    ]
  }
}
  • In the Targets section on the right, select Lambda Function.
  • Select the function we created as part of Step 2.
  • Click on Configure Details/Save and you are done.
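For a quick local sanity check before wiring everything up, here is a trimmed sketch of the event payload such a rule delivers, together with the same filter the handler applies. The deployment id and names are placeholders:

```python
# Trimmed sample of a CodeDeploy state-change event from CloudWatch Events;
# ids and names below are placeholders, not real values.
event = {
    "detail-type": "CodeDeploy Deployment State-change Notification",
    "source": "aws.codedeploy",
    "detail": {
        "deploymentId": "d-EXAMPLE123",
        "application": "prodCodeDeploy",
        "state": "SUCCESS",
    },
}

def should_reassociate(event):
    """Mirror of the filter the Lambda applies before touching any EIPs."""
    detail = event.get("detail") or {}
    return bool(detail.get("deploymentId")) \
        and detail.get("state") == "SUCCESS" \
        and detail.get("application") == "prodCodeDeploy" \
        and event.get("detail-type") == "CodeDeploy Deployment State-change Notification"

print(should_reassociate(event))  # True
```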
Now you will never have to worry about attaching EIPs to your instances as part of blue/green deployments.

Monday, July 16, 2018

Setup SFTP tunnel through bastion/jump server with agent forwarding


Someone has said "With security comes complexity". In some cases, operational or otherwise, while configuring a private network with a bastion/jump server you need SFTP/SCP access to the private servers.
  • This assumes you already have the SSH configuration done for SSH jumping; if not, follow the steps in (link)

Assumptions and Definition

We have 3 servers/machines here to work with.
  • LocalMachine: Your computer running macOS or Ubuntu/Linux.
  • Bastion: Bastion/jump server with public access on port 22/SSH.
  • PriNetServer: A server running in a private network but accessible through the Bastion/jump server.
  • You are able to connect to the Bastion server through SSH and have root, or at least sudo, access to restart the SSH service.
  • The Bastion server is able to connect to PriNetServer through SSH using SSH agent forwarding.
Here are the steps to enable an SFTP SSH tunnel through a bastion/jump server.

SSH Configuration on the Bastion server

SSH Daemon config changes

  • Edit /etc/ssh/sshd_config
  • Make sure to enable the following options.
AllowTcpForwarding yes
PermitTunnel yes
AllowAgentForwarding yes
PermitOpen any
  • If you want to enable tunneling for a specific user you can do the following
Match User app
    AllowTcpForwarding yes
    ...
  • Restart the SSH daemon: service ssh restart

User specific configuration for port forwarding/tunneling automation

  • We are going to use app user in this example.
  • Create or Edit app user ssh config file /home/app/.ssh/config
  • Add the following.
Host PriNetServer
    HostName <private IP address of PriNetServer>
    User app
    Port 22
    LocalForward 30022 localhost:22  # forwards Bastion port 30022 to port 22 of PriNetServer

LocalMachine Setup

Once all the setup is done on the Bastion server, you should be able to create a tunnel through the Bastion to access SFTP on the PriNetServer.
  • In this example I am using scp for SFTP access; you can use any other program, like FileZilla.

Create Tunnel/Forward Port

ssh -A -L 30022:localhost:30022 app@<bastion_server_host_or_ip> -t ssh PriNetServer
In the above command, -A enables agent forwarding and -L sets up port forwarding: it asks SSH to create a tunnel from your LocalMachine's port 30022 to port 30022 of the Bastion server. You can use any unused port for tunneling.
The last part, -t ssh PriNetServer, is the command executed once you have SSHed into the Bastion server. You can remove it and manually run ssh PriNetServer once you are on the Bastion server.
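Before pointing an SFTP client at the tunnel, it can be handy to verify something is actually listening on the forwarded port. A small sketch (the host and port below are the example values from this post):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the ssh command above running, the tunnel entrance should be live:
# port_open("localhost", 30022) -> True if the tunnel is up
```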

Now that the tunnel is ready, I will upload a file to the PriNetServer and download the same file from there.

  • Run the following command to upload file.
scp -P 30022 /tmp/hello.test app@localhost:/tmp/
  • Run the following command to download a file.
scp -P 30022 app@localhost:/tmp/hello.test /tmp/hello.testdloaded

The same process can be used for any port forwarding, e.g. tunneling the MySQL port.

Sunday, July 15, 2018

Using Chrome Secure Shell to connect to your AWS instances/Key Protected instances


There are many tools available to SSH into your key-auth-enabled instances: on Windows mainly PuTTY, on macOS and Linux distros mainly the terminal. If you use a variety of OSes and want a similar terminal experience everywhere, or you are using ChromeOS, you can use Chrome Secure Shell.
I mainly use it on Windows because it feels much more like a terminal, specifically with better scrolling.
Here's how to connect to your key auth protected instances using Secure Shell.

Pre Requisites

  • Protect your key with passphrase.
    • Follow steps in this post to protect your key with passphrase.
    • You should always protect your keys with a passphrase, especially while using them with Secure Shell, because it uses the HTML5 FileSystem, which is relatively new and may have unfound exploits. Here's the reference
  • Prepare your keys for specific SecureShell requirements.
SSH connection using key pair in SecureShell has specific requirements.
  • A PEM key alone will not suffice. You need both a PrivateKey and a PublicKey.
  • If you only have a pem file, you need to extract the public key from it. Follow the steps in this post
  • The private key and public key must have the same name.
  • The PrivateKey should have no extension and the PublicKey should have a .pub extension.
  • Example:
    • If you have a key named MainKey.pem (or any name you want), you must create a public key from it and rename the pair as follows.
    • PrivateKey (MainKey.pem) > MainKey
    • PublicKey (extracted from MainKey.pem) > MainKey.pub


  • Install Secure Shell
  • Open Secure Shell from Chrome by entering the following in the Chrome Search Box chrome://apps/ and Click Secure Shell, OR directly enter the following in Chrome search bar chrome-extension://pnhechapfaindjhompbnflcldabbghjo/html/nassh.html
  • The Secure Shell app opens up, ready for you to configure.

  • Fill up the details
    • Name of the connection - keep it short and without spaces; you will be able to use it to open SSH connections easily. I will come back to that later.
    • SSH username
    • SSH Host
    • SSH Port
    • Now import the keys by clicking on Import...
      • This will open a file selector; select the two files (in this example MainKey and MainKey.pub) and click Open

    • That's it. Now click on Connect or hit Enter to connect.
    • If you have passphrase for the key, It will ask you to enter passphrase.
    • Once connected you should see the server prompt. 

Pro Tip

  1. You can connect to any of the saved connections by entering ssh <profile name> in the Chrome search box/omnibox.
  2. You can bookmark connections for easy accessibility.
  3. You can connect to multiple instances in one click. For DevOps there are many cases where we want to connect to multiple instances while doing something. Here's how I do that.
  • Bookmark all the connections
  • Move all the bookmarks into a Folder.
  • Right Click on the Bookmark folder and click Open All in new Window

Protect your AWS key/Private key with passphrase


All of us have faced this: when you generate a new key from AWS, it does not have a passphrase. One should never use keys without a passphrase; it is a huge security risk.
Here's how to add a passphrase to any existing key file. Just run the following command and enter the passphrase twice.
Note: Make sure to remember the passphrase; if you forget it, you will lose access to all the instances using the key.
ssh-keygen -p -f somekey.pem
The output will be the following.
Enter new passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved with the new passphrase.

Monday, June 25, 2018

Solution for MySQL "Can't create more than max_prepared_stmt_count statements"

Recently I had a live incident: the support team started getting a lot of customer calls about the service being down. The application is built with RoR and MySQL.
I checked the usual things from past experience. While doing that I went through the logs and found the following statement.
ActiveRecord::StatementInvalid (Mysql::Error: Can't create more than max_prepared_stmt_count statements (current value: 16382): <some sql query>
The log clearly states the problem: the application could not create prepared statements because the limit had been reached. This was the first time I saw this issue, which means our application has reached a stage where the current limit (16382) is no longer enough.

What are prepared statements?

I won’t go deep into it since there’s a wealth of information out there on the Internet. The basic idea behind prepared statements, and the log message above, is that the SQL statement itself is compiled once and cached for future use. Prepared statements are best when a SQL query is run multiple times with only the arguments/parameters changing. Support was introduced in Rails 3.1; a prepared statement is created automatically whenever you use ActiveRecord to do any DB operation.
A prepared statement has 3 actions:
  • Prepare (Once)
  • Execute (As many times)
  • Deallocate (Once at the end)
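The lifecycle above can be sketched in application code. This uses Python's sqlite3 module purely as a stand-in for MySQL: the parameterized query is compiled once by the driver and re-executed with different arguments, which is exactly the prepare-once, execute-many pattern:

```python
import sqlite3

# In-memory stand-in database; table and rows are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# Prepare (once): sqlite3 caches the compiled statement for this SQL text.
query = "SELECT name FROM users WHERE id = ?"
# Execute (as many times as needed) with different parameters.
names = [conn.execute(query, (uid,)).fetchone()[0] for uid in (1, 2)]
print(names)  # ['alice', 'bob']
# Deallocate (once at the end): closing the connection frees the statements.
conn.close()
```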

Useful queries and commands(MySQL).

  • Get current max prepared statement limit.
show variables like "%prepared%";
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| max_prepared_stmt_count | 16382 |
+-------------------------+-------+
  • Get the current prepared statement count. These are statements which have been prepared and not yet deallocated; they are cached in memory.
show global status like "%prepared%";
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| Prepared_stmt_count | 15975 |
+---------------------+-------+
  • Get stats related to prepared statements.
show global status like "com_stmt%";
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Com_stmt_close          | 3248454   |  # times statements have been deallocated since the server last started
| Com_stmt_execute        | 110121011 |  # times statements have been executed since the server last started
| Com_stmt_fetch          | 0         |
| Com_stmt_prepare        | 3447923   |  # times statements have been prepared since the server last started
| Com_stmt_reprepare      | 22        |
| Com_stmt_reset          | 0         |
| Com_stmt_send_long_data | 0         |
+-------------------------+-----------+
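Putting the two counters from the incident together shows how close the server was to the wall (15975 statements in use against a limit of 16382). A tiny helper for that arithmetic:

```python
def stmt_utilization(prepared_count, max_count):
    """Percent of the prepared-statement limit currently in use."""
    return round(100.0 * prepared_count / max_count, 1)

# Values observed during the incident described above.
print(stmt_utilization(15975, 16382))  # 97.5
```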


The easiest solution is to increase the limit. If the issue started happening quite recently and you suspect new code or application changes could be causing it, you should check those first. The number one culprit is prepared statements being prepared but never deallocated once the code is done using them.
Note: Don't increase the limit too much; it is not good for your database's health. I just doubled it because the count was the low default. Keep an eye on the count for a while and increase it further only if required. This way you can keep track of the application's usage growth and also detect if a specific release causes the prepared statement count to climb.
  • Increase the limit temporarily (until MySQL restarts).
SET GLOBAL max_prepared_stmt_count = 32764;
## OR
SET @@global.max_prepared_stmt_count = 32764;
  • Permanently update the limit.
Edit the MySQL configuration file (/etc/mysql/my.cnf) and add/edit the limit.
[mysqld]
....
max_prepared_stmt_count = 32764
....
Restart MySQL
service mysql restart