Tuesday, 31 May 2016

Frequently update the AMI or Instance-Type in AWS Autoscaling group

If your instances run under AWS autoscaling, they will always use the same AMI and Instance-Type configured in the launch configuration.
If you install new packages or make any changes to an instance's root volume, you need to create a new AMI from that instance [root volume] and update the Autoscaling group with it [using the launch configuration]. That requires many manual steps.
So I created a Bash script that does the task automatically.

Example 1:-
If you just want to update the AMI in the Autoscaling group, run the script like this:

#./script.sh <Autoscaling Group Name>  <Launch Configuration Name>  <AWS Region>

Example 2:-
If you want to update both the instance type and the AMI in the Autoscaling group, run the script like this:

#./script.sh <Autoscaling Group Name>  <Launch Configuration Name>  <AWS Region>  <Instance Type>
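
For example, with hypothetical names (web-asg and web-asg-lc are placeholders):

#./script.sh web-asg web-asg-lc us-east-1 t2.medium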

====================================
#!/bin/bash
AutoscalingGroupName=$1
LaunchConfigurationName=$2
Region=$3
InstanceType=$4
DATE=`date "+%Y-%m-%d_%H-%M"`
#############Creating AMI from Autoscaling instance root volume ####################
Instanceid=`aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names ${AutoscalingGroupName} --region $Region --query AutoScalingGroups[].Instances[0].InstanceId --output text`
#echo "Autoscaling Instance Id is $Instanceid"
NewAMIid=`/usr/bin/aws ec2 create-image --instance-id $Instanceid --region $Region --name "${AutoscalingGroupName}-Autoscaling-${DATE}" --description "${AutoscalingGroupName}" --no-reboot --block-device-mappings "[{\"DeviceName\": \"/dev/sdf\",\"NoDevice\":\"\"},{\"DeviceName\": \"/dev/sdg\",\"NoDevice\":\"\"},{\"DeviceName\": \"/dev/sdh\",\"NoDevice\":\"\"},{\"DeviceName\": \"/dev/sdi\",\"NoDevice\":\"\"},{\"DeviceName\": \"/dev/sdj\",\"NoDevice\":\"\"},{\"DeviceName\": \"/dev/sdk\",\"NoDevice\":\"\"},{\"DeviceName\": \"/dev/sdl\",\"NoDevice\":\"\"},{\"DeviceName\": \"/dev/sdm\",\"NoDevice\":\"\"},{\"DeviceName\": \"/dev/sdn\",\"NoDevice\":\"\"},{\"DeviceName\": \"/dev/sdo\",\"NoDevice\":\"\"},{\"DeviceName\": \"/dev/sdp\",\"NoDevice\":\"\"}]" | grep -oP 'ami-\S+' | cut -d'"' -f1`
#echo "New AMI ID which created using package updated autoscaling instance = $NewAMIid"
###################Update the Autoscaling group with a launch configuration that has the updated AMI####################
## Note: It will not affect running instances in the autoscaling group. The change applies when an instance is next replaced #####
UpdateAutoscalingGroup() {
aws autoscaling update-auto-scaling-group --auto-scaling-group-name ${AutoscalingGroupName} --launch-configuration-name $1 --region $Region
}
###################Create a Launch configuration##########################
DeviceName=`aws autoscaling describe-launch-configurations --launch-configuration-names ${LaunchConfigurationName} --region $Region --query LaunchConfigurations[].BlockDeviceMappings[0].DeviceName --output text`
EbsVolumeType=`aws autoscaling describe-launch-configurations --launch-configuration-names ${LaunchConfigurationName} --region $Region --query LaunchConfigurations[].BlockDeviceMappings[0].Ebs.VolumeType --output text`
EbsVolumeSize=`aws autoscaling describe-launch-configurations --launch-configuration-names ${LaunchConfigurationName} --region $Region --query LaunchConfigurations[].BlockDeviceMappings[0].Ebs.VolumeSize --output text`
aws autoscaling create-launch-configuration --launch-configuration-name ${LaunchConfigurationName}-Copy  --instance-id $Instanceid --image-id $NewAMIid --block-device-mappings "[{\"DeviceName\": \"$DeviceName\",\"Ebs\":{\"VolumeSize\":$EbsVolumeSize,\"VolumeType\":\"$EbsVolumeType\",\"DeleteOnTermination\":false}}]" --region $Region
##################Update the Autoscaling group with the copied launch configuration#########################
aws ec2 wait image-available --image-ids $NewAMIid --region $Region
UpdateAutoscalingGroup ${LaunchConfigurationName}-Copy
##############Deregister the old AMI [taken from the old launch configuration] and remove its associated snapshot################
OldAMIid=`aws autoscaling describe-launch-configurations --launch-configuration-names ${LaunchConfigurationName} --region $Region --query LaunchConfigurations[].ImageId --output text`
OldAMIsnapid=`aws ec2 describe-images --image-ids $OldAMIid --region $Region --query Images[].BlockDeviceMappings[0].Ebs.SnapshotId --output text`
aws ec2 deregister-image --image-id $OldAMIid --region $Region 2>/dev/null 1>&2 && aws ec2 delete-snapshot --snapshot-id $OldAMIsnapid --region $Region 2>/dev/null 1>&2
#####################Delete the old Launch configuration and recreate it under the old name with the new AMI######################
aws autoscaling delete-launch-configuration --launch-configuration-name ${LaunchConfigurationName} --region $Region
if [ -z "$InstanceType" ]
then
aws autoscaling create-launch-configuration --launch-configuration-name ${LaunchConfigurationName}  --instance-id $Instanceid --image-id $NewAMIid --block-device-mappings "[{\"DeviceName\": \"$DeviceName\",\"Ebs\":{\"VolumeSize\":$EbsVolumeSize,\"VolumeType\":\"$EbsVolumeType\",\"DeleteOnTermination\":false}}]" --region $Region
else
aws autoscaling create-launch-configuration --launch-configuration-name ${LaunchConfigurationName}  --instance-id $Instanceid --image-id $NewAMIid --instance-type $InstanceType --block-device-mappings "[{\"DeviceName\": \"$DeviceName\",\"Ebs\":{\"VolumeSize\":$EbsVolumeSize,\"VolumeType\":\"$EbsVolumeType\",\"DeleteOnTermination\":false}}]" --region $Region
fi
UpdateAutoscalingGroup ${LaunchConfigurationName}
aws autoscaling delete-launch-configuration --launch-configuration-name ${LaunchConfigurationName}-Copy --region $Region
================================================
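
Once the script completes, you can verify that the Autoscaling group now points at the recreated launch configuration (substitute your own group name and region):

#aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names <Autoscaling Group Name> --region <AWS Region> --query AutoScalingGroups[].LaunchConfigurationName --output text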

Wednesday, 25 May 2016

Bash Script to find the exact application or command consuming high CPU and Memory

#!/bin/bash
#loadavg=$(awk '{ print $1 "\t" $2 "\t" $3 }' /proc/loadavg )
#To find the real memory used percentage [Used - (Cached + Buffers)], taken from the '-/+ buffers/cache' line of free
Memusage=$(printf '%.0f\n' `free | awk 'FNR == 3{print $3/($3+$4)*100}'`)
now=$(date +"%Y-%m-%d %H:%M:%S")
#To find the 1-minute cpu load average from uptime
cpu_load=$(printf '%.0f\n' `uptime | awk '{print $(NF-2)}' | cut -c1-4`)
if [ "$cpu_load" -ge "25" ] || [ "$Memusage" -ge "50" ];then
  echo -e "\n\n=================\t$now\t========================" >> /var/log/top_report
  top -b -c -n1 >> /var/log/top_report
fi
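
To run this check regularly, you could schedule it via cron, e.g. every minute (the script path below is a placeholder; adjust it to wherever you saved the script):

* * * * * /root/scripts/top_report.sh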

Tuesday, 19 April 2016

Mongod is not stopping properly on CentOS 6.5

Just change the following line in the mongo_killproc() function of the mongod init script [line 94],
# vim /etc/init.d/mongod

local pid=`pidof ${procname}`

instead of

local pid=`pidofproc -p "${pid_file}" ${procname}`
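
If you'd rather apply the edit non-interactively, a sed substitution like the one below should work (a sketch: back up the init script first and confirm the original line matches yours exactly):

# cp /etc/init.d/mongod /etc/init.d/mongod.bak
# sed -i 's|pidofproc -p "${pid_file}" ${procname}|pidof ${procname}|' /etc/init.d/mongod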

Saturday, 9 January 2016

How to install and configure the GitLab in Ubuntu 14.04

Step 1:- Check the Hardware requirements.
======
http://doc.gitlab.com/ce/install/requirements.html
Here I am going to use the Ubuntu 14.04 LTS version.

Step 2:- Choose an installation method [ Installation from source or Omnibus package installer ].
======
But they recommend installing the Omnibus package instead of installing GitLab from source. Omnibus GitLab takes just 2 minutes to install and is packaged in the popular deb and rpm formats. Compared to an installation from source, the Omnibus package is faster to install and upgrade, more reliable to upgrade and maintain, and it shortens the response time for our subscribers' issues. A package contains GitLab and all its dependencies (Ruby, PostgreSQL, Redis, Nginx, Unicorn, etc.), so it can be installed without an internet connection.

But I am going to try both methods now.

Step 3:- Install GitLab from Omnibus package.
======
Enterprise Edition is license-based, so I am going to install the Community Edition, which is free.

1. Install and configure the necessary dependencies.
#apt-get install curl openssh-server ca-certificates postfix
If you install Postfix to send email, please select 'Internet Site' during setup.

2. Add the GitLab package server and install the package
#curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
It will create the repo file named "/etc/apt/sources.list.d/gitlab_gitlab-ce.list".
#apt-get install gitlab-ce

It will install the latest version [8.3.2-ce.0 at the time of writing]. Or you can choose your own version,
#apt-get install gitlab-ce=8.3.1-ce.1
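
To list the versions available from the repository before pinning one:

#apt-cache madison gitlab-ce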

3. Configure and start GitLab
#gitlab-ctl reconfigure
Note: The Git repository data path defaults to /var/opt/gitlab/git-data, but we can change it as per our need, as sketched below.
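
A minimal sketch of relocating it (git_data_dir is the Omnibus setting for this in GitLab 8.x; /mnt/gitlab/git-data is a hypothetical path):

#vim /etc/gitlab/gitlab.rb
git_data_dir "/mnt/gitlab/git-data"
#gitlab-ctl reconfigure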

4. Browse to the hostname/IP and login
Username: root
Password: 5iveL!fe --> Default Password

Upon signing in, you'll immediately be prompted for a new password.
After updating your password, you'll be presented with your user dashboard. From here you can create a new project or group. If you want to make a new project, simply click the New Project button, enter the project name, choose the project visibility, and hit the Create Project button. If you want to add more users to your GitLab instance, you can do so by clicking the Admin Area icon (a large gear and two smaller gears) in the upper-right corner of your screen and clicking "New User".

IMPORTANT:-
==========
I have installed GitLab by both methods, the Omnibus package as well as manual installation, and I can't see any major difference after installation.
So I also prefer to install GitLab through the Omnibus package installer. Please skip Step 4 if you installed it from the Omnibus package.

Step 4:- Install GitLab from source
======

1. Packages / Dependencies
#apt-get update && apt-get upgrade -y
Then reboot the server if required.
#apt-get install -y build-essential zlib1g-dev libyaml-dev libssl-dev libgdbm-dev libreadline-dev libncurses5-dev libffi-dev curl openssh-server checkinstall libxml2-dev libxslt-dev libcurl4-openssl-dev libicu-dev logrotate python-docutils pkg-config cmake nodejs

Install Git,
#apt-get install -y git-core
Note: It requires Git version 1.7.10 or higher, for example 1.7.12 or 2.0.0.

In order to receive mail notifications, make sure to install a mail server.
#apt-get install -y postfix
Then select 'Internet Site' and press enter to confirm the hostname.

2. Ruby
GitLab requires Ruby 2.0 or higher while the default version on Ubuntu 14.04 is 1.9.3.
Remove the old Ruby if present
#sudo apt-get remove ruby

Download Ruby and compile it:
mkdir /tmp/ruby && cd /tmp/ruby
curl -O --progress https://cache.ruby-lang.org/pub/ruby/2.1/ruby-2.1.7.tar.gz
echo 'e2e195a4a58133e3ad33b955c829bb536fa3c075  ruby-2.1.7.tar.gz' | shasum -c - && tar xzf ruby-2.1.7.tar.gz
cd ruby-2.1.7
./configure --disable-install-rdoc
make
sudo make install
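
Verify that the newly compiled Ruby is the one on your PATH:

ruby -v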

Install the Bundler Gem:
#sudo gem install bundler --no-ri --no-rdoc

3. Go
Since GitLab 8.0, Git HTTP requests are handled by gitlab-workhorse (formerly gitlab-git-http-server). This is a small daemon written in Go. To install gitlab-workhorse we need a Go compiler.

#curl -O --progress https://storage.googleapis.com/golang/go1.5.1.linux-amd64.tar.gz
echo '46eecd290d8803887dec718c691cc243f2175fe0  go1.5.1.linux-amd64.tar.gz' | shasum -c - && \
  sudo tar -C /usr/local -xzf go1.5.1.linux-amd64.tar.gz
sudo ln -sf /usr/local/go/bin/{go,godoc,gofmt} /usr/local/bin/
rm go1.5.1.linux-amd64.tar.gz
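
Confirm the Go toolchain is reachable:

go version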

4. System Users

Create a git user for GitLab:
#sudo adduser --disabled-login --gecos 'GitLab' git

5. Database
We recommend using a PostgreSQL database.

# Install the database packages
sudo apt-get install -y postgresql postgresql-client libpq-dev

# Login to PostgreSQL
sudo -u postgres psql -d template1

# Create a user for GitLab
# Do not type the 'template1=#', this is part of the prompt
template1=# CREATE USER git CREATEDB;

# Create the GitLab production database & grant all privileges on database
template1=# CREATE DATABASE gitlabhq_production OWNER git;

# Quit the database session
template1=# \q

# Try connecting to the new database with the new user
sudo -u git -H psql -d gitlabhq_production

# Quit the database session
gitlabhq_production> \q

6. Redis
GitLab requires at least Redis 2.8.

$ sudo apt-get install -y python-software-properties
$ sudo add-apt-repository -y ppa:rwky/redis
$ sudo apt-get update
$ sudo apt-get install -y redis-server

Enable the redis socket & set 777 permission in /etc/redis/redis.conf, as shown below.
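
A minimal sketch, assuming the default Debian / Ubuntu socket path (unixsocket and unixsocketperm are standard redis.conf directives):

echo 'unixsocket /var/run/redis/redis.sock' | sudo tee -a /etc/redis/redis.conf
echo 'unixsocketperm 777' | sudo tee -a /etc/redis/redis.conf
sudo service redis-server restart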

# Add git to the redis group
sudo usermod -aG redis git

7. GitLab

# We'll install GitLab into home directory of the user "git"
cd /home/git

Clone the Source

# Clone GitLab repository
sudo -u git -H git clone https://gitlab.com/gitlab-org/gitlab-ce.git -b 8-3-stable gitlab

Note: You can change 8-3-stable to master if you want the bleeding edge version, but never install master on a production server!
Configure It

# Go to GitLab installation folder
cd /home/git/gitlab

# Copy the example GitLab config
sudo -u git -H cp config/gitlab.yml.example config/gitlab.yml

# Update GitLab config file, follow the directions at top of file
sudo -u git -H editor config/gitlab.yml

# Copy the example secrets file
sudo -u git -H cp config/secrets.yml.example config/secrets.yml
sudo -u git -H chmod 0600 config/secrets.yml

# Make sure GitLab can write to the log/ and tmp/ directories
sudo chown -R git log/
sudo chown -R git tmp/
sudo chmod -R u+rwX,go-w log/
sudo chmod -R u+rwX tmp/

# Make sure GitLab can write to the tmp/pids/ and tmp/sockets/ directories
sudo chmod -R u+rwX tmp/pids/
sudo chmod -R u+rwX tmp/sockets/

# Make sure GitLab can write to the public/uploads/ directory
sudo chmod -R u+rwX  public/uploads

# Change the permissions of the directory where CI build traces are stored
sudo chmod -R u+rwX builds/

# Change the permissions of the directory where CI artifacts are stored
sudo chmod -R u+rwX shared/artifacts/

# Copy the example Unicorn config
sudo -u git -H cp config/unicorn.rb.example config/unicorn.rb

# Find number of cores
nproc

# Enable cluster mode if you expect to have a high load instance
# Set the number of workers to at least the number of cores
# Ex. change amount of workers to 3 for 2GB RAM server
sudo -u git -H editor config/unicorn.rb

# Copy the example Rack attack config
sudo -u git -H cp config/initializers/rack_attack.rb.example config/initializers/rack_attack.rb

# Configure Git global settings for git user, used when editing via web editor
sudo -u git -H git config --global core.autocrlf input

# Configure Redis connection settings
sudo -u git -H cp config/resque.yml.example config/resque.yml

# Change the Redis socket path if you are not using the default Debian / Ubuntu configuration
sudo -u git -H editor config/resque.yml

Important Note: Make sure to edit both gitlab.yml and unicorn.rb to match your setup

Configure GitLab DB Settings

# PostgreSQL only:
sudo -u git cp config/database.yml.postgresql config/database.yml

# Make config/database.yml readable to git only
sudo -u git -H chmod o-rwx config/database.yml

Install Gems

#sudo -u git -H bundle install --deployment --without development test mysql aws kerberos

# Run the installation task for gitlab-shell (replace `REDIS_URL` if needed):
sudo -u git -H bundle exec rake gitlab:shell:install REDIS_URL=unix:/var/run/redis/redis.sock RAILS_ENV=production

# By default, the gitlab-shell config is generated from your main GitLab config.
# You can review (and modify) the gitlab-shell config as follows:
sudo -u git -H editor /home/git/gitlab-shell/config.yml

Install gitlab-workhorse

cd /home/git
sudo -u git -H git clone https://gitlab.com/gitlab-org/gitlab-workhorse.git
cd gitlab-workhorse
sudo -u git -H git checkout 0.5.1
sudo -u git -H make

Initialize Database and Activate Advanced Features

# Go to GitLab installation folder
cd /home/git/gitlab
sudo -u git -H bundle exec rake gitlab:setup RAILS_ENV=production

# Type 'yes' to create the database tables.
# When done you will see 'Administrator account created:'

# Alternatively, you can set the root password in the same step:
sudo -u git -H bundle exec rake gitlab:setup RAILS_ENV=production GITLAB_ROOT_PASSWORD=yourpassword

Install Init Script

Download the init script (will be /etc/init.d/gitlab):

sudo cp lib/support/init.d/gitlab /etc/init.d/gitlab

And if you are installing with a non-default folder or user copy and edit the defaults file:

sudo cp lib/support/init.d/gitlab.default.example /etc/default/gitlab

If you installed GitLab in another directory or as a user other than the default you should change these settings in /etc/default/gitlab. Do not edit /etc/init.d/gitlab as it will be changed on upgrade.

Make GitLab start on boot:

sudo update-rc.d gitlab defaults 21

Setup Logrotate

sudo cp lib/support/logrotate/gitlab /etc/logrotate.d/gitlab

Check Application Status

Check if GitLab and its environment are configured correctly:

sudo -u git -H bundle exec rake gitlab:env:info RAILS_ENV=production

Compile Assets

sudo -u git -H bundle exec rake assets:precompile RAILS_ENV=production

Start Your GitLab Instance

sudo service gitlab start
# or
sudo /etc/init.d/gitlab restart

8. Nginx

Note: Nginx is the officially supported web server for GitLab.

sudo apt-get install -y nginx

Site Configuration

Copy the example site config:

sudo cp lib/support/nginx/gitlab /etc/nginx/sites-available/gitlab
sudo ln -s /etc/nginx/sites-available/gitlab /etc/nginx/sites-enabled/gitlab

Make sure to edit the config file to match your setup:

# Change YOUR_SERVER_FQDN to the fully-qualified
# domain name of your host serving GitLab.
# If using Ubuntu default nginx install:
# either remove the default_server from the listen line
# or else sudo rm -f /etc/nginx/sites-enabled/default
sudo editor /etc/nginx/sites-available/gitlab

sudo nginx -t

You should receive 'syntax is ok' and 'test is successful' messages.
sudo service nginx restart

Done!
Double-check Application Status

To make sure you didn't miss anything run a more thorough check with:

sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

If all items are green, then congratulations on successfully installing GitLab!

Initial Login

Visit YOUR_SERVER in your web browser for your first GitLab login. The setup has created a default admin account for you. You can use it to log in:

root
yourpassword

Important Note: On login you'll be prompted to change the password.

Tuesday, 4 August 2015

P2V Migration in VMware

Question:-
---------------

I need to migrate a Linux box to ESXi with no downtime, and with the VMware tool that is not possible, or I don't know how, because you can't install vCenter Converter on a Linux OS and do a live migration with synchronization.

Does anyone know a way to do this?

Solution:-
---------------

Of course, we can migrate a Linux physical machine to an ESXi host using the vSphere Converter tool.

Yes, we can't install the Converter tool on Linux, but we can install it on a Windows machine and then migrate the physical machine to the ESXi host through that Windows VM.
I recently did this task. Please follow the steps below.

1. Install a Windows machine on the ESXi host to which you want to migrate the physical machine.
2. Install the vSphere Converter tool on that Windows machine.
3. Choose the source machine as the Linux physical machine instead of "localhost". If you select localhost, the Windows machine itself will be migrated to the ESXi host.
4. Then select the destination as the ESXi host.
5. Remember to choose the same hardware specs as the physical machine during the migration.

In VMware vCenter Converter,
1. To start the conversion, provide the source physical Linux server's root login details.
2. Provide the destination ESXi host's root login details.
3. Provide the destination VM name and choose the datastore & VM version.
4. It now shows the CPU, disk, memory & NIC details [same as the source server]. We can edit these settings: change the disk type [thick/thin], select the number of CPUs, increase the RAM & change the NIC settings.
5. Configure the Helper VM [destination VM] network/IP setup.

That's all.
It will take time depending on the physical machine's disk size.

Monday, 20 July 2015

KVM & Qemu

QEMU is a powerful emulator, which means that it can emulate a variety of processor types.

Xen uses QEMU for HVM guests, more specifically for the HVM guest's device model. The Xen-specific QEMU is called qemu-dm (short for QEMU device model).

QEMU uses emulation; KVM uses processor extensions (intel-VT) for virtualization.

Both Xen and KVM merge their various functionality into upstream QEMU; that way, upstream QEMU can be used directly to accomplish Xen device model emulation, etc.

Xen is unique in that it has paravirtualized guests that don't require hardware virtualization.

Both Xen and KVM have paravirtualized device drivers that can run on top of the HVM guests.

The QEMU hypervisor is very similar to the KVM hypervisor. Both are controlled through libvirt, both support the same feature set, and all virtual machine images that are compatible with KVM are also compatible with QEMU. The main difference is that QEMU does not support native virtualization. Consequently, QEMU has worse performance than KVM and is a poor choice for a production deployment.
The typical use cases for QEMU are

    Running on older hardware that lacks virtualization support.
    Running the Compute service inside of a virtual machine for development or testing purposes, where the hypervisor does not support native virtualization for guests.

One difference between them is that QEMU runs on a processor without needing hardware virtualization extensions (Intel VT/VT-d, AMD-V) while KVM uses them. Hardware virtualization extensions let you access the hardware on the physical machine directly. The downside is that the KVM codebase can't emulate another architecture.
But when KVM is run on a machine without any hardware virtualization extensions, it falls back to QEMU to run the VM.
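
A quick way to check whether the host CPU exposes these extensions and whether the KVM modules are loaded (a count of 0 from the first command means no VT-x/AMD-V):

egrep -c '(vmx|svm)' /proc/cpuinfo
lsmod | grep kvm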

KVM is a Type 1 hypervisor and QEMU is a Type 2 hypervisor. A Type 1 hypervisor comes installed with the hardware system, like KVM in Linux. KVM provides hardware acceleration for virtual machines, but it needs QEMU to emulate any operating system.

QEMU is a Type 2 hypervisor: it can be installed on an operating system, it runs as an independent process, and the instructions we give to QEMU are executed on the host machine. QEMU can run independently without KVM, as it is an emulator; however, the performance will be poor, since QEMU doesn't do any hardware acceleration.

KVM and QEMU – understanding hardware acceleration

To understand hardware acceleration, we must understand how Virtual Machine CPU works. In real hardware, the Operating System (OS) translates programs into instructions that are executed by the physical CPU. In a virtual machine, the same thing happens. However, the key difference is that the Virtual CPU is actually emulated (or virtualized) by the hypervisor. Therefore, the hypervisor software has to translate the instructions meant for the Virtual CPU and convert it into instructions for the physical CPU. This translation has a big performance overhead.

To minimize this performance overhead, modern processors support virtualization extensions. Intel supports a technology called VT-x and the AMD equivalent is AMD-V. Using these technologies, a slice of the physical CPU can be directly mapped to the Virtual CPU. Hence the instructions meant for the Virtual CPU can be directly executed on the physical CPU slice.

KVM is the Linux kernel module that enables this mapping of physical CPU to Virtual CPU. This mapping provides the hardware acceleration for Virtual Machine and boosts its performance. Moreover, QEMU uses this acceleration when Virt Type is chosen as KVM.

Then what is TCG? If your server CPU does not support virtualization extension, then it is the job of the emulator (or hypervisor) to execute the Virtual CPU instruction using translation. QEMU uses TCG or Tiny Code Generator to optimally translate and execute the Virtual CPU instructions on the physical CPU.
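
As an illustration (a minimal sketch; disk.img is a hypothetical guest image), the same QEMU invocation uses KVM hardware acceleration when -enable-kvm is passed, and falls back to TCG translation without it:

qemu-system-x86_64 -m 1024 -hda disk.img -enable-kvm    # hardware-accelerated via KVM
qemu-system-x86_64 -m 1024 -hda disk.img                # pure emulation via TCG
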
KVM and QEMU – Type 1 or Type 2 hypervisor

The web pages of KVM and QEMU clearly show that KVM needs QEMU to provide full hypervisor functionality. By itself, KVM is more of a virtualization infrastructure provider.

QEMU by itself is a Type-2 hypervisor. It intercepts the instructions meant for Virtual CPU and uses the host operating system to get those instructions executed on the physical CPU. When QEMU uses KVM for hardware acceleration, the combination becomes a Type-1 hypervisor.

KVM and QEMU – the x86 dependency

Since KVM is really a driver for the physical CPU capabilities, it is very tightly associated with the CPU architecture (the x86 architecture). This means that the benefits of hardware acceleration will be available only if the Virtual Machine CPU also uses the same architecture (x86).

If a VM needs to run a PowerPC CPU but the hypervisor server has an Intel CPU, then KVM will not work. You must use QEMU as the Virt Type and live with the performance overhead.

KVM and QEMU – the conclusion

Based on the discussion above, it is quite clear that QEMU plays a very critical role in Linux based Open Source virtualization solutions. For all practical applications, QEMU needs KVM’s performance boost. However, it is clear that KVM by itself cannot provide the complete virtualization solution. It needs QEMU.

Reference: -
http://www.innervoice.in/blogs/2014/03/10/kvm-and-qemu/