Wednesday, 22 April 2015

Active vs Passive ftp

Active and passive are the two modes that FTP can run in. FTP uses two channels between client and server, the command channel and the data channel, which are actually separate TCP connections. The command channel is for commands and responses, the data channel is for actually transferring files. It's a nifty way of sending commands to the server without having to wait for the current data transfer to finish.
In active mode, the client establishes the command channel (from client port X to server port 21(b)) but the server establishes the data channel (from server port 20(b) to client port Y, where Y has been supplied by the client).
In passive mode, the client establishes both channels. In that case, the server tells the client which port should be used for the data channel.
Passive mode is generally used in situations where the FTP server is not able to establish the data channel. One of the major reasons for this is network firewalls. While you may have a firewall rule which allows you to open up FTP channels to ftp.microsoft.com, Microsoft's servers may not be able to open the data channel back through your firewall.
Passive mode solves this by opening both channels from the client side. To make this clearer:
Active mode:
  • Client opens up command channel from client port 2000(a) to server port 21(b).
  • Client sends PORT 2001(a) to server and server acknowledges on command channel.
  • Server opens up data channel from server port 20(b) to client port 2001(a).
  • Client acknowledges on data channel.
Passive mode:
  • Client opens up command channel from client port 2000(a) to server port 21(b).
  • Client sends PASV to server on command channel.
  • Server sends back (on command channel) PORT 1234(a) after starting to listen on that port.
  • Client opens up data channel from client 2001(a) to server port 1234(a).
  • Server acknowledges on data channel.
At this point, the command and data channels are both open.
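
To see the difference from a command-line client, curl can force either mode (ftp.example.com is just a placeholder host here):

curl --ftp-pasv ftp://ftp.example.com/pub/file.txt -o file.txt     --> passive mode (curl's default): client opens both channels
curl --ftp-port - ftp://ftp.example.com/pub/file.txt -o file.txt   --> active mode: client sends PORT and the server connects back for the data channel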

Saturday, 11 April 2015

Create a simple webserver container on Docker

1. Start the container with a shell,

[root@proxy ~]# docker run -i -t -p 8080:80 214a4932132a /bin/bash

This maps host port 8080 to container port 80, so that the container's web content can be reached through the host.
[It is similar to DNAT; a quick check is shown below.]
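
As a quick, version-dependent sanity check, you can list the DNAT rule Docker adds for the published port (the chain name may differ on older Docker releases):

[root@proxy ~]# iptables -t nat -nL DOCKER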

2. Install the web server package inside the container,

At first, the containers have only internal IP addresses. To access the Internet, SNAT (Source Network Address Translation, also known as IP masquerading) should be configured on the Docker host.

[root@proxy ~]# iptables -t nat -A POSTROUTING -s 172.17.0.0/24 -o eth0 -j SNAT --to 31.x.x.x
172.17.0.0 - Container's ip range
31.x.x.x - Host public IP
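
To confirm the rule is in place and that the host will actually forward the containers' traffic (31.x.x.x and eth0 are taken from the example above):

[root@proxy ~]# iptables -t nat -nL POSTROUTING
[root@proxy ~]# sysctl net.ipv4.ip_forward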

Now the container can access the Internet, so we can install the Apache web server.

[root@c576532e21ab /]# yum install httpd -y

3. Create custom content in the document root,

[root@c576532e21ab /]# echo "Hello world" > /var/www/html/index.html

4. Test the web server,

We can't start the httpd service with the service or /etc/init.d commands inside the container, so we are going to run Apache directly in the foreground.

[root@c576532e21ab /]# /usr/sbin/httpd -D FOREGROUND

Then check the listening port and the web server from the host,

[root@proxy ~]# lsof -i tcp:8080
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
docker  2215 root    4u  IPv6 112104      0t0  TCP *:webcache (LISTEN)
[root@proxy ~]# curl http://172.17.0.4:80
Hello world

172.17.0.4 - webserver container IP.
You can also test it through the host server's IP on port 8080.

5. Create the startup script to run service httpd,

Normally, when the process a Docker container was started with exits, the container stops immediately.
So we run the httpd service in the foreground to keep serving HTTP requests persistently.
For that we create a simple startup script that runs the httpd service in the foreground,
because the httpd daemon forks into the background by default (and a backgrounded daemon will not keep the container running).

[root@c576532e21ab /]# cat /usr/sbin/httpd_startup_script.sh
#!/bin/bash
rm -rf /run/httpd
install -m 710 -o root -g apache -d /run/httpd
install -m 700 -o apache -g apache -d \
    /run/httpd/htcacheclean
exec /usr/sbin/httpd -D FOREGROUND

[root@c576532e21ab /]# /usr/sbin/httpd_startup_script.sh &

6. Create an image from this container with httpd service,

[root@proxy ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                  NAMES
c576532e21ab        centos:latest       "/bin/bash"         32 minutes ago      Up 32 minutes       0.0.0.0:8080->80/tcp   silly_lumiere      

[root@proxy ~]# docker commit c576532e21ab centos:httpd
296301f4b66d5d10d714284125e1a16148e2cb65b7954d461e0dc1bc2ec842f1

[root@proxy ~]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
centos              httpd               296301f4b66d        29 seconds ago      310.8 MB
centos              latest              214a4932132a        14 hours ago        229.6 MB
ubuntu              latest              d0955f21bf24        3 weeks ago         192.7 MB

Now we have a CentOS image with the Apache web service pre-installed.

7. Start the container with the httpd startup script in detached mode,
Detach (-d) starts the container in detached mode so it runs in the background; otherwise you get an interactive container (-i).

[root@proxy ~]# docker run -d -p 8080:80 296301f4b66d /usr/sbin/httpd_startup_script.sh
d4696f08eae37a87594247edf1e8028bdc641b28d4e3438842c4e2456c162afe
[root@proxy ~]#
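
To verify that the detached container is serving requests, you can reuse the same 8080 mapping from above (this should return the "Hello world" page created earlier):

[root@proxy ~]# docker ps
[root@proxy ~]# curl http://localhost:8080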

8. We can also save the Docker image as a tar archive and store it locally.

[root@proxy ~]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
centos              httpd               296301f4b66d        7 minutes ago       310.8 MB
centos              latest              214a4932132a        15 hours ago        229.6 MB
ubuntu              latest              d0955f21bf24        3 weeks ago         192.7 MB

[root@proxy ~]# docker save -o centos_httpd.tar centos:httpd
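
The saved archive can later be loaded back, for example on another Docker host, with docker load:

[root@proxy ~]# docker load -i centos_httpd.tar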

Friday, 10 April 2015

Install and Configure the Docker on Centos

We are going to install and configure Docker on CentOS.

1. Enable the epel repository.
2. yum install docker-io
3. Start the docker daemon

[root@proxy ~]# service docker start
Starting cgconfig service:                                 [  OK  ]
Starting docker:                                       [  OK  ]
[root@proxy ~]# chkconfig docker on

4. Download any public container images and store them in a local repository,

[root@proxy ~]# docker pull ubuntu
ubuntu:latest: The image you are pulling has been verified
511136ea3c5a: Pull complete
f3c84ac3a053: Pull complete
a1a958a24818: Pull complete
9fec74352904: Pull complete
d0955f21bf24: Pull complete
Status: Downloaded newer image for ubuntu:latest

5. To check the locally downloaded container images,

[root@proxy ~]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu              latest              d0955f21bf24        3 weeks ago         192.7 MB

6. Start/boot a container from that image you downloaded,


[root@proxy ~]# docker run -i -t d0955f21bf24 /bin/bash
root@267515ea78a9:/#
root@267515ea78a9:/# cat /etc/issue
Ubuntu 14.04.2 LTS \n \l

7. Once you have exited from the container, you can check the container (CT) status,

[root@proxy ~]# docker ps  -->running CT
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@proxy ~]# docker ps -a  -->All CT's
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
267515ea78a9        ubuntu:latest       "/bin/bash"         2 minutes ago       Exited (0) 15 seconds ago                       silly_mcclintock   
[root@proxy ~]#

8. We can also start the stopped container and access it,

[root@proxy ~]# docker start 267515ea78a9
267515ea78a9
[root@proxy ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
267515ea78a9        ubuntu:latest       "/bin/bash"         4 minutes ago       Up 5 seconds                            silly_mcclintock   
[root@proxy ~]#

To access it, you can use the "attach" command,

[root@proxy ~]# docker attach 267515ea78a9
root@267515ea78a9:/#
root@267515ea78a9:/# cat /etc/issue
Ubuntu 14.04.2 LTS \n \l

To completely remove the stopped container, use the "rm" command,

[root@proxy ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
267515ea78a9        ubuntu:latest       "/bin/bash"         7 minutes ago       Exited (0) 39 seconds ago                       silly_mcclintock   
[root@proxy ~]# docker rm 267515ea78a9
267515ea78a9
[root@proxy ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

9. By default, a docker0 Linux bridge is created automatically, and every container you create is connected to the docker0 bridge interface.

E.G,

[root@proxy ~]# docker run -i -t d0955f21bf24 /bin/bash
root@6c559dd1549f:/# ip a
4: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
5: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever


Here the container has automatically been assigned the IP address 172.17.0.2 by Docker and is connected to the docker0 bridge.
You can also see that the docker0 bridge has an IP address on the same subnet.

[root@proxy ~]# brctl show
bridge name    bridge id        STP enabled    interfaces
docker0        8000.eac87edcdbb9    no        vethaa856b3

[root@proxy ~]# ifconfig
docker0   Link encap:Ethernet  HWaddr EA:C8:7E:DC:DB:B9 
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::54fe:b5ff:fe83:7bee/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:10 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:624 (624.0 b)  TX bytes:1070 (1.0 KiB)

[root@proxy ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
31.x.x.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
0.0.0.0         31.x.x.1     0.0.0.0         UG    0      0        0 eth0

10. Docker uses a Linux bridge to interconnect containers with each other and to connect them to external networks.
Hence you can create a custom Linux bridge with the brctl command to interconnect containers, assign a separate subnet to the bridge (e.g. 192.168.0.0/24), and have Docker hand out IP addresses from that subnet to the containers. We will look at this briefly on a separate page.

11. We can also map ports from the host to the container, as well as folders.
E.g. if we map the folder /usr/local/bin from the host into the container, then we can use the binaries in that /usr/local/bin folder inside the container, as in the sketch below.
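
A minimal sketch of such a bind mount, reusing the centos:httpd image built earlier (the :ro suffix is optional and simply mounts the folder read-only):

[root@proxy ~]# docker run -i -t -v /usr/local/bin:/usr/local/bin:ro 296301f4b66d /bin/bash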

 Note:
docker run -d -p 80:80 <ct-image> /bin/bash

Detach (-d) the container so it runs in the background (otherwise you get an interactive container -i).

Refer: https://access.redhat.com/articles/881893#getatomic

Thursday, 9 April 2015

How to install and configure GlusterFS server on centos

We are going to set up a GlusterFS storage server with four nodes.
1. On all four nodes, install the GlusterFS and XFS packages,

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
yum install glusterfs-server xfsprogs
chkconfig glusterd on
service glusterd start

2. On all of your cluster nodes, create a new 2 GB LV called brick1 in the vgsrv VG and format this LV with an XFS filesystem with 512-byte inodes.

lvcreate -L 2G -n brick1 vgsrv
mkfs.xfs -i size=512 /dev/vgsrv/brick1
mkdir /server1_export1
echo "/dev/vgsrv/brick1 /server1_export1 xfs defaults 0 1" >> /etc/fstab
mount -a

3. From server1, add the other three nodes as trusted peers.


[root@proxy ~]# gluster peer probe server2{ip}
[root@proxy ~]# gluster peer probe server3{ip}
[root@proxy ~]# gluster peer probe server4{ip}

[root@proxy ~]# gluster peer status
Number of Peers: 3

Hostname: server2
Uuid: a381532b-81a0-41c7-9adb-cd29f9f38158
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: b289f724-1de9-47fb-8913-41ce82237f65
State: Peer in Cluster (Connected)

Hostname: server4
Uuid: 9ac6df3e-441e-495b-84cb-2b9d50a6099c
State: Peer in Cluster (Connected)

There are three basic types of volume (a volume is a cluster of bricks); example create commands are sketched below:
1. Distributed
2. Striped
3. Replicated
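
For reference, a minimal sketch of how each type is created (the volume names and brick paths here are placeholders, not part of this setup):

gluster volume create distvol server1:/export1/brick server2:/export1/brick              --> distributed (default)
gluster volume create stripevol stripe 2 server1:/export1/brick server2:/export1/brick   --> striped
gluster volume create repvol replica 2 server1:/export1/brick server2:/export1/brick     --> replicated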

4. We are going to create a new replicated GlusterFS volume using bricks from server1 & server2.

On server1,

[root@proxy ~]# gluster volume create newvol replica 2 server1:/server1_export1/brick server2:/server2_export1/brick
volume create: newvol: success: please start the volume to access data

Note: The brick directory should ideally be a sub-directory of a mount point (and not a mount point directory itself) for ease of administration.

[root@proxy ~]# gluster volume start newvol
volume start: newvol: success

[root@proxy ~]# gluster volume status newvol
Status of volume: newvol
Gluster process                        Port    Online    Pid
------------------------------------------------------------------------------
Brick server1:/server1_export1/brick        49152    Y    1277
Brick server2:/server2_export1/brick        49152    Y    1300
NFS Server on localhost                    N/A    N    N/A
Self-heal Daemon on localhost                N/A    Y    1298
NFS Server on server2                N/A    N    N/A
Self-heal Daemon on server2            N/A    Y    1321
NFS Server on server4                N/A    N    N/A
Self-heal Daemon on server4            N/A    Y    1287
NFS Server on server3                N/A    N    N/A
Self-heal Daemon on server3            N/A    Y    1315

Task Status of Volume newvol
------------------------------------------------------------------------------
There are no active volume tasks

[root@proxy ~]# gluster volume info newvol

Volume Name: newvol
Type: Replicate
Volume ID: af11dc1a-1536-40a0-8739-7213b462d6c2
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1:/server1_export1/brick
Brick2: server2:/server2_export1/brick

5. Configure a storage client on your local desktop machine,

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
yum -y install glusterfs-fuse
mkdir /newvol
mount -t glusterfs server1:/newvol /newvol


~]# mount
/dev/xvda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
server1:/newvol on /newvol type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

Create some test files under mount point /newvol and examine the contents of /serverX_export1 directory on server1 & server2.

~]#touch /newvol/file{0..9}

[root@proxy brick]# ls -la /server1_export1/brick/
total 0
drwxr-xr-x  3 root root 143 Apr  9 10:04 .
drwxr-xr-x  3 root root  18 Apr  9 09:53 ..
-rw-r--r--  2 root root   0 Apr  9 10:04 file0
-rw-r--r--  2 root root   0 Apr  9 10:04 file1
-rw-r--r--  2 root root   0 Apr  9 10:04 file2
-rw-r--r--  2 root root   0 Apr  9 10:04 file3
-rw-r--r--  2 root root   0 Apr  9 10:04 file4
-rw-r--r--  2 root root   0 Apr  9 10:04 file5
-rw-r--r--  2 root root   0 Apr  9 10:04 file6
-rw-r--r--  2 root root   0 Apr  9 10:04 file7
-rw-r--r--  2 root root   0 Apr  9 10:04 file8
-rw-r--r--  2 root root   0 Apr  9 10:04 file9
drw------- 16 root root 170 Apr  9 10:04 .glusterfs

[root@sara2 ~]# ls -la /server2_export1/brick/
total 0
drwxr-xr-x  3 root root 143 Apr  9 10:04 .
drwxr-xr-x  3 root root  18 Apr  9 09:53 ..
-rw-r--r--  2 root root   0 Apr  9 10:04 file0
-rw-r--r--  2 root root   0 Apr  9 10:04 file1
-rw-r--r--  2 root root   0 Apr  9 10:04 file2
-rw-r--r--  2 root root   0 Apr  9 10:04 file3
-rw-r--r--  2 root root   0 Apr  9 10:04 file4
-rw-r--r--  2 root root   0 Apr  9 10:04 file5
-rw-r--r--  2 root root   0 Apr  9 10:04 file6
-rw-r--r--  2 root root   0 Apr  9 10:04 file7
-rw-r--r--  2 root root   0 Apr  9 10:04 file8
-rw-r--r--  2 root root   0 Apr  9 10:04 file9
drw------- 16 root root 170 Apr  9 10:04 .glusterfs

Both brick directories on both nodes have the same contents, because this is a replicated volume.

-Poweroff your server2,

[root@sara2 ~]# poweroff

Then create a 512 MB file from the client and verify it under /server1_export1/brick/,

~]# dd if=/dev/zero of=/newvol/bigfile bs=1M count=512

[root@proxy brick]# ls -lh /server1_export1/brick/bigfile
-rw-r--r-- 2 root root 512M Apr  9 10:13 /server1_export1/brick/bigfile

-Start your server2,

Once it is back up, examine the contents of /server2_export1/brick.
Initially you can't see the "bigfile", which means the gluster service has not yet started the self-healing procedure.
So examine the list of files that still need to be healed, on server1,

[root@proxy brick]# gluster volume heal newvol info
Brick proxy.sara.com:/server1_export1/brick/
Number of entries: 2
/
/bigfile

Brick sara2.test.com:/server2_export1/brick/
Number of entries: 0

Alternatively, you can run the command below to start self-heal manually,

[root@proxy brick]# gluster volume heal newvol
Launching heal operation to perform index self heal on volume newvol has been successful

6. Expand a volume on Glusterfs storage,

Here we are going to expand the "newvol" volume from a 2-brick replica (server1 and server2) to a 2x2 distributed-replicated volume across all four nodes.

First add the bricks from server3 & server4 on server1,

[root@proxy brick]# gluster volume add-brick newvol \
> server3:/server3_export1/brick \
> server4:/server4_export1/brick
volume add-brick: success

[root@proxy brick]# gluster volume info newvol
 Volume Name: newvol
Type: Distributed-Replicate
Volume ID: af11dc1a-1536-40a0-8739-7213b462d6c2
Status: Started
Number of Bricks: 2 x 2 = 4  --> [2 distribute subvolumes x 2-way replica: replica set 1 = server1 & server2, replica set 2 = server3 & server4]
Transport-type: tcp
Bricks:
Brick1: server1:/server1_export1/brick
Brick2: server2:/server2_export1/brick
Brick3: server3:/server3_export1/brick
Brick4: server4:/server4_export1/brick

Examine the contents of the brick directory on server3 & server4; it is still empty.

[root@sara4 ~]# ls -la /server4_export1/brick/
total 0
drwxr-xr-x 3 root root 23 Apr  9 10:25 .
drwxr-xr-x 3 root root 18 Apr  9 09:53 ..
drw------- 6 root root 80 Apr  9 10:25 .glusterfs

Hence, from server1, start rebalancing the files on "newvol" across all bricks,

[root@proxy brick]# gluster volume rebalance newvol start
volume rebalance: newvol: success: Initiated rebalance on volume newvol.
Execute "gluster volume rebalance <volume-name> status" to check status.
ID: 8aefeaaa-697a-4efe-9017-18a52e4f07ab

[root@proxy brick]# gluster volume rebalance newvol status
        Node   Rebalanced-files      size   scanned   failures   skipped      status   run time in secs
   ---------   ----------------   -------   -------   --------   -------   ---------   ----------------
   localhost                  1   512.0MB        11          0         0   completed               1.00
     server2                  0    0Bytes        11          0         0   completed               0.00
     server3                  0    0Bytes        11          0         0   completed               0.00
     server4                  0    0Bytes        11          0         0   completed               0.00
volume rebalance: newvol: success:

Now examine the contents of all the nodes.
The files have been distributed between the two replica sets: some files live on the server1/server2 pair and the rest on the server3/server4 pair.

7. Replacing a Brick on GlusterFS,

Here we are going to replace the (assumed) failed brick /server4_export1 with a new brick /server4_export2 on server4.

lvcreate -L 2G -n brick2 vgsrv
mkfs.xfs -i size=512 /dev/vgsrv/brick2
mkdir /server4_export2
echo "/dev/vgsrv/brick2 /server4_export2 xfs defaults 0 1" >> /etc/fstab
mount -a

[root@sara4 brick]# mount
/dev/xvda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/mapper/vgsrv-brick1 on /server4_export1 type xfs (rw)
/dev/mapper/vgsrv-brick2 on /server4_export2 type xfs (rw)

Start the replacement,

On server4,
[root@sara4 brick]# gluster volume replace-brick newvol \
>  server4:/server4_export1/brick \
>  server4:/server4_export2/brick start
All replace-brick commands except commit force are deprecated. Do you want to continue? (y/n) y
volume replace-brick: success: replace-brick started successfully
ID: a1a82d01-13d5-44cf-acf3-b5965482d7de

[root@sara4 brick]# gluster volume replace-brick newvol  server4:/server4_export1/brick  server4:/server4_export2/brick status
All replace-brick commands except commit force are deprecated. Do you want to continue? (y/n) y
volume replace-brick: success: Number of files migrated = 11    Migration complete

Once the migration is complete, commit the change.

[root@sara4 brick]# gluster volume replace-brick newvol  server4:/server4_export1/brick  server4:/server4_export2/brick commit
All replace-brick commands except commit force are deprecated. Do you want to continue? (y/n) y
volume replace-brick: success: replace-brick commit successful

Then examine the newly added brick,

[root@sara4 brick]# ls -la /server4_export2/brick/
total 128
drwxr-xr-x  3 root root       157 Apr  9 11:03 .
drwxr-xr-x  3 root root        18 Apr  9 11:02 ..
---------T  2 root root 536870912 Apr  9 10:34 bigfile
-rw-r--r--  2 root root         0 Apr  9 10:04 file0
-rw-r--r--  2 root root         0 Apr  9 10:04 file1
-rw-r--r--  2 root root         0 Apr  9 10:04 file2
---------T  2 root root         0 Apr  9 10:29 file3
---------T  2 root root         0 Apr  9 10:29 file4
-rw-r--r--  2 root root         0 Apr  9 10:04 file5
-rw-r--r--  2 root root         0 Apr  9 10:04 file6
---------T  2 root root         0 Apr  9 10:29 file7
-rw-r--r--  2 root root         0 Apr  9 10:04 file8
---------T  2 root root         0 Apr  9 10:29 file9
drw------- 17 root root       179 Apr  9 11:04 .glusterfs


Tuesday, 7 April 2015

Create XFS filesystem on Centos

Server : Test1


Step 1:
------------
Create two logical volumes of 2 GB and 512 MB respectively.

[root@sara2 ~]# lvcreate -n xfsdata -L 2G vgsrv -->xfs filesystem
  Logical volume "xfsdata" created
[root@sara2 ~]# lvcreate -n xfsjournal -L 512M vgsrv -->external journal setup
  Logical volume "xfsjournal" created

[root@sara2 ~]# yum install xfsprogs

Create a new xfs filesystem with external journal.

[root@sara2 ~]# mkfs -t xfs -l logdev=/dev/vgsrv/xfsjournal /dev/vgsrv/xfsdata
meta-data=/dev/vgsrv/xfsdata     isize=256    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =/dev/vgsrv/xfsjournal  bsize=4096   blocks=131072, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


Create a directory and mount it.


mkdir /xfs
/dev/vgsrv/xfsdata   /xfs  xfs logdev=/dev/vgsrv/xfsjournal 1 2 -->/etc/fstab
mount /xfs
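
To confirm that the external log device is in use, you can inspect the mounted filesystem; the output should mirror the mkfs summary above.

[root@sara2 ~]# xfs_info /xfs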

Growing an XFS filesystem[on fly]:
--------------------------------------------------------


- We can grow an XFS filesystem while it is mounted; in fact, XFS can only be grown online (there is no way to grow it while unmounted).
[root@sara2 ~]# df -h /xfs
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vgsrv-xfsdata
                      2.0G   33M  2.0G   2% /xfs

[root@sara2 ~]# lvextend -L 4G /dev/vgsrv/xfsdata
  Size of logical volume vgsrv/xfsdata changed from 2.00 GiB (512 extents) to 4.00 GiB (1024 extents).
  Logical volume xfsdata successfully resized

[root@sara2 ~]# xfs_growfs /xfs
meta-data=/dev/mapper/vgsrv-xfsdata isize=256    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =external               bsize=4096   blocks=131072, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
xfs_growfs: log growth not supported yet
data blocks changed from 524288 to 1048576

[root@sara2 ~]# df -h /xfs
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vgsrv-xfsdata
                      4.0G   33M  4.0G   1% /xfs


Xfs maintenance:
-----------------

Create some fragmented files to perform xfs maintenance,

for FILE in file{0..3}; do dd if=/dev/zero  of=/xfs/${FILE} bs=4M count=100 & done

Examine the fragmentation of those files, then run the XFS defragmentation tool (xfs_fsr) to defragment all your mounted XFS filesystems,

[root@sara2 ~]# filefrag /xfs/file*
/xfs/file0: 2 extents found
/xfs/file1: 4 extents found
/xfs/file2: 4 extents found
/xfs/file3: 3 extents found
[root@sara2 ~]# xfs_fsr -v
Found 1 mounted, writable, XFS filesystems
xfs_fsr -m /etc/mtab -t 7200 -f /var/tmp/.fsrlast_xfs ...
START: pass=0 ino=0 /dev/mapper/vgsrv-xfsdata /xfs
/xfs start inode=0
ino=132
extents before:4 after:1 DONE ino=132
ino=134
extents before:4 after:1 DONE ino=134
ino=133
extents before:3 after:2      ino=133
ino=131
extents before:2 after:1 DONE ino=131
/xfs start inode=0
ino=133
extents before:2 after:1 DONE ino=133
/xfs start inode=0
/xfs start inode=0
/xfs start inode=0
/xfs start inode=0
/xfs start inode=0
/xfs start inode=0
/xfs start inode=0
/xfs start inode=0
Completed all 10 passes
[root@sara2 ~]# filefrag /xfs/file*
/xfs/file0: 1 extent found
/xfs/file1: 1 extent found
/xfs/file2: 1 extent found
/xfs/file3: 1 extent found
[root@sara2 ~]#

Before running the xfs_fsr tool, most files were spread out over multiple extents; afterwards each file occupies just one extent.

Repairing an XFS filesystem,

umount /xfs
xfs_repair -n -l /dev/vgsrv/xfsjournal /dev/vgsrv/xfsdata   --> -n: dry run, only report problems
xfs_repair -l /dev/vgsrv/xfsjournal /dev/vgsrv/xfsdata      --> real repair, pointing at the external log
mount /xfs

Backup & Restore,

- Perform a level 0 (full) backup of the /xfs filesystem and pipe the backup through the xz compression tool into a file.

[root@sara2 ~]# yum -y install xfsdump

[root@sara2 ~]# xfsdump -L full -M dumpfile -l 0 - /xfs | xz > /xfs.xz

[root@sara2 ~]# xfsdump -I
file system 0:
    fs id:        2c82df1e-36f0-4770-a9b7-796fb9d6c78d
    session 0:
        mount point:    sara2.test.com:/xfs
        device:        sara2.test.com:/dev/mapper/vgsrv-xfsdata
        time:        Wed Apr  8 16:12:10 2015
        session label:    "full"
        session id:    db7f134c-7111-4fdb-978d-a3a9e16f4a79
        level:        0
        resumed:    NO
        subtree:    NO
        streams:    1
        stream 0:
            pathname:    stdio
            start:        ino 0 offset 0
            end:        ino 0 offset 0
            interrupted:    YES
            media files:    0
xfsdump: Dump Status: SUCCESS

Restore:
xzcat /xfs.xz | xfsrestore - /xfs

Sunday, 5 April 2015

Reverting LVM changes

On test1, create a 2 GB logical volume called resizeme in the vgsrv volume group and create an ext4 filesystem on it, then mount the filesystem and create some test files.

[root@test1 ~]# vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  vgsrv   1   1   0 wz--n- 20.00g 16.00g
[root@test1 ~]# lvcreate -n resizeme -L2G vgsrv
  Logical volume "resizeme" created


[root@test1 ~]# mkfs -t ext4 /dev/vgsrv/resizeme
[root@test1 ~]# mount /dev/vgsrv/resizeme /mnt
[root@test1 ~]# touch /mnt/file{0..9}
[root@test1 ~]# umount /mnt

You want to resize your filesystem to 1 GB, but you accidentally forget to shrink the filesystem first (with resize2fs).

[root@test1 ~]# lvresize -L1G /dev/vgsrv/resizeme
  WARNING: Reducing active logical volume to 1.00 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce resizeme? [y/n]: y
  Size of logical volume vgsrv/resizeme changed from 2.00 GiB (512 extents) to 1.00 GiB (256 extents).
  Logical volume resizeme successfully resized

[root@test1 ~]# mount /dev/vgsrv/resizeme /mnt
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vgsrv-resizeme,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

Locate the LVM archive files, find the one that was created just before the LVM shrink, and then restore it.

[root@test1 ~]# vgcfgrestore -l vgsrv
  
  File:        /etc/lvm/archive/vgsrv_00000-582231926.vg
  VG name:        vgsrv
  Description:    Created *before* executing 'vgcreate vgsrv /dev/xvda3'
  Backup Time:    Wed Apr  1 12:18:57 2015

  
  File:        /etc/lvm/archive/vgsrv_00001-1338397565.vg
  VG name:        vgsrv
  Description:    Created *before* executing 'lvcreate -n storage -L 4G vgsrv'
  Backup Time:    Wed Apr  1 12:19:26 2015

  
  File:        /etc/lvm/archive/vgsrv_00002-1702898714.vg
  VG name:        vgsrv
  Description:    Created *before* executing 'lvcreate -n resizeme -L2G vgsrv'
  Backup Time:    Mon Apr  6 06:04:56 2015

  
  File:        /etc/lvm/archive/vgsrv_00003-2028074156.vg
  VG name:        vgsrv
  Description:    Created *before* executing 'lvresize -L1G /dev/vgsrv/resizeme'
  Backup Time:    Mon Apr  6 06:10:03 2015

Here we select the VG backup file that was created before executing lvresize,

[root@test1 ~]# vgcfgrestore -f /etc/lvm/archive/vgsrv_00003-2028074156.vg vgsrv
  Restored volume group vgsrv


[root@test1 ~]# mount /dev/vgsrv/resizeme  /mnt
[root@test1 ~]# ls -la /mnt
total 24
drwxr-xr-x  3 root root  4096 Apr  6 06:17 .
dr-xr-xr-x 23 root root  4096 Apr  1 12:16 ..
-rw-r--r--  1 root root     0 Apr  6 06:17 file0
-rw-r--r--  1 root root     0 Apr  6 06:17 file1
-rw-r--r--  1 root root     0 Apr  6 06:17 file2
-rw-r--r--  1 root root     0 Apr  6 06:17 file3
-rw-r--r--  1 root root     0 Apr  6 06:17 file4


Create and add quorum disk on cluster

Requirement: a two-node cluster with multipathed iSCSI storage.

Test1: Cluster server

Test22: Node1

Test3: Node2

On test22, create a new 128 MB partition on the multipathed storage.

[root@test22 ~]# fdisk -cu /dev/mapper/1IET_00010001

On all nodes, run the commands below so that the kernel and the multipath daemon learn about the new partition.

# partprobe ; multipath -r

On test22, create the quorum disk on that new partition,

[root@test22 ~]# mkqdisk -c /dev/mapper/1IET_00010001p3 -l qdisk

On test3, verify that the quorum disk is visible.

[root@test3 ~]# mkqdisk -L
mkqdisk v3.0.12.1

/dev/block/253:3:
/dev/disk/by-id/dm-name-1IET_00010001p3:
/dev/disk/by-id/dm-uuid-part3-mpath-1IET_00010001:
/dev/dm-3:
/dev/mapper/1IET_00010001p3:
    Magic:                eb7a62c2
    Label:                qdisk
    Created:              Mon Apr  6 05:21:39 2015
    Host:                 test22.com
    Kernel Sector Size:   512
    Recorded Sector Size: 512


Then add this quorum disk to the cluster via Luci.

Friday, 3 April 2015

Bash Scripts

Need a script that detects whether the OS is .deb-based or .rpm-based and installs a package accordingly.

 

#!/bin/bash

# Detect the distro family by the available package manager
if command -v yum >/dev/null 2>&1
then
    yum -y update
    yum -y install net-snmp      # on RPM-based systems the snmpd daemon ships in the net-snmp package
elif command -v apt-get >/dev/null 2>&1
then
    apt-get update
    apt-get install -y snmpd
fi

# Start the service if it is not already running
if ! service snmpd status >/dev/null 2>&1
then
    echo "Would you like to start your service? Enter yes or no"
    read ans
    if [ "$ans" = "yes" ]
    then
        service snmpd start
    else
        echo "Your service has not been started"
    fi
fi
echo "Please check your installation"

 

 

Content Sync:

 

#!/bin/bash
#Web Contents
SRCDIR=/var/www/vhosts/grannliv.granngarden.se/
DESTDIR=root@10.224.44.126:/var/www/vhosts/grannliv.granngarden.se/
rsync -azvorgp --stats --progress --human-readable $SRCDIR $DESTDIR | tail -n 16 | mail -s 'web01 to ggab_test web-content sync report' test@gmail.com
# Database Contents
mysql_dump_path='/var/backups/mysql'      # no spaces around '=' in shell assignments
database='iklasgra17402se12271_'
/usr/bin/mysqldump -u root -p<web01-mysql-password> -e -E --routines --single-transaction $database > $mysql_dump_path/$database.sql
/usr/bin/mysql -h 10.224.44.126 -u root -p<ggab_test-mysql-password> $database < $mysql_dump_path/$database.sql \
  && rm -f $mysql_dump_path/$database.sql \
  && echo "$database synced to ggab_test" | mail -s 'web01 to ggab_test db-content synced' test@gmail.com

How to set up innodb_file_per_table on a running MySQL server with existing databases


1) MySQLDump all databases into a SQL text file (call it SQLData.sql)

2) Drop all databases (except mysql schema, phpmyadmin/mysql databases)

3) Stop mysql

4) Add the following lines to your /etc/my.cnf

[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G

Note: whatever you set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size.

5) Delete ibdata1, ib_logfile0 and ib_logfile1

At this point, there should only be the mysql schema in /var/lib/mysql

6) Restart mysql

This will recreate ibdata1 at 10MB, and ib_logfile0 and ib_logfile1 at 1G each

7) Reload SQLData.sql into mysql to restore your data

ibdata1 will grow, but only contain table metadata. Each InnoDB table will exist outside of ibdata1.

Now suppose you have an InnoDB table named mydb.mytable. If you go into /var/lib/mysql/mydb, you will see two files representing the table:

mytable.frm (Storage Engine Header)
mytable.ibd (Home of Table Data and Table Indexes for mydb.mytable)

ibdata1 will never contain InnoDB data and indexes anymore.
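
For reference, a minimal sketch of the command sequence for steps 1, 3, 5, 6 and 7, assuming a default /var/lib/mysql datadir and a mysqld init script (adjust names and paths to your setup):

mysqldump -u root -p --all-databases --routines --events > SQLData.sql                --> 1) dump everything
service mysqld stop                                                                   --> 3) stop mysql
rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1    --> 5) only after verifying the dump
service mysqld start                                                                  --> 6) recreates ibdata1 and the redo logs
mysql -u root -p < SQLData.sql                                                        --> 7) reload your data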

Thursday, 2 April 2015

Configure a service/resource group in cluster[Luci]

To create a web server service, we will need the resources below:

1. A file system for the document root[previous blog].
2. Floating ip address where client connect to service[Configured in Luci].
3. Httpd daemon listening for requests[Configured in Luci].

Under the Resources and Service Groups tabs we can configure steps 2 & 3.

Now the resource group (the Apache service) is configured in the cluster across the three nodes.

Configure a File system resource in cluster[luci]


Step 1:

Server test1 has a 4 GB iSCSI target /dev/vgsrv/storage; we provide this share to the other three nodes (test2, test3 and test4 as initiators).

In test1,

[root@test1 ~]# tgt-admin --show
Target 1: iqn.2008-09.com.example:first
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 4295 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vgsrv/storage
            Backing store flags:
    Account information:
    ACL information:
        31.x.x.x
        10.50.68.16
        31.x.x.x
        10.50.68.17
        31.x.x.x
        10.50.68.18


Step 2:

Configure multipath on all three nodes with that 4Gb share.

In test2,

yum install iscsi-initiator-utils
iscsiadm -m discovery -t sendtargets -p 213.180.70.83  [public Nic]
iscsiadm -m node -T iqn.2008-09.com.example:first -p 213.180.70.83 -l
iscsiadm -m discovery -t sendtargets -p 10.50.68.15   [Private Nic]
iscsiadm -m node -T iqn.2008-09.com.example:first -p 10.50.68.15 -l
service iscsi restart
[root@test22 ~]# yum install device-mapper-multipath
 mpathconf --user_friendly_names n
 getuid_callout "/lib/udev/scsi_id --replace-whitespace --whitelisted --device=/dev/%n" - add this line to the defaults section of /etc/multipath.conf

These two settings are used to identify the multipath device-mapper ID.

 service multipathd start
[root@test22 ~]# ls -la /dev/mapper/1IET_00010001
lrwxrwxrwx 1 root root 7 Apr  2 22:10 /dev/mapper/1IET_00010001 -> ../dm-0

Repeat the steps on test3 and test4.

Step 3:

- Create a partition on /dev/mapper/1IET_00010001 (here done from test2).
- Make sure test3 and test4 see that new partition under /dev/mapper/1IET_00010001 by running,
[root@test22 ~]# partprobe ; multipath -r
[root@test22 ~]# mkfs -t ext4 /dev/mapper/1IET_00010001p1
- Install httpd on the test2, test3 and test4 servers.
- On test2, mount /dev/mapper/1IET_00010001p1 on /var/www/html,
[root@test22 ~]# mount /dev/mapper/1IET_00010001p1 /var/www/html
- Create an index.html file and unmount.

Step 4:

Login to Luci and add this filesystem resource to Resources.

Wednesday, 1 April 2015

Install High availability cluster on centos

Server 1: test1
Type : luci - Cluster Management server

Server 2: test2
Type : ricci - Cluster Node

Server 3 : test3
Type : ricci - Cluster Node

Step 1:

Install luci in test1,

 yum -y install luci
chkconfig luci on
 service luci start

Now luci is available on port 8084 in a web browser.

Step 2:

Install ricci on test2 and test3,

yum -y install ricci
passwd ricci
chkconfig ricci on
service ricci start

Step 3:

Create a cluster with Conga through the luci web interface.
1. Add the nodes to the cluster.
2. Luci will install all the cluster add-on packages on the node servers automatically.


Now test2 & test3 are added on Cluster.

Install and Configure Multipath on Centos

Server 1 : test1


213.x.x.x - eth0
10.50.68.15 - eth1

Server 2 : test2
31.x.x.x - eth0
10.50.68.16 - eth1

Step 1:

Create an LV on test1 and export this LV as the iSCSI target.

Step 2:

On test2, log in to the target you created on test1, using both IPs.

iscsiadm -m discovery -t sendtargets -p 213.x.x.x
iscsiadm -m node -T iqn.2008-09.com.example:first -p 213.x.x.x -l
 iscsiadm -m discovery -t sendtargets -p 10.50.68.15
 iscsiadm -m node -T iqn.2008-09.com.example:first -p 10.50.68.15 -l

[root@test ~]# lsblk
NAME  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda1 202:1    0  15G  0 disk /
xvda2 202:2    0   2G  0 disk [SWAP]
sda     8:0    0   4G  0 disk
sdb     8:16   0   4G  0 disk

The same target will show up as two disks (sda and sdb) on test2.

On test1:

[root@test ~]# tgt-admin --show
Target 1: iqn.2008-09.com.example:first
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 5
            Initiator: iqn.1994-05.com.redhat:test2
            Connection: 0
                IP Address: 31.x.x.x
        I_T nexus: 6
            Initiator: iqn.1994-05.com.redhat:test2
            Connection: 0
                IP Address: 10.50.68.16
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 4295 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vgsrv/storage
            Backing store flags:
    Account information:
    ACL information:
        31.x.x.x
        10.50.68.16

On test2:
[root@test ~]# iscsiadm -m session -P3
iSCSI Transport Class version 2.0-870
version 6.2.0-873.13.el6
Target: iqn.2008-09.com.example:first (non-flash)
    Current Portal: 213.x.x.x:3260,1
    Persistent Portal: 213.x.x.x:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1994-05.com.redhat:test2
        Iface IPaddress: 31.x.x.x
        Iface HWaddress: <empty>
        Iface Netdev: <empty>
        SID: 6
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 2
        Target Reset Timeout: 2
        LUN Reset Timeout: 2
        Abort Timeout: 2
        *****
        CHAP:
        *****
        username: <empty>
        password: ********
        username_in: <empty>
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 8192
        FirstBurstLength: 65536
        MaxBurstLength: 262144
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 5    State: running
        scsi5 Channel 00 Id 0 Lun: 0
        scsi5 Channel 00 Id 0 Lun: 1
            Attached scsi disk sdb        State: running
    Current Portal: 10.50.68.15:3260,1
    Persistent Portal: 10.50.68.15:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1994-05.com.redhat:test2
        Iface IPaddress: 10.50.68.16
        Iface HWaddress: <empty>
        Iface Netdev: <empty>
        SID: 7
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 2
        Target Reset Timeout: 2
        LUN Reset Timeout: 2
        Abort Timeout: 2
        *****
        CHAP:
        *****
        username: <empty>
        password: ********
        username_in: <empty>
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 8192
        FirstBurstLength: 65536
        MaxBurstLength: 262144
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 6    State: running
        scsi6 Channel 00 Id 0 Lun: 0
        scsi6 Channel 00 Id 0 Lun: 1
            Attached scsi disk sda        State: running



Step 4:

Install & configure multipath on test2,

yum  install  device-mapper-multipath
mpathconf --user_friendly_names n
service multipathd start

Then,
[root@test ~]# ls -la /dev/mapper/1IET_00010001
lrwxrwxrwx 1 root root 7 Apr  2 00:01 /dev/mapper/1IET_00010001 -> ../dm-0
[root@test ~]# lsblk
NAME                   MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
xvda1                  202:1    0  15G  0 disk  /
xvda2                  202:2    0   2G  0 disk  [SWAP]
sda                      8:0    0   4G  0 disk 
└─1IET_00010001 (dm-0) 253:0    0   4G  0 mpath
sdb                      8:16   0   4G  0 disk 
└─1IET_00010001 (dm-0) 253:0    0   4G  0 mpath

Testing Multipath,

[root@test ~]# multipath -ll
1IET_00010001 dm-0 IET,VIRTUAL-DISK
size=4.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 6:0:0:1 sda   8:0   active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:1 sdb   8:16  active ready running

[root@test ~]# ifdown eth1

[root@test ~]# multipath -ll
1IET_00010001 dm-0 IET,VIRTUAL-DISK
size=4.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| `- 6:0:0:1 sda   8:0   active faulty running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:1 sdb   8:16  active ready  running

[root@test ~]# ifup eth1

[root@test ~]# multipath -ll
1IET_00010001 dm-0 IET,VIRTUAL-DISK
size=4.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 6:0:0:1 sda   8:0   active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:1 sdb   8:16  active ready running

Now if one path fails (e.g. sda), the storage is still accessible through the other path (e.g. sdb).
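
If you want to watch the path states while taking an interface down and up, one quick way (assuming the multipathd daemon is running) is:

watch -n1 multipath -ll       --> refresh the path status every second
multipathd -k"show paths"     --> or query the running daemon directly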