Sunday, 21 December 2014

OpenStack installation

Single Node[tips]:-

http://www.doublecloud.org/2013/05/installing-openstack-on-centos-in-private-network/

 http://www.doublecloud.org/2013/06/installing-openstack-with-multiple-nodes-tips-and-tricks/

Instances ip ranges:-

https://openstack.redhat.com/Floating_IP_range

https://www.youtube.com/watch?v=DGf-ny25OAw

http://serverfault.com/questions/579789/openstack-packstack-basic-multi-node-network-setup

http://docs.openstack.org/juno/install-guide/install/yum/content/

Monday, 24 November 2014

Non-existent pages requested with "?p=" redirect to a 404 page in WordPress

# Standard WordPress front-controller rules
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

# Return a 404 for requests whose query string contains "p="
# (R with a 4xx status code stops rewriting, so no redirect target is needed)
RewriteCond %{QUERY_STRING} p=.*
RewriteRule .* - [R=404]

Monday, 17 November 2014

Next Documents

For more information about the High Availability Add-On and the Resilient Storage Add-On for Red Hat Enterprise Linux 6, refer to the following resources:
  • High Availability Add-On Overview — Provides a high-level overview of the Red Hat High Availability Add-On.
  • Cluster Administration — Provides information about installing, configuring and managing the High Availability Add-On.
  • DM Multipath — Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux.
  • Load Balancer Administration — Provides information on configuring high-performance systems and services with the Load Balancer Add-On, a set of integrated software components that provide Linux Virtual Servers (LVS) for balancing IP load across a set of real servers.

Sunday, 2 November 2014

Configure postfix as a backup[secondary] mail server

I tried various approaches to configure the secondary MX server to
queue mail properly when the primary MX fails. Today I tried the secondary MX
setup with the configuration below and it worked.
 
In main.cf file,
 
relay_domains = $mydestination example.com
smtpd_recipient_restrictions = permit_mynetworks check_relay_domains
 
Also included transport_maps in the main.cf file, in which the domain's SMTP transport address is configured:

transport_maps = hash:/etc/postfix/transport
 
In /etc/postfix/transport file,
 
example.com smtp:mail.example.com
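
After adding the transport entry, the hash map has to be rebuilt and Postfix reloaded (standard steps, not shown in my original notes):

postmap /etc/postfix/transport
postfix reload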
 
 
Concept:-
 
If the primary mail server for the domain example.com goes down, all mail is queued
on the secondary/backup mail server [Postfix]. Once the primary server is back up, the queued mail is delivered
from the backup to the primary server.
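
For the backup MX to receive mail at all, DNS must list it as a lower-priority MX record for the domain. A minimal zone-file sketch (hostnames are examples; backup.example.com would be this Postfix box):

example.com.    IN  MX  10  mail.example.com.
example.com.    IN  MX  20  backup.example.com.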

Thursday, 16 October 2014

Cloudstack Overview

Refer:-
http://docs.cloudstack.apache.org/en/master/concepts.html#what-is-apache-cloudstack

What is Apache CloudStack?

Apache CloudStack is an open source Infrastructure-as-a-Service platform that manages and orchestrates pools of storage, network, and compute resources to build a public or private IaaS compute cloud.
With CloudStack you can:
  • Set up an on-demand elastic cloud computing service.
  • Allow end-users to provision resources

Cloud Infrastructure Overview

Resources within the cloud are managed as follows:
  • Regions: A collection of one or more geographically proximate zones managed by one or more management servers.
  • Zones: Typically, a zone is equivalent to a single datacenter. A zone consists of one or more pods and secondary storage.
  • Pods: A pod is usually a rack, or row of racks that includes a layer-2 switch and one or more clusters.
  • Clusters: A cluster consists of one or more homogenous hosts and primary storage.
  • Host: A single compute node within a cluster; often a hypervisor.
  • Primary Storage: A storage resource typically provided to a single cluster for the actual running of instance disk images. (Zone-wide primary storage is an option, though not typically used.)
  • Secondary Storage: A zone-wide resource which stores disk templates, ISO images, and snapshots.

About Primary Storage

Primary storage is associated with a cluster or (in KVM and VMware) a zone, and it stores the disk volumes for all the VMs running on hosts.
You can add multiple primary storage servers to a cluster or zone.

About Secondary Storage

Secondary storage stores the following:
  • Templates — OS images that can be used to boot VMs and can include additional configuration information, such as installed applications
  • ISO images — disc images containing data or bootable media for operating systems
  • Disk volume snapshots — saved copies of VM data which can be used for data recovery or to create new templates
The items in secondary storage are available to all hosts in the scope of the secondary storage, which may be defined as per zone or per region.


Networking:-[Ip address allocation]

When basic networking is used, CloudStack will assign IP addresses based on the CIDR of the pod to the guests in that pod. The administrator must add a Direct IP range on the pod for this purpose. These IPs are in the same VLAN as the hosts.


 



Thursday, 2 October 2014

Virtualization Hardware drivers and devices

Emulated devices
Emulated devices, sometimes referred to as virtual devices, exist entirely in software. Emulated device drivers are a translation layer between the operating system running on the host (which manages the source device) and the operating systems running on the guests. The device level instructions directed to and from the emulated device are intercepted and translated by the hypervisor. Any device of the same type as that being emulated and recognized by the Linux kernel is able to be used as the backing source device for the emulated drivers.

Para-virtualized Devices
Para-virtualized devices require the installation of device drivers on the guest operating system providing it with an interface to communicate with the hypervisor on the host machine. This interface is used to allow traditionally intensive tasks such as disk I/O to be performed outside of the virtualized environment. Lowering the overhead inherent in virtualization in this manner is
intended to allow guest operating system performance closer to that expected when running directly on physical hardware.

Physically shared devices
Certain hardware platforms allow virtualized guests to directly access various hardware devices and components. This process in virtualization is known as passthrough or device assignment. Passthrough allows devices to appear and behave as if they were physically attached to the guest operating system.

Wednesday, 1 October 2014

Vlan Concepts

A VLAN (Virtual LAN) is an attribute that can be applied to network packets. Network packets can be "tagged" into a numbered VLAN. A VLAN is a security feature used to completely isolate network traffic at the switch level. VLANs are completely separate and mutually exclusive. The Red Hat Enterprise Virtualization Manager is VLAN aware and able to tag and redirect VLAN traffic; however, VLAN implementation requires a switch that supports VLANs.
At the switch level, ports are assigned a VLAN designation.

A switch applies a VLAN tag to traffic originating from a particular port, marking the traffic as part of a VLAN, and ensures that responses carry
the same VLAN tag. A VLAN can extend across multiple switches. VLAN tagged network traffic on a switch is completely undetectable except by machines connected to a port designated with the correct VLAN. A given port can be tagged into multiple VLANs, which allows traffic from multiple VLANs to be sent
to a single port, to be deciphered using software on the machine that receives the traffic.

Monday, 29 September 2014

Concepts of QCOW2 & RAW format

QCOW2 Formatted Virtual Machine Storage

QCOW2 is a storage format for virtual machine disk images. QCOW stands for QEMU copy on write. The QCOW2 format decouples the physical storage layer from the virtual layer by adding a mapping between logical and physical blocks. Each logical block is mapped to its physical offset, which enables storage overcommitment and virtual machine snapshots, where each QCOW volume only represents changes made to an underlying disk image.
The initial mapping points all logical blocks to the offsets in the backing file or volume. When a virtual machine writes data to a QCOW2 volume after a snapshot, the relevant block is read from the backing volume, modified with the new information and written into a new snapshot QCOW2 volume. Then the map is updated to point to the new place.
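
A minimal sketch of creating an overlay that records only the changes relative to a backing file (paths are examples; newer qemu-img versions also require -F to state the backing format):

qemu-img create -f qcow2 -b /var/lib/libvirt/images/base.img /var/lib/libvirt/images/overlay.qcow2
qemu-img info /var/lib/libvirt/images/overlay.qcow2   # shows the backing file relationship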

RAW
The RAW storage format has a performance advantage over QCOW2 in that no formatting is applied to virtual machine disk images stored in the RAW format. Virtual machine data operations on disk images stored in RAW format require no additional work from hosts. When a virtual machine writes data to a given offset in its virtual disk, the I/O is written to the same offset on the backing file or logical volume.
Raw format requires that the entire space of the defined image be preallocated unless using externally managed thin provisioned LUNs from a storage array.
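
A sketch of creating a preallocated raw image (path and size are examples):

qemu-img create -f raw /var/lib/libvirt/images/vm1.raw 10G    # sparse by default
fallocate -l 10G /var/lib/libvirt/images/vm1.raw              # preallocate on filesystems that support it
# or, slower but portable: dd if=/dev/zero of=/var/lib/libvirt/images/vm1.raw bs=1M count=10240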

Friday, 26 September 2014

PHP Fatal error: Class 'JFactory' not found

I faced this issue on a Joomla site and searched the forums, but none of them, including the Joomla forums, gave me a solution.
They simply suggest checking the Joomla version, PHP version compatibility and PHP extensions.
Finally I fixed the issue.


Issue:-
---------
 [26-Sep-2014 18:11:29 Europe/Berlin] PHP Fatal error: Class 'JFactory' not found in /home/test/public_html/index.php on line 31

Cause & Solution:-
-----------------------
In my case, it seems the file that provides the JFactory class was missing.

/home/test/public_html/libraries/joomla/factory.php --->Core file for Joomla.

Simply restore that file to the proper path to fix the issue.

Thursday, 11 September 2014

Synchronization tools

1. Lsyncd - Live Syncing (Mirror) Daemon [directory level]
2. DRBD [block device level]
3. GlusterFS and BindFS use a FUSE filesystem to interject kernel/userspace filesystem events.

Reference:
-----------------
https://code.google.com/p/lsyncd/
http://configure.systems/glusterfs-and-why-you-should-consider-it/

GlusterFS would actually mitigate and simplify so much more of that. There would be no need for a load balancer, no need for a special script to promote or demote the content servers, nothing, not even to replicate the data between the servers!
Basically, you can create two or more servers, install GlusterFS on each of them, have all of the nodes probe the master node, and then create the volume. Easy.
Once that’s done, on your actual web nodes, where you have Apache, PHP, and again Varnish installed, you would install GlusterFS, add the correct line to /etc/fstab, and you’re set. Within that line, you can even add a failover server in case the primary goes down! Say what? Not only that, when it comes back up, it can self-heal to ensure consistency across all servers again.
Adding more servers to the GlusterFS environment is pretty simple too: a couple of commands and you’re good to go. All of this could even be automated.
There are some other comparable options, but GlusterFS seems to be a very viable option, one that I use in this server's configuration. I’m not a big site, nor do I serve tens of thousands of users. However, I’m completely ready to scale at the drop of a hat if need be, both from a saved image and in a completely orchestrated manner with Ansible. The fewer moving parts, the better. Keep everything dedicated to one set of resources and you’ll be building for success.
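
A rough sketch of the setup described above, on two nodes web1 and web2 (hostnames, volume name and brick paths are examples):

gluster peer probe web2                                              # run on web1
gluster volume create wwwvol replica 2 web1:/data/brick1 web2:/data/brick1
gluster volume start wwwvol

# /etc/fstab entry on a web node, with a failover volfile server:
web1:/wwwvol  /var/www  glusterfs  defaults,_netdev,backupvolfile-server=web2  0 0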

Tuesday, 19 August 2014

Storage pools

Storage pools and volumes are not required for the proper operation of guest virtual machines.
Pools and volumes provide a way for libvirt to ensure that a particular piece of storage will be available for a guest virtual machine, but some administrators will prefer to manage their own storage and guest virtual machines will operate properly without any pools or volumes defined.

NFS storage pool
Suppose a storage administrator responsible for an NFS server creates a share to store guest virtual machines' data. The system administrator defines a pool on the host physical machine with the details of the share (nfs.example.com:/path/to/share should be mounted on /vm_data). When the pool is started, libvirt mounts the share on the specified directory, just as if the system administrator logged in and executed mount nfs.example.com:/path/to/share /vm_data. If the pool is configured to autostart, libvirt ensures that the NFS share is mounted on the directory specified when libvirt is started.
Once the pool starts, the files in the NFS share are reported as volumes, and the storage volumes' paths are then queried using the libvirt APIs. The volumes' paths can then be copied into the section of a guest virtual machine's XML definition file describing the source storage for the guest virtual machine's block devices. With NFS, applications using the libvirt APIs can create and delete volumes in the pool (files within the NFS share) up to the limit of the size of the pool (the maximum storage capacity of the share).
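
A sketch of defining the pool described above with virsh (names follow the example in the text):

virsh pool-define-as vm_data netfs --source-host nfs.example.com --source-path /path/to/share --target /vm_data
virsh pool-build vm_data
virsh pool-start vm_data
virsh pool-autostart vm_data
virsh vol-list vm_data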

Storage Pools

A storage pool is a file, directory, or storage device managed by libvirt for the purpose of providing storage to guest virtual machines. The storage pool can be local or it can be shared over a network. A storage pool is a quantity of storage set aside by an administrator, often a dedicated storage administrator, for use by guest virtual machines. Storage pools are divided into storage volumes either
by the storage administrator or the system administrator, and the volumes are assigned to guest virtual machines as block devices. In short storage volumes are to partitions what storage pools are to disks.
Although the storage pool is a virtual container, it is limited by two factors: the maximum size allowed to it by qemu-kvm and the size of the disk on the host physical machine. Storage pools may not exceed the size of the disk on the host physical machine. The maximum sizes are as follows:
virtio-blk = 2^63 bytes or 8 Exabytes (using raw files or disk)
Ext4 = ~16 TB (using 4 KB block size)
XFS = ~8 Exabytes
qcow2 and host file systems keep their own metadata and scalability should be evaluated/tuned when trying very large image sizes. Using raw disks means fewer layers that could affect scalability or max size.
libvirt uses a directory-based storage pool, the /var/lib/libvirt/images/ directory, as the default storage pool. The default storage pool can be changed to another storage pool.
Local storage pools - Local storage pools are directly attached to the host physical machine server. Local storage pools include: local directories, directly attached disks, physical partitions, and LVM volume groups. These storage volumes store guest virtual machine images or are attached to guest virtual machines as additional storage. As local storage pools are directly attached to the host physical machine server, they are useful for development, testing and small deployments that do not require migration or large numbers of guest virtual machines. Local storage pools are not suitable for many production environments as local storage pools do not support live migration.
Networked (shared) storage pools - Networked storage pools include storage devices shared over a network using standard protocols. Networked storage is required when migrating virtual machines between host physical machines with virt-manager, but is optional when migrating with virsh. Networked storage pools are managed by libvirt. Supported protocols for networked storage
pools include:
Fibre Channel-based LUNs
iSCSI
NFS
GFS2
SCSI RDMA protocols (SCSI RCP), the block export protocol used in InfiniBand and 10GbE iWARP adapters.

Memory overcommiting process

Guest virtual machines running on a KVM hypervisor do not have dedicated blocks of physical RAM assigned to them. Instead, each guest virtual machine functions as a Linux process where the host physical machine's Linux kernel allocates memory only when requested. In addition, the host physical machine's memory manager can move the guest virtual machine's memory between its own physical memory and swap space. This is why overcommitting requires allotting sufficient swap space on the host physical machine to accommodate all guest virtual machines as well as enough memory for the host physical machine's processes. As a basic rule, the host physical machine's operating system requires a maximum of 4GB of memory along with a minimum of 4GB of swap space.




This example demonstrates how to calculate swap space for overcommitting. Although it may appear to be simple in nature, the ramifications of overcommitting should not be ignored. Refer to the Important note before proceeding.

Example: Server1 has 32GB of physical RAM. The system is being configured to run 50 guest virtual machines, each requiring 1GB of virtualized memory. As mentioned above, the host physical machine's system itself needs a maximum of 4GB (apart from the guest virtual machines) as well as an additional 4GB as a swap space minimum.
The swap space is calculated as follows:
Calculate the amount of memory needed for the sum of all the guest virtual machines - In this example: (50 guest virtual machines * 1GB of memory per guest virtual machine) = 50GB
Add the guest virtual machine's memory amount to the amount needed for the host physical machine's OS and for the host physical machine's minimum swap space - In this example: 50GB
guest virtual machine memory + 4GB host physical machine's OS + 4GB minimal swap = 58GB
Subtract this amount from the amount of physical RAM there is on the system - In this example:
58GB - 32GB = 26GB
The answer is the amount of swap space that needs to be allocated on the host.
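
The same arithmetic as a quick shell check (numbers follow the example above):

guests=50; per_guest_gb=1; host_os_gb=4; min_swap_gb=4; ram_gb=32
echo $(( guests * per_guest_gb + host_os_gb + min_swap_gb - ram_gb ))   # prints 26 (GB of swap to allocate)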

Live migration Backend process

In a live migration, the guest virtual machine continues to run on the source host physical machine while its memory pages are transferred, in order, to the destination host physical machine. During migration, KVM monitors the source for any changes in pages it has already transferred, and begins to transfer
these changes when all of the initial pages have been transferred. KVM also estimates transfer speed during migration, so when the remaining amount of data to transfer will take a certain configurable period of time (10ms by default), KVM suspends the original guest virtual machine, transfers the remaining
data, and resumes the same guest virtual machine on the destination host physical machine.
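
For reference, a live migration is typically started with something like the following (domain name and destination URI are examples):

virsh migrate --live guest1 qemu+ssh://dest.example.com/system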

Wednesday, 13 August 2014

Metadata

This is what the metadata is for. Your single file is split up into a bunch of small pieces and spread out across geographic locations, servers, and hard drives. These small pieces also contain more data: parity information for the other pieces of data, or maybe even outright duplication.
The metadata is used to locate every piece of data for that file over different geographic locations, data centres, servers and hard drives, as well as to restore any pieces destroyed by hardware failure. It does this automatically. It will even fluidly move these pieces around to get a better spread, and it will even recreate a piece that is gone and store it on a new, good hard drive.

Friday, 8 August 2014

filesystem making process
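
The output below is what mke2fs prints while building an ext filesystem; it would typically come from a command such as the following (device name is an example):

mkfs.ext4 /dev/sdb1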

mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2) -->Default block size 4KB.
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107200 blocks --> no of inodes & blocks created under that partition.
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Wednesday, 6 August 2014

Cloud Computing

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort.

Today, it is more or less accepted that there are three Cloud Computing models, depending on the type of service provided: IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service).


IaaS – Infrastructure as a Service

 Infrastructure as a Service provides infrastructure capabilities like processing, storage, networking, security, and other resources which allow consumers to deploy their applications and data. This is the lowest level provided by the Cloud Computing paradigm. Some examples of IaaS are: Amazon S3/EC2, Microsoft Windows Azure, and VMware vCloud.

  PaaS – Platform as a Service

 Platform as a Service provides application infrastructure such as programming languages, database management systems, web servers, application servers, etc. that allow applications to run. The consumer does not manage the underlying platform, including networking, operating system, storage, etc. Some examples of PaaS are: Google App Engine, Microsoft Azure Services Platform, and ORACLE/AWS.

  SaaS – Software as a Service

 Software as a Service is the most sophisticated model, hiding all the underlying details of networking, storage, operating system, database management systems, application servers, etc. from the consumer. It provides consumers with end-user software applications, most commonly through a web browser (but could also be through a rich client). Some examples of SaaS are: Salesforce CRM, Oracle CRM On Demand, Microsoft Online Services, and Google Apps.


  Under PaaS – Platform as a Service we have:


AIaaS – Application Infrastructure as a Service
Some analysts consider this model to provide application middleware, including application servers, ESB, and BPM (Business Process Management).
 APaaS – Application Platform as a Service
Provides application servers with added multitenant elasticity as a service. The PaaS (Platform as a Service) model mentioned before includes AIaaS and APaaS.

 Under SaaS – Software as a Service we have:

BPaaS – Business Process as a Service
Provides business processes such as billing, contract management, payroll, HR, advertising, etc. as a service.

 Under IaaS – Infrastructure as a Service we have:

DaaS – Desktop as a Service
Based on application streaming and virtualization technology, provides desktop standardization, pay-per-use, management, and security.
 CaaS – Communications as a Service
Management of hardware and software required for delivering voice over IP, instant messaging, video conferencing, for both fixed and mobile devices.
 NaaS – Network as a Service
It allows telecommunication operators to provide network communications, billing, and intelligent features as services to consumers.

Reference:
------------------
http://itechthoughts.wordpress.com/category/cloud-computing/
http://itechthoughts.wordpress.com/category/virtualization/


Tuesday, 5 August 2014

Linux-Filesystem

A filesystem is the methods and data structures that an operating system uses to keep track of files on a disk or partition; that is, the way the files are organized on the disk. The word is also used to refer to a partition or disk that is used to store the files or the type of the filesystem. Thus, one might say I have two filesystems meaning one has two partitions on which one stores files, or that one is using the extended filesystem, meaning the type of the filesystem.

Before a partition or disk can be used as a filesystem, it needs to be initialized, and the bookkeeping data structures need to be written to the disk. This process is called making a filesystem.

Most UNIX filesystem types have a similar general structure, although the exact details vary quite a bit. The central concepts are superblock, inode, data block, directory block, and indirection block. The superblock contains information about the filesystem as a whole, such as its size (the exact information here depends on the filesystem). An inode contains all information about a file, except its name. The name is stored in the directory, together with the number of the inode. A directory entry consists of a filename and the number of the inode which represents the file. The inode contains the numbers of several data blocks, which are used to store the data in the file. There is space only for a few data block numbers in the inode, however, and if more are needed, more space for pointers to the data blocks is allocated dynamically. These dynamically allocated blocks are indirect blocks; the name indicates that in order to find the data block, one has to find its number in the indirect block first.
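
These structures can be inspected directly on a running system (file and device names are examples):

ls -i /etc/hosts          # inode number stored in the directory entry
stat /etc/hosts           # metadata kept in the inode itself
dumpe2fs -h /dev/sda1     # superblock of an ext2/3/4 filesystem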

 Like UNIX, Linux chooses to have a single hierarchical directory structure. Everything starts from the root directory, represented by /, and then expands into sub-directories instead of having so-called 'drives'.

 In general, 'block devices' are devices that store or hold data, 'character devices' can be thought of as devices that transmit or transfer data. For example, diskette drives, hard drives and CD-ROM drives are all block devices while serial ports, mice and parallel printer ports are all character devices.
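
The device type is visible in the first character of the mode field ('b' for block, 'c' for character):

ls -l /dev/sda /dev/ttyS0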

A few terms to understand:
tty: Video console terminal (abbreviation for “Teletype”)
ttyS: Serial console terminal
pts: Virtual console terminal (pseudo-tty or pty but stands for Pseudo-Terminal Slave (PTS))
[root@host ~]# w
16:58:50 up 19:22,  3 users,  load average: 0.81, 0.90, 0.71
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    client.domain. 16:57    0.00s  0.01s  0.00s w
root     ttyS0    –                16:58    3.00s  0.00s  0.00s -bash
root     tty1     –                16:56    1:24   0.02s  0.02s -bash
As you can see from this, we have 1 user logged in via a virtual console (such as SSH), defined as pts/0. Another user logged in via serial console, defined as ttyS0. And another user logged in via the video console port, defined as tty1.

/lost+found directory:
 ---------------------------------

 Linux should always go through a proper shutdown. Sometimes your system might crash or a power failure might take the machine down. Either way, at the next boot, a lengthy filesystem check (the speed of this check is dependent on the type of filesystem that you actually use, i.e. ext3 is faster than ext2 because it is a journalled filesystem) using fsck will be done. Fsck will go through the system and try to recover any corrupt files that it finds. The result of this recovery operation will be placed in this directory. The files recovered are not likely to be complete or make much sense but there always is a chance that something worthwhile is recovered. Each partition has its own lost+found directory. If you find files in there, try to move them back to their original location.

Mounting:
----------------

 Before one can use a filesystem, it has to be mounted. The operating system then does various bookkeeping things to make sure that everything works. Since all files in UNIX are in a single directory tree, the mount operation will make it look like the contents of the new filesystem are the contents of an existing subdirectory in some already mounted filesystem.
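
A minimal example (device and mount point are assumptions):

mkdir -p /mnt/data
mount -t ext4 /dev/sdb1 /mnt/data
df -h /mnt/data
umount /mnt/data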

/Proc:
----------

/proc is very special in that it is also a virtual filesystem. It's sometimes referred to as a process information pseudo-file system. It doesn't contain 'real' files but runtime system information (e.g. system memory, devices mounted, hardware configuration, etc).
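
A few quick examples:

cat /proc/meminfo     # current memory statistics
cat /proc/cpuinfo     # processor details
cat /proc/mounts      # filesystems currently mounted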


/sbin should contain only binaries essential for booting, restoring,
  recovering, and/or repairing the system in addition to the binaries
  in /bin. 
 
 

Reference:

 http://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/glossary.html

 


Thursday, 24 July 2014

How to Mount the VM Disk

Mount a VM disk that is built on an image file:
-------------------------------------------------------------------------

losetup /dev/loop0 /root/mailserver.img

- Attach the image file to a loop device.

kpartx -a -v /dev/loop0

- Map the VM partitions:

Output:
------------
add map loop0p1 (253:4): 0 1024000 linear /dev/loop0 2048
add map loop0p2 (253:5): 0 7362560 linear /dev/loop0 1026048

Then mount the VM partitions,

mount /dev/mapper/loop0p1 /mnt/boot
mount /dev/mapper/loop0p2 /mnt/root -->if it is not LVM.

If the VM root partition is an LVM volume,
you will receive the error below:
mount: unknown filesystem type 'LVM2_member'

which means the VM's LVM partition has not been recognized/activated yet.

  pvscan
vgscan && lvscan

Output:
-------------
inactive          '/dev/VolGroup/lv_root' [17.54 GiB] inherit
  inactive          '/dev/VolGroup/lv_swap' [1.97 GiB] inherit
  ACTIVE            '/dev/vg0/ubuntu' [10.00 GiB] inherit
  ACTIVE            '/dev/vg0/centos' [10.00 GiB] inherit
  ACTIVE            '/dev/vg0/ubuntuhvm' [10.00 GiB] inherit
  ACTIVE            '/dev/vg0/images' [20.00 GiB] inherit

- Need to activate that new volume group:

 vgchange -ay VolGroup

 mount /dev/VolGroup/lv_root /mnt/centos

That's All.


Finally, the reverse process is:
# umount /mnt
# vgchange -an VolGroup
# kpartx -dv /dev/loop0







Mount virtual machine’s LVM partition[Inside VM] on KVM host


1. Scan volume Group
  #vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg     1  10   0 wz--n- 500G  300G
2. Scan the LVM logical volumes.
Notice all LVs are active and belong to the vg volume group.
Our VM is installed in the server LV.
  #lvscan
  ACTIVE            '/dev/vg/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/vg/home' [2.00 GiB] inherit
  ACTIVE            '/dev/vg/swap' [4.00 GiB] inherit
  ACTIVE            '/dev/vg/opt' [10.00 GiB] inherit
  ACTIVE            '/dev/vg/server' [20.00 GiB] inherit
3. Map the partitions inside the VM's logical volume.
 #kpartx -av /dev/vg/yumserver
4. Scan the LVM volumes again; notice there are now two inactive LVs. These belong to the server LV where our VM is installed.
  #lvscan
  inactive          '/dev/VolGroup/lv_root' [17.54 GiB] inherit
  inactive          '/dev/VolGroup/lv_swap' [1.97 GiB] inherit
  ACTIVE            '/dev/vg/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/vg/home' [2.00 GiB] inherit
  ACTIVE            '/dev/vg/swap' [4.00 GiB] inherit
  ACTIVE            '/dev/vg/opt' [10.00 GiB] inherit
  ACTIVE            '/dev/vg/server' [20.00 GiB] inherit
5. Scan the volume groups again.
Now notice that there are two volume groups: one belongs to the KVM host and the other to the guest VM.
  #vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup   1   2   0 wz--n- 19.51g    0
  vg         1  10   0 wz--n- 500G  300G
6. Perform this step only if the guest VM's volume group name is the same as the KVM host's.
  #vgrename GuestVMvolumegroup  newvolgroup
  In our case both are different so we skip this step
7. Activate the VolGroup LVM (guest VM).
  #vgchange -ay VolGroup   
  2 logical volume(s) in volume group "VolGroup" now active
8. Scan LVM again; now all LVs are active.
  #lvscan
  ACTIVE            '/dev/VolGroup/lv_root' [17.54 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_swap' [1.97 GiB] inherit
  ACTIVE            '/dev/vg/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/vg/home' [2.00 GiB] inherit
  ACTIVE            '/dev/vg/swap' [4.00 GiB] inherit
  ACTIVE            '/dev/vg/opt' [10.00 GiB] inherit
  ACTIVE            '/dev/vg/server' [20.00 GiB] inherit
9. Mount the VM lvm volume.
  #mount /dev/VolGroup/lv_root /mnt/
10. Do whatever you need with the mounted filesystem.

11. Unmount the lvm.
   #umount /mnt/
12. Deactivate the VolGroup LVM (guest VM).
  #vgchange -an VolGroup   
13. List LVM again
  #lvscan
  inactive          '/dev/VolGroup/lv_root' [17.54 GiB] inherit
  inactive          '/dev/VolGroup/lv_swap' [1.97 GiB] inherit
  ACTIVE            '/dev/vg/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/vg/home' [2.00 GiB] inherit
  ACTIVE            '/dev/vg/swap' [4.00 GiB] inherit
  ACTIVE            '/dev/vg/opt' [10.00 GiB] inherit
  ACTIVE            '/dev/vg/server' [20.00 GiB] inherit
14. Perform this step only if you performed Step 6
  #vgrename  newvolgroup GuestVMvolumegroup 
In our case both are different so we skip this step

15.
#kpartx  -dv /dev/vg/yumserver
16. List the volume groups.
  #vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg     1  10   0 wz--n- 500G  300G



Mount a VM disk that is built on an LVM volume:
-----------------------------------------------------------------------

[root@xen3 xen]# kpartx -av /dev/mapper/vg0-centos
add map vg0-centos1 (253:4): 0 409600 linear /dev/mapper/vg0-centos 2048
add map vg0-centos2 (253:5): 0 16384000 linear /dev/mapper/vg0-centos 411648
add map vg0-centos3 (253:6): 0 4096000 linear /dev/mapper/vg0-centos 16795648

Then it is very simple,

vg0-centos1 - boot partition

mount /dev/mapper/vg0-centos1 /mnt/boot

vg0-centos2 - root partition

mount /dev/mapper/vg0-centos2 /mnt/root

The other one is the swap partition.

Reverse:

umount /dev/mapper/vg0-centos1

umount /dev/mapper/vg0-centos2

 kpartx -dv /dev/mapper/vg0-centos

Crosscheck with,

ls -all /dev/mapper/


Wednesday, 23 July 2014

Tuesday, 22 July 2014

Mapping physical storage to domU disk

Protocol    Description                                                          Example
phy:        Block devices, such as a physical disk, in domain 0                  phy:/dev/sdc
file:       Raw disk images accessed by using loopback                           file:/path/file
nbd:        Raw disk images accessed by using NBD                                nbd: ip_port
tap:aio:    Raw disk images accessed by using blktap. Similar to loopback        tap:aio:/path/file
            but without using loop devices.
tap:cdrom   CD reader block devices                                              tap:cdrom:/dev/sr0
tap:vmdk:   VMware disk images accessed by using blktap                          tap:vmdk:/path/file
tap:qcow:   QEMU disk images accessed by using blktap                            tap:qcow:/path/file
iscsi:      iSCSI targets using connections initiated from domain 0              iscsi:IQN,LUN
npiv:       Fibre Channel connections initiated from domain 0                    npiv:NPIV,LUN
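
These prefixes are used in the disk = [...] line of a domU configuration file, for example (device names and paths are examples):

disk = [ 'phy:/dev/vg0/domu1,xvda,w', 'file:/data/iso/centos.iso,hdc:cdrom,r' ]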

Open Source Cloud Projects that you could FOCUS on!

*1. Hypervisor and Container*

*Docker.io* - an open-source engine for building, packing and running any
application as a lightweight container, built upon the LXC container
mechanism included in the Linux kernel. It was written by dotCloud and
released in 2013.

*KVM* - a lightweight hypervisor that was accepted into the Linux kernel in
February 2007. It was originally developed by Qumranet, a startup that was
acquired by Red Hat in 2008.

*Xen Project* - a cross-platform software hypervisor that runs on platforms
such as BSD, Linux and Solaris. Xen was originally written at the
University of Cambridge by a team led by Ian Pratt and is now a Linux
Foundation Collaborative Project.

*CoreOS* – a new Linux distribution that uses containers to help manage
massive server deployments. Its beta version was released in May 2014.

 *2. Infrastructure as a Service*

*Apache CloudStack* - an open source IaaS platform with Amazon Web
Services (AWS) compatibility. CloudStack was originally created by
Cloud.com (formerly known as VMOps), a startup that was purchased by Citrix
in 2011. In April of 2012, CloudStack was donated by Citrix to the Apache
Software Foundation.

*Eucalyptus* - an open-source IaaS platform for building AWS-compatible
private and hybrid clouds. It began as a research project at UC Santa
Barbara and was commercialized in January 2009 under the name Eucalyptus
Systems.

*OpenNebula* - an open-source IaaS platform for building and managing
virtualized enterprise data centers and private clouds. It began as a
research project in 2005 authored by Ignacio M. Llorente and Rubén S.
Montero. Publicly released in 2008, development today is via the open
source model.

*OpenStack* - an open source IaaS platform, covering compute, storage and
networking. In July of 2010, NASA and Rackspace joined forces to create the
OpenStack project, with a goal of allowing any organization to build a
public or private cloud using the same technology as top cloud
providers.

 *3. Platform as a Service*

*CloudFoundry* - an open Platform-as-a-Service, providing a choice of
clouds, developer frameworks and application services. VMware announced
Cloud Foundry in April 2011 and built a partner ecosystem.

*OpenShift* - Red Hat’s Platform-as-a-Service offering. OpenShift is a
cloud application platform where application developers and teams can
build, test, deploy, and run their applications in a cloud environment. The
OpenShift technology came from Red Hat’s 2010 acquisition of start-up
Makara (founded in May 2008). OpenShift was announced in May 2011 and
open-sourced in April 2012.

 *4. Provisioning and Management Tool*

*Ansible* – an automation engine for deploying systems and applications.

*Apache Mesos* - a cluster manager that provides efficient resource
isolation and sharing across distributed applications, or frameworks. It
was created at the University of California at Berkeley's AMPLab and became
an Apache Foundation top level project in 2013.

*Chef* - a configuration-management tool, controlled using an extension of
Ruby. Released by Opscode in January 2009.

*Juju* - a service orchestration management tool released by Canonical as
Ensemble in 2011 and then renamed later that year.

*oVirt* - provides a feature-rich management system for virtualized
servers with advanced capabilities for hosts and guests. Red Hat first
announced oVirt as part of its emerging-technology initiative in 2008, then
re-launched the project in late 2011 as part of the Open Virtualization
Alliance.

*Puppet* - IT automation software that helps system administrators manage
infrastructure throughout its lifecycle. Founded by Luke Kanies in 2005.

*Salt* - a configuration management tool focused on speed and incorporating
orchestration features. Salt was written by Thomas S Hatch and first
released in 2011.

*Vagrant* - an open source tool for building and managing development
environments, often within virtual machines. Written in 2010 by Mitchell
Hashimoto and John Bender.

 *5. Storage*

*Camlistore* - a set of open source formats, protocols, and software for
modeling, storing, searching, sharing and synchronizing data. First
released by Google developers in 2013.

*Ceph* - a distributed object store and file system. It was originally
created by Sage Weil for a doctoral dissertation. After Weil’s graduation
in 2007, he continued working on it full-time at DreamHost as the
development team grew. In 2012, Weil and others formed Inktank to deliver
professional services and support. It was acquired by Red Hat in 2014.

*Gluster* - a scale-out, distributed file system. It is developed by the
Gluster community, a global community of users, developers and other
contributors. GlusterFS was originally developed by Gluster Inc., then
acquired by Red Hat in October 2011.

*Riak CS* - an open source storage system built on top of the Riak
key-value store. Riak CS was originally developed by Basho and launched in
2012, with the source subsequently released in 2013.

*Swift* - a highly available, distributed object store system, ideal for
unstructured data. Developed as part of the OpenStack project.

Friday, 18 July 2014

Xen HVM Migration

Migrate VM[vm139]:

Source Node:
-------------

--- Logical volume ---
  LV Path                /dev/vg_grp/vm139_img
  LV Name                vm139_img
  VG Name                vg_grp
    LV Status              available
  # open                 3
  LV Size                10.00 GiB
  Current LE             320
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:24

Source Node:
-------------

Backup:
--------

lvcreate -n vm139_backup --size 15G /dev/vg_grp


mkfs.ext3 /dev/vg_grp/vm139_backup

mkdir -p /home/vm139_backup

mount /dev/vg_grp/vm139_backup /home/vm139_backup

xm shutdown vm139

dd if=/dev/vg_grp/vm139_img of=/home/vm139_backup/vm139_backup.img


Destination Node:
-----------------

lvcreate -n vm139_backup --size 15G /dev/vg_xenhvm

lvcreate -n vm139_img --size 10G /dev/vg_xenhvm

mkfs.ext3 /dev/vg_xenhvm/vm139_backup

mkdir -p /home/vm139_backup

mount /dev/vg_xenhvm/vm139_backup /home/vm139_backup


Transfer:
----------

scp -C /home/vm139_backup/vm139_backup.img root@x.x.x.x:/home/vm139_backup/

Restore:
---------

dd if=/home/vm139_backup/vm139_backup.img of=/dev/vg_xenhvm/vm139_img

Solusvm server:
-----------------

Need to move the VM from slave1 to slave2,

/scripts/vm-migrate 104 4


Then Reboot the VM via solusvm. It will create config file.

POST Migration:
---------------

Source:
--------

umount /home/vm139_backup

lvremove /dev/vg_grp/vm139_backup

Destination:
------------

umount /home/vm139_backup

lvremove /dev/vg_xenhvm/vm139_backup


Note:
------

For IP transfer, there are two IP blocks in SolusVM:

1. ipblock2 for slave2
2. ipblock1 for slave1

- Granted permission for the slave2 server to access & assign IPs from ipblock1, so the VMs will use the same IPs after migration.


Reference:
http://docs.solusvm.com/xen_migrations
http://www.nocser.net/clients/knowledgebase/421/Migrate-Solusvm-VPS-from-1-node-to-another-node-Updated.html

Monday, 14 July 2014

Port Mirroring

Port mirroring is an approach to monitoring network traffic that involves forwarding a copy of each packet from one network switch port to another.
Port mirroring enables the administrator to keep close track of switch performance by placing a protocol analyzer on the port that's receiving the mirrored data.
An administrator configures port mirroring by assigning a port from which to copy all packets and another port to which those packets will be sent. A packet bound for -- or heading away from -- the first port will be forwarded to the second port as well. The administrator must then place a protocol analyzer on the port that's receiving the mirrored data to monitor each segment separately.
Network administrators can use port mirroring as a diagnostic or debugging tool. 
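
On a plain Linux host a rough equivalent can be built with tc's mirred action (a sketch; interface names are examples and both interfaces must already exist):

tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 action mirred egress mirror dev eth1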

Friday, 11 July 2014

Xen DomU booting process on HVM[pure]

For this, the booting process starts with:

        Welcome to CentOS
Starting udev: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
[  OK  ]
Setting hostname localhost.localdomain:  [  OK  ]
Setting up Logical Volume Management:   No volume groups found
[  OK  ]
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/xvda2
/dev/xvda2: clean, 18459/512064 files, 218237/2048000 blocks
[/sbin/fsck.ext4 (1) -- /boot] fsck.ext4 -a /dev/xvda1
/dev/xvda1: clean, 38/51200 files, 34256/204800 blocks
[  OK  ]
Remounting root filesystem in read-write mode:  [  OK  ]
Mounting local filesystems:  [  OK  ]
Enabling /etc/fstab swaps:  [  OK  ]
Entering non-interactive startup
ip6tables: Applying firewall rules: [  OK  ]
iptables: Applying firewall rules: [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0: 
Determining IP information for eth0... done.
[  OK  ]
Starting auditd: [  OK  ]
Starting system logger: [  OK  ]
Mounting filesystems:  [  OK  ]
Retrigger failed udev events[  OK  ]
Starting sshd: [  OK  ]
Starting postfix: [  OK  ]
Starting crond: [  OK  ]

- Because the VM acts like a standalone physical server, the PV driver messages are not shown during boot.

Xen DomU booting process [PV on HVM].


Centos:
-------------

- Kernel loaded with the parameter ide0=noprobe (it prevents disk & NIC emulation so the Xen PV drivers are used).

Linux version 2.6.32-431.el6.x86_64 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) ) #1 SMP Fri Nov 22 03:15:09 UTC 2013
Command line: ro root=UUID=07a30ea1-f06a-44e5-a85a-6e346bb9e3af rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quieti console=ttyS0 ide0=noprobe

E.g.

Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.

Booting paravirtualized kernel on Xen
NR_CPUS:4096 nr_cpumask_bits:15 nr_cpu_ids:15 nr_node_ids:1

 Xen HVM callback vector for event delivery is enabled

Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered (default)

pci_hotplug: PCI Hot Plug PCI Core version: 0.5
pciehp: PCI Express Hot Plug Controller Driver version: 0.4
acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5


input: Macintosh mouse button emulation as /devices/virtual/input/input2
xvda: xvda1 xvda2 xvda3

 xvda: xvda1 xvda2 xvda3

EXT4-fs (xvda2): INFO: recovery required on readonly filesystem
EXT4-fs (xvda2): write access will be enabled during recovery
EXT4-fs (xvda2): recovery complete
EXT4-fs (xvda2): mounted filesystem with ordered data mode. Opts:
dracut: Mounted root filesystem /dev/xvda2


Welcome to CentOS[screen]

Starting udev: udev: starting version 147

Initialising Xen virtual ethernet driver.

All services are started.

That's all.










SSL redirect issue in cpanel


If a domain is using a shared IP and someone accesses the domain over HTTPS, the request is sent to the default document root [htdocs] instead of the actual one, so redirect rules in .htaccess also do not work.


- Need to add the code below to index.html under htdocs.

[root@vm5 htdocs]# cat index.html
<html><head><script> window.location.href = (window.location.protocol != "http:") ? "http:" + window.location.href.substring(window.location.protocol.length) : "/cgi-sys/defaultwebpage.cgi"; </script></head><body></body></html>

Database Backup script

#!/bin/bash
export savepath='/var/mysqlbackups'   # directory where the dumps are written
export usr='mysql user'               # MySQL username
export pwd=''                         # MySQL password (fill in before running)
if [ ! -d $savepath ]; then
    mkdir -p $savepath
fi
chmod 700 $savepath
rm -rf $savepath/*
echo 'mySQL Backup Script'
echo 'Dumping individual tables..'
for a in `echo 'show databases' | mysql -u$usr -p$pwd | grep -v Database | grep -v information_schema`;

do
echo $a
  mkdir -p $savepath/$a
  chmod 700 $savepath/$a
  echo "Dumping database: $a"
echo
for i in `mysqldump --no-data -u $usr -p$pwd $a | grep 'CREATE TABLE' | sed -e 's/CREATE TABLE //' | sed -e 's/(.*//' | sed -e 's/\ /|/g' |sed -e's/|$//'`
  do
   echo "i = $i";
   c=`echo $i|sed -e's/|/\ /g'|sed -e 's/\`//g'`;
   echo " * Dumping table: $c"
   mysqldump --compact --allow-keywords --add-drop-table --skip-dump-date -q -a -c -u$usr -p$pwd $a "$c" > "$savepath/$a/$c.sql"
   gzip -f "$savepath/$a/$c.sql"
   chmod 600 "$savepath/$a/$c.sql.gz"
  done
done
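
To run this nightly, a cron entry along these lines could be used (script path is an example):

0 2 * * * /root/scripts/mysql-table-backup.sh > /dev/null 2>&1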

Thursday, 10 July 2014

Enable text console for HVM DomU

 Is there any way to change from the graphical console [VNC] to the non-graphical console (Xen console)?

For HVM guest, you need to enable serial port on domU config file (example here: http://pastebin.com/fb6fe631), and setup domU to use serial port (ttyS0 on Linux) by modifying (for Linux domU) /boot/grub/menu.lst, /etc/inittab, and /etc/securetty.

If it's PV guest, you need to set up domU to use xen console (which is xvc0 on current xen version, hvc0 on pv_ops kernel). It's similar to setting up domU for serial console, you just need to change ttyS0 to hvc0. An example of domU setup that can use both xvc0 and vnc console is here : http://pastebin.com/f6a5022bf




Reference 1:
----------------
Part 2, converting HVM guest to PV guest
#=======================================================================

First we need to install kernel-xen with correct initrd
- yum install kernel-xen
- edit /boot/grub/menu.lst so it looks like this
#=================================================
default=0
timeout=5
serial --unit=0 --speed=9600
terminal --timeout=5 serial console
title CentOS (2.6.18-128.1.6.el5xen)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-128.1.6.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0
    initrd /initrd-2.6.18-128.1.6.el5xen.img
title CentOS (2.6.18-128.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0 ide0=noprobe
    initrd /initrd-2.6.18-128.el5.img
#=================================================

- edit /etc/sysconfig/kernel so it looks like this
#=================================================
UPDATEDEFAULT=yes
DEFAULTKERNEL=kernel-xen
#=================================================

- edit /etc/modprobe.conf so it looks like this
#=================================================
alias eth0 xennet
alias scsi_hostadapter xenblk
#=================================================

-recreate initrd
cd /boot
mv initrd-2.6.18-128.1.6.el5xen.img initrd-2.6.18-128.1.6.el5xen.img.bak
mkinitrd -v --builtin=xen_vbd --preload=xenblk initrd-2.6.18-128.1.6.el5xen.img 2.6.18-128.1.6.el5xen

Next we need to allow login from xvc0 (the default console)
- edit /etc/inittab and add a line like this near the end
#=================================================
xvc:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
#=================================================

- add "xvc0" to the end of /etc/securetty
- shutdown domU

Next we start the PV domU

- create PV domU config. Mine looks like this
#=================================================
memory = "500"
maxmem = "8000"

vcpus=8
vcpu_avail=1

disk =    [
    'phy:/dev/rootVG/testlv,hda,w',
    ]
vif =    [
    'mac=00:16:3E:49:CA:65, bridge=br6',
    ]
vfb =['type=vnc,vnclisten=0.0.0.0']
bootloader="/usr/bin/pygrub"
#=================================================

- start up domU, connect to its console (xm create -c ...)

#=======================================================================
End of part 2




Reference 2:
-------------
Part 1. Creating a Centos HVM domU with working PV drivers
#=======================================================================

start with standard Centos 5.3 x86_64 HVM install.
- my HVM domU config file :
#=================================================
memory = 500

vif = [ 'mac=00:16:3E:49:CA:65, bridge=br6' ]
disk =    [
    'phy:/dev/rootVG/testlv,hda,w',
    'file:/data/iso/centos.iso,hdc:cdrom,r',
    ]

boot="cd"

device_model = '/usr/lib64/xen/bin/qemu-dm'
kernel = "/usr/lib/xen/boot/hvmloader"
builder='hvm'

sdl=0
vnc=1
vnclisten="0.0.0.0"
#vncunused=0
vncpasswd=''
#stdvga=0
serial='pty'
#localtime=1

usbdevice='tablet'
acpi=1
apic=1
pae=1

vcpus=1
#=================================================

Note boot="cd". With this config, if you're using a "fresh" LVM volume or image file, the hard disk will be unbootable initially and it will boot from CD. After installation it will automatically boot from the hard disk.

- if you want serial text console (like I do), on DVD installation splash screen start installation with
linux text console=vga console=ttyS0

- during package selection, unselect "desktop GNOME" if you want text login like I do. Although not required, this will reduce resource needs (e.g. memory) and make subsequent setup easier.
- proceed until the installation is finished


activate PV drivers

- edit /boot/grub/menu.lst so it looks like this (note ide0=noprobe)
#=================================================
default=0
timeout=5
serial --unit=0 --speed=9600
terminal --timeout=5 serial console
title CentOS (2.6.18-128.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0 ide0=noprobe
    initrd /initrd-2.6.18-128.el5.img
#=================================================

- edit /etc/modprobe.conf so it looks like this
#=================================================
#alias eth0 8139cp
blacklist 8139cp
blacklist 8139too
alias scsi_hostadapter ata_piix
# xen HVM
alias eth0 xen-vnif
alias scsi_hostadapter1 xen-vbd
#=================================================

-recreate initrd
cd /boot
mv initrd-2.6.18-128.el5.img initrd-2.6.18-128.el5.img.bak
mkinitrd -v initrd-2.6.18-128.el5.img 2.6.18-128.el5

- init 6
- reconnect to domU console (with either "xm console" or vncviewer)
- login, check whether xen-vbd activates correctly
# ls -la /sys/class/net/eth0/device
lrwxrwxrwx 1 root root 0 Apr 25 11:50 /sys/class/net/eth0/device -> ../../../devices/xen/vif-0
# ls -la /sys/block/hda/device
lrwxrwxrwx 1 root root 0 Apr 25 11:50 /sys/block/hda/device -> ../../devices/xen/vbd-768

#=======================================================================
End of part 1.


Reference 3:
--------------

Xen has a built in console when creating paravirtualized DOMU's, but this does not extend to hardware virtualized ones. In this case, we need to modify the configuration file, then set the DOM0 up to send messages and allow logins from the serial console.



This is basically like setting up a computer with a serial console and connecting to it via a serial cable.


Instructions for centos.

    in configuration file for DOMU (on DOM0), add the line:
        serial='pty'
    In DOMU
        edit /etc/inittab and find line which starts with co:2345 and
            comment any line that looks like ??:2345 by adding a pound sign in front (#)
            Find the line which says

            sT0:23:respawn:/sbin/getty -L ttyS0 9600 vt100

            and uncomment it by removing the pound sign in front of it
            To make the changes immediate, without rebooting the server, enter the command

            init q # or kill -HUP 1

            to tell init to reload. At this point, you should be able to execute the command xm console domainname from the DOM0

 
- edit /boot/grub/menu.lst so it looks like this (note ide0=noprobe)
#=================================================
default=0
timeout=5
serial --unit=0 --speed=9600
terminal --timeout=5 serial console
title CentOS (2.6.18-128.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0 ide0=noprobe
    initrd /initrd-2.6.18-128.el5.img


 - add "ttys0" to the end of /etc/securetty.

-Reboot the server.

For Ubuntu,

1) Create a file called /etc/init/ttyS0.conf containing the following:
# ttyS0 - getty
#
# This service maintains a getty on ttyS0 from the point the system is
# started until it is shut down again.

start on stopped rc or RUNLEVEL=[12345]
stop on runlevel [!12345]

respawn
exec /sbin/getty -L 115200 ttyS0 vt102
 
2) Ask upstart to start the getty
sudo start ttyS0



3). edit /etc/default/grub. At the bottom of the file, add the following three lines

        GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,38400n8"
        GRUB_TERMINAL=serial
        GRUB_SERIAL_COMMAND="serial --speed=38400 --unit=0 --word=8 --parity=no --stop=1"

        Execute command grub-mkconfig > /boot/grub/grub.cfg
    reboot DOMU and you should be able to access the console via xm console. NOTE: this is a very basic console, so don't expect anything pretty.




Thursday, 3 July 2014

How to create VM in xen virtualization

The command below will create an 8GB file that will be used as an 8GB drive. The whole file will be written to disk in one go so may take a short while to complete.
dd if=/dev/zero of=/xenimages/test01/disk1.img oflag=direct bs=1M count=8192

Alternatively, you can use the command below to create the same size file as a sparse file. What this does is create the file, but only take up disk space as the file is used. In this case the file will only really take about 1mb of disk initially and grow as you use it.
dd if=/dev/zero of=/xenimages/test01/disk1.img oflag=direct bs=1M seek=8191 count=1
There are pros and cons of using sparse files. On one hand they only take as much disk as is actually used, on the other hand the file can become fragmented and you could run out of real disk if you overcommit space.
Next up we’ll mount the install CD and export it over nfs so that xen can use it as a network install.
mkdir /tmp/centos52
mount /dev/hda /tmp/centos52 -o loop,ro

Just to check the mount went OK: ls /tmp/centos52 should show the files.
Now run the export:
exportfs *:/tmp/centos52
Now we’ll create the xen config file for our new instance. The default location for xen config files is /var/xen so that’s where ours will go.
I’m going to call my VM test01, so I’ll create a file /var/xen/test01 that contains the following initial configuration:
kernel = "/tmp/centos52/images/xen/vmlinuz"
ramdisk = "/tmp/centos52/images/xen/initrd.img"
name = "test01"
memory = "256"
## disk = [ 'tap:aio:/xenimages/test01/disk1.img,xvda,w', ]
disk = [ 'file:/xenimages/test01/disk1.img,xvda,w', ]
vif = [ 'bridge=eth0', ]
vcpus=1
on_reboot = "destroy"
on_crash = "destroy"
Note that if the install media is mounted on a different machine from your xen machine, then you will need to NFS-mount the install directory on the xen host in order for the above config to kick off the installer, e.g.
mount <IP address>:/tmp/centos52 /tmp/centos52
So, let's boot the new instance and start the installer.
xm create test01
After a moment or two the console should return with something like "Started domain test01".
Now lets connect to the console and proceed with the install:
xm console test01
Or if you prefer the previous two commands can be combined into one: xm create test01 -c.
From here on you should work through the standard text mode installer.
The points to note are:
  • For installation image select “NFS image”. Then in the later nfs panel enter your PC’s IP address for the servername and /tmp/centos52 (or wherever you mounted the cd) as the directory.
  • I also specified a manual IP address for my VM. I selected my router's IP for the gateway and DNS server, so that I can access the internet from the VM later.
  • The hard drive is named xvda, as specified in the config file. This will need to be partitioned and formatted by the installer.
The rest of the install is fairly straightforward. If in doubt just go with the defaults, although it's probably a good idea to set a manual IP address in your subnet range so that you can easily ssh onto the VM.
Note that to release the console from your VM, hold down Ctrl and press the ] key.
When the install is complete the new domain will need to be shut down (you'll be prompted to 'restart' by the installer; this will in fact shut down the VM because we set the on_reboot option to destroy), and then the xen config file must be modified to allow the new VM to boot.
So, edit the config file that we created earlier and comment out the kernel and ramdisk lines. You should also change the on_crash and on_reboot actions to restart.
So the edited config file now looks like this:
## kernel = "/tmp/centos52/images/xen/vmlinuz"
## ramdisk = "/tmp/centos52/images/xen/initrd.img"
name = "test01"
memory = "256"
## disk = [ 'tap:aio:/xenimages/test01/disk1.img,xvda,w', ]
disk = [ 'file:/xenimages/test01/disk1.img,xvda,w', ]
vif = [ 'bridge=eth0', ]
vcpus=1
on_reboot = "restart"
on_crash = "restart"
Finally we can boot the new VM instance:
xm create test01 -c
and log in as root. You should also be able to ssh onto it from your network.
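Once it is up, a quick check from Dom0 shows the new domain alongside Domain-0:

xm list     # the new domain "test01" should be listed with its memory and VCPU settings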

Monday, 30 June 2014

Add a specific line to multiple files using a for loop

We need to add the line MAX_EMAIL_PER_HOUR=500 (or change it from unlimited to 500)
in all package files under the folder.

Step 1
---------
cp -rpf /var/cpanel/packages /var/cpanel/packages_org
Step 2
--------
2).ls /var/cpanel/packages > test.txt

 # Note: remove any file names that contain spaces from test.txt and add the MAX_EMAIL_PER_HOUR=500 entry to those files manually, because the for loop in Step 3 splits on whitespace and will not treat such a name as a single file (a space-safe alternative is sketched after Step 3).
E.g.:
 johnhe3_WebFarm Beef (a single file)
The for loop treats it as two files, johnhe3_WebFarm and Beef.

Step 3 [run from inside /var/cpanel/packages]
--------
3).for i in `cat test.txt`; do if [ -z `grep -w 'MAX_EMAIL_PER_HOUR' "$i" | cut -d = -f1` ]; then echo -e 'MAX_EMAIL_PER_HOUR=500' >> "$i"; else sed -i 's/MAX_EMAIL_PER_HOUR=unlimited/MAX_EMAIL_PER_HOUR=500/g' "$i"; fi; done
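
As noted in Step 2, the for loop splits file names on spaces; a while-read loop avoids that. A minimal sketch along the same lines (try it on the backed-up copy first):

cd /var/cpanel/packages
ls | while IFS= read -r f; do
  [ -f "$f" ] || continue                      # skip anything that is not a regular file
  if grep -qw 'MAX_EMAIL_PER_HOUR' "$f"; then
    sed -i 's/MAX_EMAIL_PER_HOUR=unlimited/MAX_EMAIL_PER_HOUR=500/g' "$f"
  else
    echo 'MAX_EMAIL_PER_HOUR=500' >> "$f"
  fi
done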

Friday, 27 June 2014

Kernel Compile

Kernel compilation:
-------------------

cd /usr/src

wget ftp://ftp.kernel.org/pub/linux/kernel/v3.x/linux-3.13.6.tar.gz

tar xvf linux-3.13.6.tar.gz

cd linux-3.13.6

make menuconfig
--------------------------------
This is for Xen virtualization support

Go into Processor type and features

Statically enable all XEN features

Go back to the main menu and enter the Device Drivers menu, then enter the Block devices menu

Statically enable the 2 XEN options

Go back to the Device Drivers menu and go down to XEN driver support

Statically enable all features

Go back to Device Drivers, go into Network device support and statically enable the 2 XEN options at the bottom

Exit out and save.
-----------------------------------

If you run " make menuconfig " directly and save, it creates a brand new .config file.
But if you first copy the running kernel's config-`uname -r` file (from /boot) to /usr/src/linux-3.13.6/.config and then run " make menuconfig ",
the new entries [e.g. xen support] are added on top of that existing config.
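
For example, to seed the build with the running kernel's config before enabling the Xen options:

cp /boot/config-$(uname -r) /usr/src/linux-3.13.6/.config
cd /usr/src/linux-3.13.6
make menuconfig     # now enable the Xen options on top of the existing config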

Note: to make sure all the options are selected, run

cat /usr/src/linux-3.13.6/.config | grep XEN

You should see output similar to the following:

CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PRIVILEGED_GUEST=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
CONFIG_XEN_DEBUG_FS=y
CONFIG_PCI_XEN=y
CONFIG_XEN_PCIDEV_FRONTEND=y
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_NETXEN_NIC=m
CONFIG_XEN_NETDEV_FRONTEND=y
CONFIG_XEN_NETDEV_BACKEND=y
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
# CONFIG_XEN_WDT is not set
CONFIG_XEN_FBDEV_FRONTEND=y
CONFIG_XEN_BALLOON=y
CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=y
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=y
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=y
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_SWIOTLB_XEN=y
CONFIG_XEN_PCIDEV_BACKEND=y
CONFIG_XEN_PRIVCMD=y
CONFIG_XEN_ACPI_PROCESSOR=y

If it looks good then continue; otherwise, correct the configuration before proceeding.


SECTION 2:
----------------------------------
make bzImage

make modules

make modules_install

cp -a arch/x86/boot/bzImage /boot/vmlinuz-3.13.6

cp -a System.map /boot/System.map-3.13.6

cp -a .config /boot/config-3.13.6

depmod -a

mkinitramfs -o /boot/initrd.img-3.13.6 3.13.6

Modify GRUB to boot in XEN mode

vi /boot/grub/grub.conf

Scroll down to the current entry and add the following above your existing boot config; you will need to edit the lines to match your root device and file paths.

title Xen 4.4.0 / Debian GNU/Linux, kernel 3.13.6
root (hd0,0)
kernel /xen.gz
module /vmlinuz-3.13.6 root=UUID=03f9e700-ba18-41a1-bbe7-65a372716c73 ro console=tty0
module /initrd.img-3.13.6

---------------------------------------------------

We can also use the commands below instead of Section 2:

--------------------------------------------------
Compile the main kernel:
# make
Compile the kernel modules:
# make modules
Install the kernel modules:
# make modules_install
At this point, you should see a directory named /lib/modules/3.13.6/ in your system.
 
Install the new kernel on the system:
# make install
The make install command will create the following files in the /boot directory.
  • vmlinuz-3.13.6 – The actual kernel
  • System.map-3.13.6 – The symbols exported by the kernel
  • initrd.img-3.13.6 – The initrd image, a temporary root file system used during the boot process
  • config-3.13.6 – The kernel configuration file
----------------------------------------------

save and reboot.
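
After the reboot, a couple of quick sanity checks (not part of the original notes; the second command depends on which Xen toolstack you use):

uname -r       # should now report 3.13.6
xm info        # or 'xl info' with the newer toolstack; confirms the box booted under the Xen hypervisor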



Sunday, 25 May 2014

Automount using cifs


Put an entry like the one below in /etc/fstab:

//192.168.1.1/backup /backup cifs defaults,noatime,username=root,password=PASSWD  0 0
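
A password in /etc/fstab is readable by anyone on the box; a common alternative is a credentials file. A sketch (the path /etc/cifs-credentials is just an example):

/etc/cifs-credentials (chmod 600):
username=root
password=PASSWD

Then in /etc/fstab:
//192.168.1.1/backup /backup cifs defaults,noatime,credentials=/etc/cifs-credentials  0 0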
 
 

Saturday, 24 May 2014

Multiple Rsync[parallel] during data transfer

Sometimes a data transfer is very slow due to network conditions.
In that case, we can run several rsync processes in parallel to transfer the data to the other server more efficiently.

Script:
-----------
export SRCDIR="/home/."; -->Source Directory
export DESTDIR="root@server.example.com:/home/."; --> Destination Directory
export THREADS="8";
rsync -lptgoDvzd $SRCDIR $DESTDIR; --> transfer the top-level files & folders first.
cd $SRCDIR;
find . -type f | xargs -n1 -P$THREADS -I% rsync -azR % $DESTDIR; --> rsync each file in a separate process; -R preserves its relative path under the destination.
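
Once the parallel pass finishes, a final ordinary rsync is a simple way to pick up anything that was missed and confirm the copy is complete (optional, just a suggestion):

rsync -avz $SRCDIR $DESTDIR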

Monday, 5 May 2014

info [rebuildhttpdconf] Unable to determine group for user

-Unfortunately, the virtual host entries for some users are missing from the apache configuration.

You also see the below error while rebuilding the apache configuration.

info [rebuildhttpdconf] Unable to determine group for user

It seems that the user's group entry is missing from the /etc/group file.

Fix:

First, check that user's entry in /etc/passwd:

E.g.

grep xxxx /etc/passwd
xxxx:x:778:779::/home/xxxx:/bin/bash

779 is the GID for that user.

You need to add the below entry in the /etc/group file.

xxxx:x:779:

-Once again rebuild the apache configuration, and it will create the missing vhost entries.
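
Instead of editing /etc/group by hand, groupadd can create the missing group with the expected GID (values taken from the example above; on cPanel the rebuild script is /scripts/rebuildhttpdconf):

groupadd -g 779 xxxx
/scripts/rebuildhttpdconf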

Cagefs enabled user[php selector] gives 500 error

You need to manually reset the PHP selector and PHP version for this user:

# cagefsctl --setup-cl-selector
# /usr/bin/cl-selector --select=php --version=5.3 --user=xxxxx

Sunday, 27 April 2014

Routing Concept1

Sometimes you have more than one router in your network, and want different containers to use different routers. Other times you may have a single HN with IP addresses on different networks and want to assign containers addresses from those networks.
Let's say you have a HN with an IP address in network 192.168.100.0/24 (192.168.100.10) and an IP address in 192.168.200.0/24 (192.168.200.10). Maybe those addresses are on different VLANs. Maybe one is an internal network and the other faces the wider internet. Maybe you have 10 different networks assigned to the HN. It does not matter as long as there is a gateway on each of those networks. In our example we will assume the gateways are 192.168.100.1 and 192.168.200.1. You want any container assigned an address in the 192.168.100.0/24 network to use 192.168.100.1 and any container assigned an address in the 192.168.200.0/24 network to use 192.168.200.1.
By default the network traffic coming from a container will use the default gateway on the HN to reach the rest of the world. If we want our containers to use the gateways on their respective networks we need to configure source based routing. This involves creating an additional routing table to redirect the traffic.
For example:
# /sbin/ip rule add from 192.168.100.0/24 table 10000
# /sbin/ip route add throw 192.168.100.0/24 table 10000
# /sbin/ip route add default via 192.168.100.1 table 10000
The first line adds a routing rule. This rule tells the system to use an alternate routing table when trying to route packets from a certain source. In this case we are telling the system that if a packet originates from a 192.168.100.0/24 address we should use routing table 10000. The table number is unique and simply must be an unused table number from your system. I tend to start at 10000, but you can start your number wherever is convenient. To see a list of tables in use you can use:
# /sbin/ip rule list
Next we add two routing rules to table 10000. The first one is a throw rule. A throw rule merely tells the system to stop processing the current table if the destination address matches the criteria provided. This will allow the host system and the VPSs to continue to reach other systems on our 192.168.100.0/24 network without trying to use the default gateway we provide. And the second rule provides that default gateway.
Now all we need to do is repeat this for our second network:
# /sbin/ip rule add from 192.168.200.0/24 table 10001
# /sbin/ip route add throw 192.168.200.0/24 table 10001
# /sbin/ip route add default via 192.168.200.1 table 10001
Here we have changed the networks in the rule and routes and used a different table number. Everything else stays the same. You can, of course, add as many routes to a particular table as you like. If you want to allow a container in the 192.168.100.0/24 network to reach the 192.168.200.0/24 network without using the gateway, you can add another throw rule and allow the HN's default routing table to take effect:
# /sbin/ip route add throw 192.168.200.0/24 table 10000
A previous version of this page suggested adding an additional route in order to allow the HN to contact the container. Indeed this would be required if we did not provide the throw rule, but maintaining such a configuration requires adding new rules for every container. Using vzctl set <ctid> --ipadd <ip> adds these rules to the main routing table by default, but not our custom routing table. The configuration here only requires rules to be modified when changes are made to the networks, not each container.
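
To verify the setup, you can list the rules and dump one of the custom tables, e.g.:

# /sbin/ip rule list
# /sbin/ip route show table 10000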

Saturday, 26 April 2014

Tracing a program

Suppose some program on your system refuses to work, or works much slower than you expected. One way to investigate is to use the strace program to follow the system calls performed by a given process.

 Use of strace

To use strace, you would commonly give a command like the following:
strace -o strace.out -ff touch /tmp/file

Here 

  • -o strace.out option means that strace program will output all information to the file named strace.out;
  • -ff means to strace the forked children of the program. Each child's trace output will be placed in a strace.out.PID file, where PID is the PID of that child. If you want all the output in a single file, use the -f argument instead (i.e. a single f, not double).
  • touch /tmp/file is the program (with its arguments) to be traced.
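
Two other strace options worth knowing: -e limits the trace to particular system calls, and -p attaches to an already-running process. For example:

strace -e trace=open,stat,access -o strace.out touch /tmp/file
strace -p 1234 -o strace.out     # 1234 is a placeholder PID of the process to attach to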

 Strace results

So this is what we have in strace.out:
execve("/usr/bin/touch", ["touch", "/tmp/file"], [/* 51 vars */]) = 0
uname({sys="Linux", node="dhcp0-138", ...}) = 0
brk(0)                                  = 0x804f000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY)      = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=47843, ...}) = 0
mmap2(NULL, 47843, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7f1a000
close(3)                                = 0
open("/lib/libc.so.6", O_RDONLY)        = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\360V\1"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=1227872, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f19000
mmap2(NULL, 1142148, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7e02000
mmap2(0xb7f13000, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x110) = 0xb7f13000
mmap2(0xb7f17000, 7556, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7f17000
close(3)                                = 0
mprotect(0xb7f13000, 4096, PROT_READ)   = 0
munmap(0xb7f1a000, 47843)               = 0
open("/dev/urandom", O_RDONLY)          = 3
read(3, "v\0265\313", 4)                = 4
close(3)                                = 0
brk(0)                                  = 0x804f000
brk(0x8070000)                          = 0x8070000
open("/tmp/file", O_WRONLY|O_NONBLOCK|O_CREAT|O_NOCTTY|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)
utime("/tmp/file", NULL)                = -1 EACCES (Permission denied)
write(2, "touch: ", 7)                  = 7
write(2, "cannot touch `/tmp/file\'", 24) = 24
write(2, ": Permission denied", 19)     = 19
write(2, "\n", 1)                       = 1
exit_group(1)                           = ?



In this case we can see that the problem is access to /tmp/file:


open("/tmp/file", O_WRONLY|O_NONBLOCK|O_CREAT|O_NOCTTY|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)
utime("/tmp/file", NULL)                = -1 EACCES (Permission denied)
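
From here the natural follow-up is to check why access is denied, for example the permissions and ownership on the path and which user the failing command ran as:

ls -ld /tmp /tmp/file
id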