Thursday, 24 July 2014

How to Mount the VM Disk

Mount a VM disk that is built on an image file:
-------------------------------------------------------------------------

losetup /dev/loop0 /root/mailserver.img

- Attach the image file to a loop device.

kpartx -a -v /dev/loop0

- Map the VM partitions:

Output:
------------
add map loop0p1 (253:4): 0 1024000 linear /dev/loop0 2048
add map loop0p2 (253:5): 0 7362560 linear /dev/loop0 1026048

Then mount the VM partitions,

mount /dev/mapper/loop0p1 /mnt/boot
mount /dev/mapper/loop0p2 /mnt/root    (if the root partition is not LVM)

If the VM's root partition is an LVM physical volume, you will receive the error below:

mount: unknown filesystem type 'LVM2_member'

This means the partition is an LVM member that has not been scanned and activated yet, so mount cannot use it directly. Scan for it:

pvscan
vgscan && lvscan

Output:
-------------
inactive          '/dev/VolGroup/lv_root' [17.54 GiB] inherit
  inactive          '/dev/VolGroup/lv_swap' [1.97 GiB] inherit
  ACTIVE            '/dev/vg0/ubuntu' [10.00 GiB] inherit
  ACTIVE            '/dev/vg0/centos' [10.00 GiB] inherit
  ACTIVE            '/dev/vg0/ubuntuhvm' [10.00 GiB] inherit
  ACTIVE            '/dev/vg0/images' [20.00 GiB] inherit

- You need to activate the newly detected volume group:

 vgchange -ay VolGroup

 mount /dev/VolGroup/lv_root /mnt/centos

That's All.


Finally, the reverse process is:
# umount /mnt
# vgchange -an VolGroup
# kpartx -dv /dev/loop0
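
If the image was attached with losetup as in the first step, the loop device can also be detached once the partition mappings are removed:

# losetup -d /dev/loop0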

 Reference:
------------------






Mount a virtual machine's LVM partition [inside the VM] on the KVM host


1. Scan the volume groups:
  #vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg     1  10   0 wz--n- 500G  300G
2. Scan the logical volumes.
Notice that all LVs are active and belong to the vg volume group.
Our VM is installed on one of the host's LVs.
  #lvscan
  ACTIVE            '/dev/vg/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/vg/home' [2.00 GiB] inherit
  ACTIVE            '/dev/vg/swap' [4.00 GiB] inherit
  ACTIVE            '/dev/vg/opt' [10.00 GiB] inherit
  ACTIVE            '/dev/vg/server' [20.00 GiB] inherit
3. Map the partitions inside the VM's disk LV:
 #kpartx -av /dev/vg/yumserver
4. Scan the logical volumes again. Notice that there are now two inactive LVs; these belong to the guest VM installed on the host LV.
  #lvscan
  inactive          '/dev/VolGroup/lv_root' [17.54 GiB] inherit
  inactive          '/dev/VolGroup/lv_swap' [1.97 GiB] inherit
  ACTIVE            '/dev/vg/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/vg/home' [2.00 GiB] inherit
  ACTIVE            '/dev/vg/swap' [4.00 GiB] inherit
  ACTIVE            '/dev/vg/opt' [10.00 GiB] inherit
  ACTIVE            '/dev/vg/server' [20.00 GiB] inherit
5. Scan the volume groups again.
Notice that there are now two volume groups: one belongs to the KVM host and the other to the guest VM.
  #vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup   1   2   0 wz--n- 19.51g    0
  vg         1  10   0 wz--n- 500G  300G
6. Perform this step only if the guest VM's volume group has the same name as the KVM host's volume group (see the note after this procedure on renaming a duplicate VG by its UUID).
  #vgrename GuestVMvolumegroup  newvolgroup
  In our case the names are different, so we skip this step.
7. Activate the guest VM's volume group (VolGroup):
  #vgchange -ay VolGroup   
  2 logical volume(s) in volume group "VolGroup" now active
8. Scan the logical volumes again; now all of them are active:
  #lvscan
  ACTIVE            '/dev/VolGroup/lv_root' [17.54 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_swap' [1.97 GiB] inherit
  ACTIVE            '/dev/vg/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/vg/home' [2.00 GiB] inherit
  ACTIVE            '/dev/vg/swap' [4.00 GiB] inherit
  ACTIVE            '/dev/vg/opt' [10.00 GiB] inherit
  ACTIVE            '/dev/vg/server' [20.00 GiB] inherit
9. Mount the VM's LV:
  #mount /dev/VolGroup/lv_root /mnt/
10. Do whatever you need with the mounted filesystem.

11. Unmount the LV:
   #umount /mnt/
12. Deactivate the guest VM's volume group (VolGroup):
  #vgchange -an VolGroup   
13. List the LVs again:
  #lvscan
  inactive          '/dev/VolGroup/lv_root' [17.54 GiB] inherit
  inactive          '/dev/VolGroup/lv_swap' [1.97 GiB] inherit
  ACTIVE            '/dev/vg/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/vg/home' [2.00 GiB] inherit
  ACTIVE            '/dev/vg/swap' [4.00 GiB] inherit
  ACTIVE            '/dev/vg/opt' [10.00 GiB] inherit
  ACTIVE            '/dev/vg/server' [20.00 GiB] inherit
14. Perform this step only if you performed step 6 (rename the volume group back to its original name).
  #vgrename  newvolgroup GuestVMvolumegroup
In our case the names are different, so we skip this step.

15. Remove the partition mappings:
#kpartx  -dv /dev/vg/yumserver
16. List the volume groups:
  #vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg     1  10   0 wz--n- 500G  300G
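
A note on step 6: when the guest's volume group really does have the same name as the host's, it cannot be addressed by name, so the rename has to be done by UUID. A minimal sketch, assuming the clashing name is VolGroup and the temporary name is newvolgroup (the UUID placeholder must be replaced with the guest VG's actual UUID):

  # list VG names together with their UUIDs
  vgs -o vg_name,vg_uuid
  # rename the guest's VG, identified by its UUID, to a temporary name
  vgrename <UUID-of-guest-VG> newvolgroup
  # when finished (step 14), rename it back before booting the guest,
  # since the guest's grub and fstab refer to the original name
  vgrename newvolgroup VolGroup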



Mount a VM disk that is built on an LVM volume:
-----------------------------------------------------------------------

[root@xen3 xen]# kpartx -av /dev/mapper/vg0-centos
add map vg0-centos1 (253:4): 0 409600 linear /dev/mapper/vg0-centos 2048
add map vg0-centos2 (253:5): 0 16384000 linear /dev/mapper/vg0-centos 411648
add map vg0-centos3 (253:6): 0 4096000 linear /dev/mapper/vg0-centos 16795648

Then it is very simple,

vg0-centos1 - boot partition

mount /dev/mapper/vg0-centos1 /mnt/boot

vg0-centos2 - root partition

mount /dev/mapper/vg0-centos2 /mnt/root

The remaining one (vg0-centos3) is the swap partition.

Reverse:

umount /dev/mapper/vg0-centos1

umount /dev/mapper/vg0-centos2

 kpartx -dv /dev/mapper/vg0-centos

Cross-check with:

ls -all /dev/mapper/
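
If you are unsure which mapping holds which filesystem, blkid on the mapper devices (names from the example above) shows the filesystem type before you mount anything:

blkid /dev/mapper/vg0-centos1
blkid /dev/mapper/vg0-centos2
blkid /dev/mapper/vg0-centos3    # this one will report TYPE="swap"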



Tuesday, 22 July 2014

Mapping physical storage to domU disk

Protocol     Description                                                        Example
phy:         Block devices, such as a physical disk, in domain 0                phy:/dev/sdc
file:        Raw disk images accessed by using loopback                         file:/path/file
nbd:         Raw disk images accessed by using NBD                              nbd:ip_port
tap:aio:     Raw disk images accessed by using blktap. Similar to loopback      tap:aio:/path/file
             but without using loop devices.
tap:cdrom:   CD reader block devices                                            tap:cdrom:/dev/sr0
tap:vmdk:    VMware disk images accessed by using blktap                        tap:vmdk:/path/file
tap:qcow:    QEMU disk images accessed by using blktap                          tap:qcow:/path/file
iscsi:       iSCSI targets using connections initiated from domain 0            iscsi:IQN,LUN
npiv:        Fibre Channel connections initiated from domain 0                  npiv:NPIV,LUN
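
For example, the first two protocols from the table would appear in a domU config file like this (device paths and guest device names are illustrative only):

disk = [ 'phy:/dev/sdc,xvda,w' ]
disk = [ 'file:/path/file.img,xvda,w' ]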

OpenSource Cloud Projects that you could FOCUS on !

*1. Hypervisor and Container*

*Docker.io* - an open-source engine for building, packing and running any
application as a lightweight container, built upon the LXC container
mechanism included in the Linux kernel. It was written by dotCloud and
released in 2013.

*KVM* - a lightweight hypervisor that was accepted into the Linux kernel in
February 2007. It was originally developed by Qumranet, a startup that was
acquired by Red Hat in 2008.

*Xen Project* - a cross-platform software hypervisor that runs on platforms
such as BSD, Linux and Solaris. Xen was originally written at the
University of Cambridge by a team led by Ian Pratt and is now a Linux
Foundation Collaborative Project.

*CoreOS* – a new Linux distribution that uses containers to help manage
massive server deployments. Its beta version was released in May 2014.

 *2. Infrastructure as a Service*

*Apache CloudStack* - an open-source IaaS platform with Amazon Web
Services (AWS) compatibility. CloudStack was originally created by
Cloud.com (formerly known as VMOps), a startup that was purchased by Citrix
in 2011. In April of 2012, CloudStack was donated by Citrix to the Apache
Software Foundation.

*Eucalyptus* - an open-source IaaS platform for building AWS-compatible
private and hybrid clouds. It began as a research project at UC Santa
Barbara and was commercialized in January 2009 under the name Eucalyptus
Systems.

*OpenNebula* - an open-source IaaS platform for building and managing
virtualized enterprise data centers and private clouds. It began as a
research project in 2005 authored by Ignacio M. Llorente and Rubén S.
Montero. Publicly released in 2008, development today is via the open
source model.

*OpenStack* - an open source IaaS platform, covering compute, storage and
networking. In July of 2010, NASA and Rackspace joined forces to create the
OpenStack project, with a goal of allowing any organization to build a
public or private cloud using the same technology as top cloud
providers.

 *3. Platform as a Service*

*CloudFoundry* - an open Platform-as-a-Service, providing a choice of
clouds, developer frameworks and application services. VMware announced
Cloud Foundry in April 2011 and built a partner ecosystem.

*OpenShift* - Red Hat’s Platform-as-a-Service offering. OpenShift is a
cloud application platform where application developers and teams can
build, test, deploy, and run their applications in a cloud environment. The
OpenShift technology came from Red Hat’s 2010 acquisition of start-up
Makara (founded in May 2008). OpenShift was announced in May 2011 and
open-sourced in April 2012.

 *4. Provisioning and Management Tool*

*Ansible* – an automation engine for deploying systems and applications.

*Apache Mesos* - a cluster manager that provides efficient resource
isolation and sharing across distributed applications, or frameworks. It
was created at the University of California at Berkeley's AMPLab and became
an Apache Foundation top level project in 2013.

*Chef* - a configuration-management tool, controlled using an extension of
Ruby. Released by Opscode in January 2009.

*Juju* - a service orchestration management tool released by Canonical as
Ensemble in 2011 and then renamed later that year.

*oVirt* - a feature-rich management system for virtualized
servers with advanced capabilities for hosts and guests. Red Hat first
announced oVirt as part of its emerging-technology initiative in 2008, then
re-launched the project in late 2011 as part of the Open Virtualization
Alliance.

*Puppet* - IT automation software that helps system administrators manage
infrastructure throughout its lifecycle. Founded by Luke Kanies in 2005.

*Salt* - a configuration management tool focused on speed and incorporating
orchestration features. Salt was written by Thomas S Hatch and first
released in 2011.

*Vagrant* - an open source tool for building and managing development
environments, often within virtual machines. Written in 2010 by Mitchell
Hashimoto and John Bender.

 *5. Storage*

*Camlistore* - a set of open source formats, protocols, and software for
modeling, storing, searching, sharing and synchronizing data. First
released by Google developers in 2013.

*Ceph* - a distributed object store and file system. It was originally
created by Sage Weil for a doctoral dissertation. After Weil’s graduation
in 2007, he continued working on it full-time at DreamHost as the
development team grew. In 2012, Weil and others formed Inktank to deliver
professional services and support. It was acquired by Red Hat in 2014.

*Gluster* - a scale-out, distributed file system. It is developed by the
Gluster community, a global community of users, developers and other
contributors. GlusterFS was originally developed by Gluster Inc., then
acquired by Red Hat in October 2011.

*Riak CS* - an open source storage system built on top of the Riak
key-value store. Riak CS was originally developed by Basho and launched in
2012, with the source subsequently released in 2013.

*Swift* - a highly available, distributed object store, ideal for
unstructured data. Developed as part of the OpenStack project.

Friday, 18 July 2014

Xen HVM Migration

Migrate VM[vm139]:

Source Node:
-------------

--- Logical volume ---
  LV Path                /dev/vg_grp/vm139_img
  LV Name                vm139_img
  VG Name                vg_grp
    LV Status              available
  # open                 3
  LV Size                10.00 GiB
  Current LE             320
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:24

Source Node:
-------------

Backup:
--------

lvcreate -n vm139_backup --size 15G /dev/vg_grp


mkfs.ext3 /dev/vg_grp/vm139_backup

mkdir -p /home/vm139_backup

mount /dev/vg_grp/vm139_backup /home/vm139_backup

xm shutdown vm139

dd if=/dev/vg_grp/vm139_img of=/home/vm139_backup/vm139_backup.img


Destination Node:
-----------------

lvcreate -n vm139_backup --size 15G /dev/vg_xenhvm

lvcreate -n vm139_img --size 10G /dev/vg_xenhvm

mkfs.ext3 /dev/vg_xenhvm/vm139_backup

mkdir -p /home/vm139_backup

mount /dev/vg_xenhvm/vm139_backup /home/vm139_backup


Transfer:
----------

scp -C /home/vm139_backup/vm139_backup.img root@x.x.x.x:/home/vm139_backup/
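
Before restoring, it is worth confirming that the image copied over intact; comparing checksums on both nodes (same path as above) should give identical output:

md5sum /home/vm139_backup/vm139_backup.img    # run on both source and destination nodes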

Restore:
---------

dd if=/home/vm139_backup/vm139_backup.img of=/dev/vg_xenhvm/vm139_img

Solusvm server:
-----------------

Need to move the VM from slave1 to slave2,

/scripts/vm-migrate 104 4


Then reboot the VM via SolusVM; it will create the config file.

POST Migration:
---------------

Source:
--------

umount /home/vm139_backup

lvremove /dev/vg_grp/vm139_backup

Destination:
------------

umount /home/vm139_backup

lvremove /dev/vg_xenhvm/vm139_backup


Note:
------

For IP transfer, there are two IP blocks in SolusVM:

1. ipblock2 for slave2
2. ipblock1 for slave1

- Grant the slave2 server permission to access and assign IPs from ipblock1, so the VMs keep the same IPs after migration.


Reference:
http://docs.solusvm.com/xen_migrations
http://www.nocser.net/clients/knowledgebase/421/Migrate-Solusvm-VPS-from-1-node-to-another-node-Updated.html

Monday, 14 July 2014

Port Mirroring

Port mirroring is an approach to monitoring network traffic that involves forwarding a copy of each packet from one network switch port to another.
Port mirroring enables the administrator to keep close track of switch performance by placing a protocol analyzer on the port that's receiving the mirrored data.
An administrator configures port mirroring by assigning a port from which to copy all packets and another port to which those packets will be sent. A packet bound for -- or heading away from -- the first port will be forwarded to the second port as well. The administrator must then place a protocol analyzer on the port that's receiving the mirrored data to monitor each segment separately.
Network administrators can use port mirroring as a diagnostic or debugging tool. 
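
On a Linux host acting as a bridge or router, a similar effect can be achieved with tc and the mirred action. A minimal sketch, assuming eth0 is the port to monitor and eth1 is the port where the analyzer sits (interface names are assumptions):

# attach an ingress qdisc to the monitored interface
tc qdisc add dev eth0 handle ffff: ingress
# mirror every ingress packet on eth0 to eth1
tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 action mirred egress mirror dev eth1
# run the protocol analyzer on the mirror port
tcpdump -i eth1 -nn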

Friday, 11 July 2014

Xen DomU booting process on HVM [pure HVM]

In this case, the boot process starts as follows:

        Welcome to CentOS
Starting udev: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
[  OK  ]
Setting hostname localhost.localdomain:  [  OK  ]
Setting up Logical Volume Management:   No volume groups found
[  OK  ]
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/xvda2
/dev/xvda2: clean, 18459/512064 files, 218237/2048000 blocks
[/sbin/fsck.ext4 (1) -- /boot] fsck.ext4 -a /dev/xvda1
/dev/xvda1: clean, 38/51200 files, 34256/204800 blocks
[  OK  ]
Remounting root filesystem in read-write mode:  [  OK  ]
Mounting local filesystems:  [  OK  ]
Enabling /etc/fstab swaps:  [  OK  ]
Entering non-interactive startup
ip6tables: Applying firewall rules: [  OK  ]
iptables: Applying firewall rules: [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0: 
Determining IP information for eth0... done.
[  OK  ]
Starting auditd: [  OK  ]
Starting system logger: [  OK  ]
Mounting filesystems:  [  OK  ]
Retrigger failed udev events[  OK  ]
Starting sshd: [  OK  ]
Starting postfix: [  OK  ]
Starting crond: [  OK  ]

- Because the VM acts like a standalone physical server, the PV driver messages are not shown during boot.

Xen DomU booting process [PV on HVM].


Centos:
-------------

- The kernel is loaded with the parameter ide0=noprobe [it prevents probing of the emulated disk & NIC, so the Xen PV drivers are used instead].

Linux version 2.6.32-431.el6.x86_64 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) ) #1 SMP Fri Nov 22 03:15:09 UTC 2013
Command line: ro root=UUID=07a30ea1-f06a-44e5-a85a-6e346bb9e3af rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quieti console=ttyS0 ide0=noprobe

E.g.

Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.

Booting paravirtualized kernel on Xen
NR_CPUS:4096 nr_cpumask_bits:15 nr_cpu_ids:15 nr_node_ids:1

 Xen HVM callback vector for event delivery is enabled

Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered (default)

pci_hotplug: PCI Hot Plug PCI Core version: 0.5
pciehp: PCI Express Hot Plug Controller Driver version: 0.4
acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5


input: Macintosh mouse button emulation as /devices/virtual/input/input2
xvda: xvda1 xvda2 xvda3

 xvda: xvda1 xvda2 xvda3

EXT4-fs (xvda2): INFO: recovery required on readonly filesystem
EXT4-fs (xvda2): write access will be enabled during recovery
EXT4-fs (xvda2): recovery complete
EXT4-fs (xvda2): mounted filesystem with ordered data mode. Opts:
dracut: Mounted root filesystem /dev/xvda2


Welcome to CentOS [boot screen]

Starting udev: udev: starting version 147

Initialising Xen virtual ethernet driver.

All services are  started.

That's all.










SSL redirect issue in cpanel


If a domain uses a shared IP and someone accesses the domain over https, the request is served from the default document root [htdocs] instead of the domain's own document root, so the redirect rules in the domain's .htaccess do not work either.


- Add the code below to index.html under htdocs:

[root@vm5 htdocs]# cat index.html
<html><head><script> window.location.href = (window.location.protocol != "http:") ? "http:" + window.location.href.substring(window.location.protocol.length) : "/cgi-sys/defaultwebpage.cgi"; </script></head><body></body></html>

Database Backup script

#!/bin/bash
# Dumps every database (except information_schema) table by table into $savepath.
export savepath='/var/mysqlbackups'
export usr='mysql user'   # <-- set your MySQL user here
export pwd=''             # <-- set your MySQL password here

# Create the backup directory if needed and clear out the previous run.
if [ ! -d $savepath ]; then
    mkdir -p $savepath
fi
chmod 700 $savepath
rm -rf $savepath/*

echo 'mySQL Backup Script'
echo 'Dumping individual tables..'

# Loop over all databases, skipping the header line and information_schema.
for a in `echo 'show databases' | mysql -u$usr -p$pwd | grep -v Database | grep -v information_schema`;
do
  echo $a
  mkdir -p $savepath/$a
  chmod 700 $savepath/$a
  echo "Dumping database: $a"
  echo
  # Pull the table names out of a schema-only dump (spaces are escaped as '|').
  for i in `mysqldump --no-data -u $usr -p$pwd $a | grep 'CREATE TABLE' | sed -e 's/CREATE TABLE //' | sed -e 's/(.*//' | sed -e 's/\ /|/g' |sed -e's/|$//'`
  do
   echo "i = $i";
   c=`echo $i|sed -e's/|/\ /g'|sed -e 's/\`//g'`;
   echo " * Dumping table: $c"
   mysqldump --compact --allow-keywords --add-drop-table --skip-dump-date -q -a -c -u$usr -p$pwd $a "$c" > "$savepath/$a/$c.sql"
   gzip -f "$savepath/$a/$c.sql"
   chmod 600 "$savepath/$a/$c.sql.gz"
  done
done
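
To run this automatically, the script can be scheduled from cron. A sketch, assuming it is saved as /root/mysqlbackup.sh (hypothetical path):

chmod +x /root/mysqlbackup.sh
# add with 'crontab -e': run every night at 02:00 and log the output
0 2 * * * /root/mysqlbackup.sh > /var/log/mysqlbackup.log 2>&1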

Thursday, 10 July 2014

Enable text console for HVM Domu

Is there any way to change the graphical console [VNC] to a non-graphical console (Xen console)?

For an HVM guest, you need to enable a serial port in the domU config file (example here: http://pastebin.com/fb6fe631), and set up the domU to use the serial port (ttyS0 on Linux) by modifying (for a Linux domU) /boot/grub/menu.lst, /etc/inittab, and /etc/securetty.

If it's a PV guest, you need to set up the domU to use the Xen console (which is xvc0 on the current Xen version, hvc0 on a pv_ops kernel). It's similar to setting up a domU for a serial console; you just need to change ttyS0 to hvc0. An example of a domU setup that can use both the xvc0 and VNC consoles is here: http://pastebin.com/f6a5022bf
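
As a quick sketch of the PV case (assuming a pv_ops kernel where the console is hvc0; this mirrors the xvc0 example in Reference 1 below with hvc0 substituted), the relevant changes inside the domU would be:

# /boot/grub/menu.lst - the kernel line should contain:  console=hvc0
# /etc/inittab - spawn a getty on the Xen console:
hvc:2345:respawn:/sbin/agetty hvc0 9600 vt100-nav
# /etc/securetty - allow root login on that console:
echo "hvc0" >> /etc/securetty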




Reference 1:
----------------
Part 2, converting HVM guest to PV guest
#=======================================================================

First we need to install kernel-xen with correct initrd
- yum install kernel-xen
- edit /boot/grub/menu.lst so it looks like this
#=================================================
default=0
timeout=5
serial --unit=0 --speed=9600
terminal --timeout=5 serial console
title CentOS (2.6.18-128.1.6.el5xen)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-128.1.6.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0
    initrd /initrd-2.6.18-128.1.6.el5xen.img
title CentOS (2.6.18-128.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0 ide0=noprobe
    initrd /initrd-2.6.18-128.el5.img
#=================================================

- edit /etc/sysconfig/kernel so it looks like this
#=================================================
UPDATEDEFAULT=yes
DEFAULTKERNEL=kernel-xen
#=================================================

- edit /etc/modprobe.conf so it looks like this
#=================================================
alias eth0 xennet
alias scsi_hostadapter xenblk
#=================================================

-recreate initrd
cd /boot
mv initrd-2.6.18-128.1.6.el5xen.img initrd-2.6.18-128.1.6.el5xen.img.bak
mkinitrd -v --builtin=xen_vbd --preload=xenblk initrd-2.6.18-128.1.6.el5xen.img 2.6.18-128.1.6.el5xen

Next we need to allow login from xvc0 (the default console)
- edit /etc/inittab and add a line like this near the end
#=================================================
xvc:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
#=================================================

- add "xvc0" to the end of /etc/securetty
- shutdown domU

Next we start the PV domU

- create PV domU config. Mine looks like this
#=================================================
memory = "500"
maxmem = "8000"

vcpus=8
vcpu_avail=1

disk =    [
    'phy:/dev/rootVG/testlv,hda,w',
    ]
vif =    [
    'mac=00:16:3E:49:CA:65, bridge=br6',
    ]
vfb =['type=vnc,vnclisten=0.0.0.0']
bootloader="/usr/bin/pygrub"
#=================================================

- start up the domU and connect to its console (xm create -c ...)

#=======================================================================
End of part 2




Reference 2:
-------------
Part 1. Creating a Centos HVM domU with working PV drivers
#=======================================================================

start with standard Centos 5.3 x86_64 HVM install.
- my HVM domU config file :
#=================================================
memory = 500

vif = [ 'mac=00:16:3E:49:CA:65, bridge=br6' ]
disk =    [
    'phy:/dev/rootVG/testlv,hda,w',
    'file:/data/iso/centos.iso,hdc:cdrom,r',
    ]

boot="cd"

device_model = '/usr/lib64/xen/bin/qemu-dm'
kernel = "/usr/lib/xen/boot/hvmloader"
builder='hvm'

sdl=0
vnc=1
vnclisten="0.0.0.0"
#vncunused=0
vncpasswd=''
#stdvga=0
serial='pty'
#localtime=1

usbdevice='tablet'
acpi=1
apic=1
pae=1

vcpus=1
#=================================================

Note boot="cd". With this config, if you're using a "fresh" LV or image file, the hard disk will be unbootable initially and the domU will boot from CD. After installation it will automatically boot from the hard disk.

- if you want a serial text console (like I do), start the installation from the DVD splash screen with
linux text console=vga console=ttyS0

- during package selection, unselect "desktop GNOME" if you want text login like I do. Although not required, this will reduce resource needs (e.g. memory) and make subsequent setup easier.
- proceed until the installation is finished


activate PV drivers

- edit /boot/grub/menu.lst so it looks like this (note ide0=noprobe)
#=================================================
default=0
timeout=5
serial --unit=0 --speed=9600
terminal --timeout=5 serial console
title CentOS (2.6.18-128.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0 ide0=noprobe
    initrd /initrd-2.6.18-128.el5.img
#=================================================

- edit /etc/modprobe.conf so it looks like this
#=================================================
#alias eth0 8139cp
blacklist 8139cp
blacklist 8139too
alias scsi_hostadapter ata_piix
# xen HVM
alias eth0 xen-vnif
alias scsi_hostadapter1 xen-vbd
#=================================================

-recreate initrd
cd /boot
mv initrd-2.6.18-128.el5.img initrd-2.6.18-128.el5.img.bak
mkinitrd -v initrd-2.6.18-128.el5.img 2.6.18-128.el5

- init 6
- reconnect to domU console (with either "xm console" or vncviewer)
- log in and check whether the PV drivers (xen-vnif, xen-vbd) activated correctly
# ls -la /sys/class/net/eth0/device
lrwxrwxrwx 1 root root 0 Apr 25 11:50 /sys/class/net/eth0/device -> ../../../devices/xen/vif-0
# ls -la /sys/block/hda/device
lrwxrwxrwx 1 root root 0 Apr 25 11:50 /sys/block/hda/device -> ../../devices/xen/vbd-768

#=======================================================================
End of part 1.


Reference 3:
--------------

Xen has a built-in console when creating paravirtualized domUs, but this does not extend to hardware-virtualized ones. In this case, we need to modify the configuration file, then set the domU up to send messages and allow logins from the serial console.



This is basically like setting up a computer with a serial console and connecting to it via a serial cable.


Instructions for CentOS:

    in the configuration file for the domU (on the Dom0), add the line:
        serial='pty'
    In the domU:
        edit /etc/inittab, find the line which starts with co:2345, and
            comment out any line that looks like ??:2345 by adding a pound sign (#) in front of it
            Find the line which says

            sT0:23:respawn:/sbin/getty -L ttyS0 9600 vt100

            and uncomment it by removing the pound sign in front of it
            To make the changes take effect immediately, without rebooting the server, enter the command

            init q # or kill -HUP 1

            to tell init to reload. At this point, you should be able to execute the command xm console domainname from the Dom0

 
- edit /boot/grub/menu.lst so it looks like this (note ide0=noprobe)
#=================================================
default=0
timeout=5
serial --unit=0 --speed=9600
terminal --timeout=5 serial console
title CentOS (2.6.18-128.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0 ide0=noprobe
    initrd /initrd-2.6.18-128.el5.img


 - add "ttys0" to the end of /etc/securetty.

-Reboot the server.

For Ubuntu,

1) Create a file called /etc/init/ttyS0.conf containing the following:
# ttyS0 - getty
#
# This service maintains a getty on ttyS0 from the point the system is
# started until it is shut down again.

start on stopped rc or RUNLEVEL=[12345]
stop on runlevel [!12345]

respawn
exec /sbin/getty -L 115200 ttyS0 vt102
 
2) Ask upstart to start the getty
sudo start ttyS0



3) Edit /etc/default/grub. At the bottom of the file, add the following three lines:

        GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,38400n8"
        GRUB_TERMINAL=serial
        GRUB_SERIAL_COMMAND="serial --speed=38400 --unit=0 --word=8 --parity=no --stop=1"

        Execute the command grub-mkconfig > /boot/grub/grub.cfg
    Reboot the domU and you should be able to access the console via xm console. NOTE: this is a very basic console, so don't expect pretty output.




Thursday, 3 July 2014

How to create VM in xen virtualization

The command below will create an 8GB file that will be used as an 8GB drive. The whole file will be written to disk in one go, so it may take a short while to complete.
dd if=/dev/zero of=/xenimages/test01/disk1.img oflag=direct bs=1M count=8192

Alternatively, you can use the command below to create the same size file as a sparse file. What this does is create the file, but only take up disk space as the file is used. In this case the file will only really take about 1MB of disk initially and grow as you use it.
dd if=/dev/zero of=/xenimages/test01/disk1.img oflag=direct bs=1M seek=8191 count=1
There are pros and cons of using sparse files. On one hand they only take as much disk as is actually used, on the other hand the file can become fragmented and you could run out of real disk if you overcommit space.
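You can see the sparse behaviour by comparing the apparent size of the file with its actual disk usage:
ls -lh /xenimages/test01/disk1.img    # apparent size: 8GB
du -h /xenimages/test01/disk1.img     # blocks actually used, only about 1MB for a fresh sparse file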
Next up we’ll mount the install CD and export it over nfs so that xen can use it as a network install.
mkdir /tmp/centos52
mount /dev/hda /tmp/centos52 -o loop,ro

Just to check the mount went OK: ls /tmp/centos52 should show the files.
Now run the export:
exportfs *:/tmp/centos52
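To confirm the export is active you can list it back (an optional check):
showmount -e localhost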
Now we’ll create the xen config file for our new instance. The default location for xen config files is /etc/xen, so that’s where ours will go.
I’m going to call my VM test01, so I’ll create a file /etc/xen/test01 that contains the following initial configuration:
kernel = "/tmp/centos52/images/xen/vmlinuz"
ramdisk = "/tmp/centos52/images/xen/initrd.img"
name = "test01"
memory = "256"
## disk = [ 'tap:aio:/xenimages/test01/disk1.img,xvda,w', ]
disk = [ 'file:/xenimages/test01/disk1.img,xvda,w', ]
vif = [ 'bridge=eth0', ]
vcpus=1
on_reboot = "destroy"
on_crash = "destroy"
Note that if you are installing from a machine other than your Xen machine, you will need to NFS-mount the install disk in order for the above config to kick off the installer, e.g.
mount IP address:/tmp/centos52 /tmp/centos52
So, let's boot the new instance and start the installer.
xm create test01
After a moment or two the console should return with something like "Started domain test01".
Now let's connect to the console and proceed with the install:
xm console test01
Or if you prefer the previous two commands can be combined into one: xm create test01 -c.
From here on you should work through the standard text mode installer.
The points to note are:
  • For installation image select "NFS image". Then in the later NFS panel enter your PC's IP address for the server name and /tmp/centos52 (or wherever you mounted the CD) as the directory.
  • I also specified a manual IP address for my VM. I selected my router's IP for the gateway and DNS server, so that I can access the internet from the VM later.
  • The hard drive is named xvda, as specified in the config file. This will need to be partitioned and formatted by the installer.
The rest of the install is fairly straight forward. If in doubt just go with the defaults, although it’s probably a good idea to set a manual IP address in your subnet range so that you can easily ssh onto the VM.
Note that to release the console from your VM, hold down Ctrl and press the ] key.
When the install is complete the new domain will need to be shut down (you'll be prompted to 'restart' by the installer; this will in fact shut down the VM because we set the on_reboot option to destroy), and then the xen config file must be modified to allow the new VM to boot.
So, edit the config file that we created earlier and comment out the kernel and ramdisk lines. You should also change the on_crash and on_reboot actions to restart.
So the edited config file now looks like this:
## kernel = "/tmp/centos52/images/xen/vmlinuz"
## ramdisk = "/tmp/centos52/images/xen/initrd.img"
name = "test01"
memory = "256"
## disk = [ 'tap:aio:/xenimages/test01/disk1.img,xvda,w', ]
disk = [ 'file:/xenimages/test01/disk1.img,xvda,w', ]
vif = [ 'bridge=eth0', ]
vcpus=1
on_reboot = "restart"
on_crash = "restart"
Finally we can boot the new VM instance:
xm create test01 -c
and log in as root. You should also be able to ssh onto it from your network.