Monday, November 21, 2011

Disable X windows on Ubuntu 10.04

In CentOS / RHEL, people change the initdefault entry in /etc/inittab to control whether X11 should be started. Ubuntu is totally different: it uses Upstart, so there is no such inittab file. To stop Ubuntu from starting X11 (i.e. to behave more or less the same as runlevel 3 in CentOS), one has to modify /etc/init/gdm.conf and edit the line that controls which runlevels stop the gdm service.

Here is the line

# stop on runlevel [016]  ## Original line: GDM stops only on runlevels 0, 1 and 6, so it still starts on runlevel 2.

stop on runlevel [0126]  ## Edited line: GDM now also stops on runlevel 2. Ubuntu's default runlevel is 2, so GDM won't start on the default runlevel.

After that, a system reboot will bring your Ubuntu installation up in text mode.
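The edit can also be scripted with sed. This is just a sketch, assuming the stock line in gdm.conf is `stop on runlevel [016]`; verify the exact line in your file before running anything like this:

```shell
# One-liner version of the edit (keeps a .bak copy of gdm.conf):
#   sudo sed -i.bak 's/stop on runlevel \[016\]/stop on runlevel [0126]/' /etc/init/gdm.conf
# The substitution itself, demonstrated on the line in question:
echo 'stop on runlevel [016]' \
  | sed 's/stop on runlevel \[016\]/stop on runlevel [0126]/'
# prints: stop on runlevel [0126]
```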



Sunday, November 20, 2011

Recover corrupted LVM root partition by fsck

So you have a Linux machine installed with LVM, and the root partition sits in that LVM pool. One day the machine crashes and the root partition is corrupted. It fails to boot properly and asks for the root password to run fsck. Sadly, you lost the root password, so you need a rescue CD (or installation CD) to boot and repair the disk.

Now your machine is booted from the rescue CD and you find that you just can't run fsck against an LVM partition directly.

root@test:/# fdisk -l

Disk /dev/vda: 21.5 GB, 21474836480 bytes
16 heads, 63 sectors/track, 41610 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00014ef8

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3         389      194560   83  Linux
/dev/vda2             391       41609    20773888   8e  Linux LVM

So the rescue CD by default just won't automatically activate the LVM volume group (and its underlying logical volumes). What you need to do is activate the LVM volumes manually, and then run fsck on them.

### Run lvm from rescue CD
bash-4.1# lvm

### This will scan and list the PV
lvm> pvscan 
   PV /dev/vda2   VG volgroup01   lvm2 [19.78GiB/ 0     free]
   Total: 1 [19.78 GiB] / in use: 1 [19.78 GiB] / in no VG: 0 [0    ]

### This will scan and list the VG
lvm> vgscan
   Reading all physical volumes. This may take a while...
   Found volume group "volgroup01" using metadata type lvm2

### This will list and scan the LV (the meat is here)
lvm> lvscan
   inactive                 '/dev/volgroup01/root'    [11.78 GiB] inherit
   inactive                 '/dev/volgroup01/swap'  [8.00 GiB] inherit

### And then activate the root LV with lvchange (still inside the lvm shell)
lvm> lvchange -ay /dev/volgroup01/root

### So quit the lvm shell
lvm> exit

### now time to run fsck
bash-4.1# fsck -y /dev/volgroup01/root
Adding dirhash hint to filesystem.
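Alternatively, you can skip the interactive lvm shell. This is a sketch using the same volume names as the example above; vgchange -ay activates every logical volume in all detected volume groups in one shot:

```shell
# Activate all LVs at once, then fsck the root LV
# (the LV path is from this example; adjust for your own volume group)
vgchange -ay
fsck -y /dev/volgroup01/root
```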


Thursday, November 17, 2011

Install QEMU-KVM in RHEL / CentOS

Here is the command to install the QEMU-KVM packages in RHEL / CentOS,

yum -y install qemu-kvm qemu-kvm-tools libvirt

or these commands will install the package groups for the associated virtualization stuff.

yum -y groupinstall "Virtualization"
yum -y groupinstall "Virtualization Client"
yum -y groupinstall "Virtualization Platform"
yum -y groupinstall "Virtualization Tools"
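After installing, it is worth checking that the host can actually do KVM. A quick sanity check (the SysV service name below is how it looks on RHEL / CentOS 6; systemd hosts use systemctl instead):

```shell
# Make sure libvirtd is running
service libvirtd start

# The kvm modules should be loaded on a capable host
lsmod | grep kvm

# A non-zero count means the CPU advertises hardware virtualization extensions
grep -cE '(vmx|svm)' /proc/cpuinfo
```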

Monday, November 14, 2011

Bandwidth testing with iperf

To test bandwidth between 2 Linux servers, we can use a tool called iperf. Installing iperf from the repository is easy; what we need to run is

[root@server ~]# yum -y install iperf

Or,

You may want to run apt-get install iperf on Debian or Ubuntu.

Once we have installed the package on both machines, we need to pick one node to run as the server.

[root@server ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------


Once the server is running, we can run iperf from the client machine to connect to it.
[root@client ~]# iperf -c 192.168.0.1
------------------------------------------------------------
Client connecting to 192.168.0.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.2 port 37476 connected with 192.168.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.38 GBytes  1.19 Gbits/sec

Now the reported bandwidth is 1.19 Gbits/sec, which is pretty much what I expect on my VM.

There are some other common options that can be used. For example, -d performs a bi-directional test, measuring source-to-target and target-to-source at the same time (use -r instead to test the two directions one after the other), and -n <bytes> sets the amount of data to transfer instead of running for a fixed time.
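A bi-directional run against the same server looks like this (same example addresses as above; the server side needs no extra flags):

```shell
# Measure both directions simultaneously from the client
iperf -d -c 192.168.0.1
```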

The example below sends approximately 1 GB (10^9 bytes) of data.

[root@client ~]# iperf -n 1000000000 -c 192.168.0.1 
------------------------------------------------------------
Client connecting to 192.168.0.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.2 port 39375 connected with 192.168.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 7.8 sec   954 MBytes  1.03 Gbits/sec
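As a sanity check on those numbers: iperf reports the transfer in binary MBytes (MiB) but the bandwidth in decimal Gbits, so 954 MBytes over 7.8 seconds should land near the reported 1.03 Gbits/sec. A quick shell calculation:

```shell
# 954 MiB transferred in 7.8 s, expressed in decimal Gbits/sec
bits=$((954 * 1024 * 1024 * 8))          # total bits transferred
gbps_x100=$((bits * 10 / 78 / 10000000)) # hundredths of Gbits/sec (7.8 s scaled to 78)
printf "%d.%02d Gbits/sec\n" $((gbps_x100 / 100)) $((gbps_x100 % 100))
# prints: 1.02 Gbits/sec (matches iperf's 1.03 within rounding)
```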