2012年3月31日 星期六

OpenStack Swift with swauth, Auth subsystem prep failed: 400 Bad Request

I was messing around with OpenStack Swift recently and found that there are quite a lot of issues to sort out before it works.

One issue I came across today was the inability to run swauth-prep to set up the account environment on my swift proxy (version 1.4.6).

root@proxy ~/s3-curl# swauth-prep  -A https://127.0.0.1:8080/auth/v1 -K swauthkey
Auth subsystem prep failed: 400 Bad Request


I kept googling but didn't find much. While stuck, I kept reading their online forum and came across an article saying the correct URL for swauth-prep should be /auth/ instead of /auth/v1. So I changed the admin URL:

root@proxy:~/s3-curl# swauth-prep  -A https://127.0.0.1:8080/auth/ -K swauthkey
root@proxy:~/s3-curl# swauth-list -A https://127.0.0.1:8080/auth/ -K swauthkey
{"accounts": [{"name": "system"}]}


And then voila, it worked.


** BTW ** If that doesn't fix it, chances are it is either a Swift storage permission problem or a connectivity problem between the swauth node and the Swift storage nodes.

The syslog errors below would indicate a connection issue between the swauth node and the Swift storage nodes:

May  3 00:07:47 swift-proxy swift ERROR with Account server 192.168.0.11:6002/vdb1 re: Trying to PUT /AUTH_.auth: Connection refused
May  3 00:07:47 swift-proxy swift ERROR with Account server 192.168.0.12:6002/vdb1 re: Trying to PUT /AUTH_.auth: Connection refused
May  3 00:07:47 swift-proxy swift ERROR with Account server 192.168.0.13:6002/vdb1 re: Trying to PUT /AUTH_.auth: Connection refused
May  3 00:07:47 swift-proxy swift Account PUT returning 503 for (503, 503, 503) (txn: tx79ff210a2f5249989abecb8319a64126)

In the above scenario, I checked my configuration and found a discrepancy in the port settings between the ring files and the Swift node configuration files (object-server.conf / account-server.conf / container-server.conf).
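
For instance, one way to spot such a mismatch is to compare the bind_port in each node's configuration against the port recorded in the ring. A minimal sketch (the file path and values here are illustrative, not my actual config):

```shell
# Sample account-server.conf stanza (illustrative values):
cat > /tmp/account-server.conf <<'EOF'
[DEFAULT]
bind_ip = 192.168.0.11
bind_port = 6002
EOF

# The port this node actually listens on:
grep bind_port /tmp/account-server.conf

# On the proxy, compare against the port stored in the ring:
#   swift-ring-builder /etc/swift/account.builder    # lists ip:port per device
```

If the two ports disagree, the proxy keeps trying to PUT to a port nobody is listening on, which is exactly the "Connection refused" seen above.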

2012年3月30日 星期五

AWS storage gateway port 80 connection refused

I was testing the AWS Storage Gateway service on my ESXi host. So what is AWS Storage Gateway? Basically it is an AWS service that lives behind your on-premise firewall: an AWS-customized VM runs on-premise on ESXi and allows remote storage management from the AWS console. The VM presents storage via iSCSI, allowing remote read-write access.

I was following the documentation but ran into trouble while trying to activate my Storage Gateway VM. Activation requires access on port 80, but I saw nothing listening there. Running nmap against the VM showed nothing coming up:


root@localhost:~$ nmap -sT 1.2.3.4


Starting Nmap 5.00 ( http://nmap.org ) at 2012-03-29 16:13 HKT
Interesting ports on 1.2.3.4:
Not shown: 996 filtered ports
PORT     STATE  SERVICE
22/tcp   closed ssh
80/tcp   closed http
631/tcp  closed ipp
3260/tcp closed iscsi

I got into the storage VM as root (prior to that, I booted the VM into single-user mode and reset the password; for details see here) and found that no services were running on those ports.

With further checking, it seems this has to do with the time service on the VM. The storage VM needs a very accurate clock, or its services will not come up. As suggested in the setup guide, I enabled the NTP service on ESXi and then enabled the "Synchronize guest time with host" option on the VM, followed by a VM restart. I hoped that would work, but somehow it didn't; the time still hadn't caught up.

In the end I set the time manually by logging into the Storage Gateway VM. After another VM restart, port 80 came up and I could activate the gateway.
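
For reference, the in-guest steps were roughly the following (the timestamp and NTP server are placeholders, not the exact values I used):

```shell
# Check the guest clock against real UTC time first
date -u

# If it is off, set it by hand (example timestamp only):
#   date -s "30 Mar 2012 08:15:00 UTC"

# Or, if an NTP server is reachable from the guest:
#   ntpdate pool.ntp.org
```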

2012年3月29日 星期四

Reset root password for AWS Storage Gateway VM

I am not sure whether this violates the terms and conditions of using the AWS Storage Gateway VM. What I know is that the default sguser is pretty restrictive and not much troubleshooting can be done from that account (I kept seeing activation failures when trying to activate the Storage Gateway service, and it looked like port 80 was rejecting the requests for some reason).

So, as a last resort, I tried to "break in" to the VM by booting it into single-user mode so that I could reset the password there (thank God it is still a fairly standard Linux).

To reset the password, hit "e" at the GRUB menu and use the up/down cursor keys to scroll to the line starting with "kernel". Once on that line, press "e", append "single" to the end of the line, then press Enter and "b" to boot. The VM will boot into single-user mode, and you can simply type "passwd root" to reset the root password.
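
For example, the edited kernel line would look something like this (the kernel version and root device are illustrative; only the trailing "single" is what gets added):

```
kernel /boot/vmlinuz-2.6.x ro root=/dev/sda1 single
```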

So what is the point of resetting the root password? You can get into the VM and do further troubleshooting. In my case, I got into the AWS Storage Gateway VM and confirmed that port 80 wasn't coming up at all, which let me check on other things. (With further investigation, it turned out I hadn't configured proper NTP options for the VM and the ESXi host.)

2012年3月28日 星期三

Tricks to avoid DHCP client to override /etc/resolv.conf

I have a laptop running Ubuntu that uses a DHCP client to connect to the Internet in a couple of locations. Most DHCP servers love to bundle DNS addresses with the DHCP lease, which is convenient if you don't run your own DNS. For various reasons I have to use my own DNS server for lookups, and this DHCP kindness gets annoying because I have to update resolv.conf every time I get a new lease.

I thought of a trick: lock /etc/resolv.conf against overwriting with chattr +i, i.e.

[root@ ~]# lsattr /etc/resolv.conf
------------- /etc/resolv.conf
[root@ ~]# chattr +i /etc/resolv.conf
[root@ ~]# lsattr /etc/resolv.conf
----i-------- /etc/resolv.conf


After that, /etc/resolv.conf is locked against writing until the attribute is removed. I tested it by appending some junk to the file, and the write was refused even as root:

[root@ ~]# echo some-crap >> /etc/resolv.conf
-bash: /etc/resolv.conf: Permission denied


Now I can keep using my own DNS with no need to update the file all the time.

Falling back is easy:

[root@ ~]# chattr -i /etc/resolv.conf
[root@ ~]# lsattr /etc/resolv.conf
------------- /etc/resolv.conf

2012年3月24日 星期六

SSH multiplexing tricks

In my job I always make a lot of connections to the same servers. I use SSH keys to authenticate, so I basically never have to type a password, but it is a real pain to wait for each new SSH connection to come up. There are also servers that accept only password authentication, not keys, and typing the password on every new connection is painful. In situations like these, we can use SSH multiplexing.

What makes SSH multiplexing good is that it allows sharing of multiple sessions over a single network connection. From the ssh_config man page:

ControlMaster

      Enables the sharing of multiple sessions over a single network connection.  When set to “yes” ssh will listen for connections on a control socket specified using the ControlPath argument.  Additional sessions can connect to this socket using the same ControlPath with ControlMaster set to “no” (the default). These sessions will try to reuse the master instance’s network connection rather than initiating new ones, but will fall back to connecting normally if the control socket does not exist, or is not listening.

Simply put, with multiplexing enabled, the first connection to a server acts as the control (master) session. All new connections after that go through the master's network connection (via a local UNIX socket), which skips all the hassle: connection negotiation, password / key exchange, and so on.

To turn on SSH multiplexing:

Create the directory to hold the sockets:

$ mkdir -p ~/.ssh/connections
$ chmod 700 ~/.ssh/connections

And then add this to your ~/.ssh/config:

Host *
ControlMaster auto
ControlPath ~/.ssh/connections/%r@%h:%p


Here %r stands for the user name, %h for the hostname, and %p for the SSH port.
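
Once this is in place, you can inspect or tear down a shared master with ssh's -O option (shown here against a hypothetical host; these obviously need a live connection):

```
$ ssh -O check user@server.example.com    # is a master connection active?
$ ssh -O exit user@server.example.com     # close the shared master
```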

2012年3月20日 星期二

Turn on bash history timestamp

Bash history time-stamping is nothing new, but it is not enabled by default in most Linux distros, so not many people know about it. I find it really useful, especially when you want to trace back user activity on a server, so it is well worth a minute to turn it on.

To turn it on, add the parameter below either to the system-wide bashrc (i.e. /etc/bashrc on CentOS/Fedora/RHEL, /etc/bash.bashrc on Ubuntu/Debian) or to your own ~/.bashrc:

export HISTTIMEFORMAT="%d.%m.%y %T "


Once you have added the parameter, log out and back in, issue some commands, and then check the history; you will see the timestamps added:

# history
...
  127  21.05.11 22:10:56 uptime
  128  21.05.11 22:11:12 su - admin
  129  21.05.11 22:11:15 exit
  130  21.05.11 22:12:19 su - admin
  132  21.05.11 22:12:33 exit
  133  21.05.11 22:13:56 ps auxww
  134  21.05.11 22:15:43 pwd
  135  21.05.11 22:17:56 ls
  136  21.05.11 22:20:56 sudo su -
  137  21.05.11 22:23:56 exit
...

So the magic here is the HISTTIMEFORMAT variable, which uses strftime format codes. "%d.%m.%y %T" means:

%d - Day
%m - Month
%y - Year
%T - Time
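
You can preview these strftime codes with date(1), which accepts the same format string:

```shell
date "+%d.%m.%y %T"
# prints the current time, e.g. 21.05.11 22:10:56
```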

To learn more, you can always type "help history", "man bash" and "man strftime".

2012年3月15日 星期四

virtio_nic or e1000 on Linux KVM?

If you are a RHEV / QEMU-KVM user, you have probably launched some virtual machines, and yes, it is just a couple of clicks. But did you ever notice why we have to fill in OS Type and Version? After all, a VM can still boot even if you don't fill in the exact OS type and version.

The reason is that OS Type and Version determine whether para-virtualized devices will be used for that particular VM. The Linux para-virtualized (virtio) drivers are available in kernel 2.6.25 or later, so if your guest runs an older kernel, chances are you cannot take advantage of them. For example, if you have a VM based on CentOS 4 (with the old 2.6.9 kernel) and you select CentOS 4 as the OS Type, the VM will be configured with an emulated block device (/dev/sdX) and an emulated Intel e1000 network card, and these devices perform no better than their para-virtualized counterparts. However, if you define a VM as Ubuntu 10.04 (kernel 2.6.32, which does ship with virtio drivers) and actually load Ubuntu 10.04 onto it, the guest recognizes the para-virtualized devices and uses the virtio drivers. In that case your VM is presented with a para-virtualized block device (/dev/vdX) and a virtio network card (virtio_net), taking full advantage of para-virtualization.
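
Under the hood this comes down to the device model in the guest definition. A hypothetical libvirt XML fragment for a virtio disk plus virtio NIC might look like this (the image path and bridge name are illustrative):

```
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/guest.raw'/>
  <target dev='vda' bus='virtio'/>
</disk>
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```

Swapping bus='virtio' for an emulated bus (and model virtio for e1000) is what you get when the OS Type suggests the guest kernel cannot handle virtio.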

2012年3月8日 星期四

Migrating AWS CentOS Linux images to KVM environment.

For some reason, I wanted to port an AWS EC2 Linux image from the AWS environment to an in-house KVM environment.

So this time I chose an EBS-backed CentOS template, centos-x64-6.0-core (ami-03559b6a). I think the mechanism below would work the same on an instance-store template as well.

I quickly launched an instance from ami-03559b6a and then logged in to have a look.



[root@ip-10-195-81-211 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvde1            6.0G 1010M  4.7G  18% /
tmpfs                 296M     0  296M   0% /dev/shm

[root@ip-10-195-81-211 ~]# fdisk -l

Disk /dev/xvde: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaae7682d

    Device Boot      Start         End      Blocks   Id  System
/dev/xvde1   *           1         783     6289416   83  Linux


The instance runs on a Xen host, and the root disk is 6 GB. In order to clone the root disk, I created an EBS volume slightly larger than 6 GB; in this example I attached a 10 GB EBS volume.



[root@ip-10-195-81-211 ~]# fdisk -l

Disk /dev/xvde: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaae7682d

    Device Boot      Start         End      Blocks   Id  System
/dev/xvde1   *           1         783     6289416   83  Linux

Disk /dev/xvdj: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdj doesn't contain a valid partition table


So now /dev/xvde is the root disk, /dev/xvde1 is the partition in use, and /dev/xvdj is the newly attached disk, with no partition created on it yet.

We will now create a file system on /dev/xvdj and then create a raw disk image on top of that.


[root@ip-10-195-81-211 ~]# mkfs.ext4 /dev/xvdj
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@ip-10-195-81-211 ~]# mkdir /target
[root@ip-10-195-81-211 ~]# mount /dev/xvdj /target


The dd run will take some time, depending on the image size. In this example we create a disk image 6442 MB in size, matching the size of /dev/xvde shown by fdisk (dd's bs=1M counts MiB, so the image comes out slightly larger than the source disk, which is fine):

[root@ip-10-195-81-211 ~]# time dd if=/dev/zero of=/target/diskimage.raw bs=1M count=6442
6442+0 records in
6442+0 records out
6754926592 bytes (6.8 GB) copied, 178.33 s, 37.9 MB/s

real    2m58.363s
user    0m0.010s
sys    0m9.586s
[root@ip-10-195-81-211 ~]# ls -la /target/diskimage.raw
-rw-r--r-- 1 root root 6754926592 Apr 12 02:33 /target/diskimage.raw





Then use losetup to set up the disk image as the loopback device /dev/loop0:

[root@ip-10-195-81-211 ~]# losetup -fv /target/diskimage.raw
Loop device is /dev/loop0


Use dd to clone /dev/xvde to /dev/loop0; this will take some time as well:

[root@ip-10-195-81-211 ~]# time dd if=/dev/xvde of=/dev/loop0
12582912+0 records in
12582912+0 records out
6442450944 bytes (6.4 GB) copied, 705.782 s, 9.1 MB/s

real    11m45.844s
user    0m9.865s
sys    1m24.088s


Once the dd copy is done, we mount the disk image's partition, chroot into it, and replace the kernel. One trick here: we need to know the partition offset so that the mount starts at the correct disk location. To find the offset, look back at the fdisk output for /dev/xvde:

[root@ip-10-195-81-211 ~]# fdisk -l /dev/xvde

Disk /dev/xvde: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaae7682d

    Device Boot      Start         End      Blocks   Id  System
/dev/xvde1   *           1         783     6289416   83  Linux

From the output above, the partition starts at the first track, i.e. sector 63 (63 sectors per track, 512 bytes per sector), so our offset is 63*512 bytes. We mount the partition at /clone:

[root@ip-10-195-81-211 ~]# mkdir /clone
[root@ip-10-195-81-211 ~]# mount -o loop,offset=$((63*512)) /target/diskimage.raw /clone


Compare / and /clone; they should be the same (slightly different in practice, as there are some runtime files on /):

[root@ip-10-195-81-211 ~]# df -h / /clone
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvde1            6.0G 1010M  4.7G  18% /
/dev/loop1            6.0G 1001M  4.7G  18% /clone


As the original kernel of the VM instance is a Xen para-virtualized kernel, we have to install a generic kernel or it won't be able to boot in our KVM environment. So we chroot into the cloned partition /clone and install the stock kernel:

[root@ip-10-195-81-211 ~]# chroot /clone
[root@ip-10-195-81-211 /]# yum -y install kernel kernel-devel


Once the kernel is installed, we may need to edit /boot/grub/menu.lst (a.k.a. grub.conf) so that it boots the correct kernel. In this example, I added the first title block below to define the new kernel (you will have to check the path of your new kernel):

[root@ip-10-195-81-211 /]# cat /boot/grub/menu.lst
#===========
default=0
timeout=2
title CentOS (6.0 - /boot/vmlinuz-2.6.32-220.7.1.el6.x86_64)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.32-220.7.1.el6.x86_64 ro root=/dev/vda1
    initrd /boot/initramfs-2.6.32-220.7.1.el6.x86_64.img
title CentOS (6.0 - vmlinuz-2.6.32-131.17.1.el6.x86_64)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.32-131.17.1.el6.x86_64 ro root=/dev/xvde1
    initrd /boot/initramfs-2.6.32-131.17.1.el6.x86_64.img
#===========


Please note that the original root device above was /dev/xvde1, while the new one will be /dev/vda1 (if you make use of the para-virtualized driver on the KVM hypervisor) or /dev/sda1 (if you run it with the generic emulated driver).

We also need to replace /dev/xvde with /dev/vda in /etc/fstab:

[root@ip-10-195-81-211 /]# grep "/dev/xvde" /etc/fstab
/dev/xvde1    /        ext4    defaults    1 1
[root@ip-10-195-81-211 /]# sed -i s%/dev/xvde%/dev/vda% /etc/fstab
[root@ip-10-195-81-211 /]# grep "/dev/vda" /etc/fstab
/dev/vda1    /        ext4    defaults    1 1


Once the preparation above is done, we can exit the chroot environment, unmount the virtual partition, and detach the loopback device:

[root@ip-10-195-81-211 ~]# umount /clone
[root@ip-10-195-81-211 ~]# losetup -d /dev/loop0 


To load the disk image on our standalone KVM server, we have to scp / sftp it from the EC2 VM to our on-premise server. As the image itself is rather large, we may want to compress it with gzip (or another compression tool) before sending it over.
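
For example (the file name and destination host are illustrative):

```shell
# Compress the raw image before transfer
gzip -9 /target/diskimage.raw            # produces /target/diskimage.raw.gz

# Then copy it over and unpack on the KVM host:
#   scp /target/diskimage.raw.gz user@kvm-host:/var/lib/libvirt/images/
#   gunzip /var/lib/libvirt/images/diskimage.raw.gz
```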


Because the image is a raw image, we need to define the disk as raw when creating the VM. The following screenshots show what is needed to do this via virt-manager on the KVM hypervisor node.

1. Launch a new node with existing disk image



2. Point to the path of the raw image. One thing to be careful about here: we have to choose the proper OS Type (Linux) and Version (RHEL/CentOS 6), or the devices won't be presented as para-virtualized devices.


3. Define the memory and CPU cores assigned to this VM; nothing special here.

4. The proper architecture has to be defined (x86_64 or i686).
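
If you prefer the command line to virt-manager, the same definition can be sketched with virt-install (all names, paths and sizes here are hypothetical):

```
# virt-install --name centos6-clone --ram 1024 --vcpus 1 \
#     --disk path=/var/lib/libvirt/images/diskimage.raw,format=raw,bus=virtio \
#     --os-variant rhel6 --arch x86_64 --import
```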


Once the VM is configured, we can start it, but it basically won't boot due to a GRUB failure. We need to boot from a rescue CD / CentOS install CD and run the commands below to mount the volume and chroot in before installing GRUB:

# mkdir /vm; mount /dev/vda1 /vm
# mount -o bind /dev /vm/dev
# mount -o bind /sys /vm/sys
# chroot /vm

Then run grub-install and the associated grub shell setup:

# grub-install /dev/vda
# grub
grub> device (hd0) /dev/vda
device (hd0) /dev/vda
grub> root (hd0,0)
root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
grub> setup (hd0)
 setup (hd0)
  Checking if "/boot/grub/stage1" exists... yes
  Checking if "/boot/grub/stage2" exists... yes
  Checking if "/boot/grub/e2fs_stage1_5" exists... yes
  Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 26 sectors are embedded. succeeded.
  Running "install /boot/grub/stage1 (hd0) (hd0)1+26 p (hd0,0)/boot/grub/stage2 /boot/grub/grub.conf "... succeeded
Done.

grub> quit
#

After the tasks above, the VM is ready for a reboot and should boot properly. Before rebooting, though, you may also want to reset the root password so that you can get into the VM once it is up.

2012年3月4日 星期日

Parse json text via shell command

Let's say you have a long JSON output that is pretty hard to read:

$ cat json-example.txt
{"email":"user@domain.com","display_name":"Email user","name":"user","key_id":"ajkazGNjgjj33122k22","key_secret":"shjajajaj8AzgjgnG3ooppkajzn3","id":"a81290zkgjgnzmGffs13AgkzzpGgz"}



You can then use the Python json module to decode the JSON output and make it readable:


$ cat json-example.txt  | python -mjson.tool
{
    "display_name": "Email user",
    "email": "user@domain.com",
    "id": "a81290zkgjgnzmGffs13AgkzzpGgz",
    "key_id": "ajkazGNjgjj33122k22",
    "key_secret": "shjajajaj8AzgjgnG3ooppkajzn3",
    "name": "user"
}
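
The same module also works for pulling out a single field from the shell, e.g. (with the JSON inlined here so the example is self-contained):

```shell
echo '{"email":"user@domain.com","name":"user"}' \
  | python -c 'import json,sys; print(json.load(sys.stdin)["email"])'
# user@domain.com
```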