Wednesday, December 26, 2012

Nginx X-Forwarded-Protocol and X-Forwarded-For

I have a client with multiple Apache web servers sitting behind an Nginx load balancer. Both http and https requests are terminated on Nginx, and the requests are then proxied to the backend Apache web servers on port 80 (i.e. plain http).

From the backend web servers' perspective, all incoming traffic is masqueraded by Nginx: the web server only sees requests made by Nginx over plain http. My client was interested to know which protocol the original request used, whether plain http or SSL-encrypted https.

Nginx allows customizing the proxy headers via the proxy_set_header directive, so I added the parameters below to the location blocks so that extra headers are passed to the backend web servers.


Here is the reverse proxy configuration
    upstream backend_web_server_pool {
       server 1.2.3.4:80;

       server 1.2.3.5:80;
    }

Here is the http site configuration

server {
    listen       80;  # The http server

    ....     
   location / {
       proxy_pass http://backend_web_server_pool;
       proxy_set_header X-Forwarded-Protocol "http" ;
       proxy_set_header X-Forwarded-For $remote_addr;
    }

}




Here is the https site configuration
server {
    listen       443;  # The https server
    ssl                  on;
    ....

    location / {
       proxy_pass http://backend_web_server_pool;
       proxy_set_header X-Forwarded-Protocol "https" ;
       proxy_set_header X-Forwarded-For $remote_addr;
    }

}


The line proxy_set_header X-Forwarded-Protocol "http" passes a header named "X-Forwarded-Protocol" with the value "http" to the backend web server. You can replace the header value with any arbitrary string, e.g. "xyz123"; after all it is just a marker to let you know where the request came from. The same logic applies to the https block, except that the value is set to "https" to avoid confusion. The line proxy_set_header X-Forwarded-For $remote_addr passes the variable $remote_addr (i.e. the remote client IP address) to the backend web server.
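Instead of hard-coding the value in each server block, Nginx's built-in $scheme variable (it evaluates to "http" or "https" depending on how the request arrived) could be used, so a single location block covers both cases; a minimal sketch:

       proxy_set_header X-Forwarded-Protocol $scheme;
       proxy_set_header X-Forwarded-For $remote_addr;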

Once the above configuration is applied, restart Nginx and then move on to the log format configuration on the Apache web servers. We will modify the combined log format to capture the X-Forwarded-For and X-Forwarded-Protocol headers.

#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %{X-Forwarded-For}i %{X-Forwarded-Protocol}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined


I added %{X-Forwarded-For}i and %{X-Forwarded-Protocol}i to the combined log format, restarted Apache, and now the Apache log captures the client IP address and the original protocol.
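Since the "combined" nickname is redefined, any existing CustomLog directive that references combined will pick up the new fields after the Apache restart. A resulting access log entry would then look roughly like this (purely illustrative values, where 10.0.0.5 is the Nginx box and 203.0.113.7 the real client):

10.0.0.5 203.0.113.7 https - - [26/Dec/2012:10:15:32 +0800] "GET /index.html HTTP/1.1" 200 512 "-" "Mozilla/5.0"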


Friday, November 23, 2012

dnsmasq, assigning multiple DHCP subnets via a single interface

I was troubleshooting a dnsmasq DHCP server a few days back and noticed that special settings have to be applied if the dnsmasq DHCP server is configured to serve multiple subnets.

In my scenario the DHCP server has one nic, eth0. For some reason there are 2 subnets being served, 192.168.0.0/24 and 192.168.1.0/24. eth0 is configured as 192.168.0.1/24, while an additional IP 192.168.1.1 is added to eth0 as well.

DHCP clients that connect to eth0 of the DHCP server (through switches) can retrieve an IP from either subnet without issue; however, the gateway handed out for the 2nd subnet is acting weird. For example, here is the DHCP lease file I got from one of the DHCP clients.

# cat /var/lib/dhcp/pump.lease
Device eth0
    IP: 192.168.1.10
    Netmask: 255.255.255.0
    Broadcast: 192.168.1.255
    Network: 192.168.1.0
    Boot server 192.168.1.1
    Next server 192.168.1.1
    Gateways: 192.168.0.1
    Hostname: test-dhcp-client
    Domain: test.internal
    Renewal time: Fri Nov 23 17:38:13 2012
    Expiration time: Fri Nov 23 19:08:13 2012


Interestingly, this DHCP client is assigned an address from the 2nd DHCP subnet (i.e. 192.168.1.0/24), yet the gateway of the default subnet (192.168.0.0/24) is being handed out.

To fix this I tried a couple of approaches, and eventually configuring the router option through dhcp-range tagging worked best. Below is the configuration snippet that fixed the problem.

listen-address=192.168.0.1
listen-address=192.168.1.1
dhcp-range=set:1stblock,192.168.0.10,192.168.0.50,255.255.255.0
dhcp-range=set:2ndblock,192.168.1.10,192.168.1.50,255.255.255.0
dhcp-option=tag:1stblock,option:router,192.168.0.1
dhcp-option=tag:2ndblock,option:router,192.168.1.1

So in the above example, I assign a tag to each individual subnet (i.e. 1stblock -> 192.168.0.0/24, 2ndblock -> 192.168.1.0/24) and then assign the appropriate router IP to each associated tag.
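The same tagging approach can be extended to other per-subnet options. For instance (an illustrative sketch, assuming each subnet should also hand out its own DNS server), the dns-server option can be set per tag as well:

dhcp-option=tag:1stblock,option:dns-server,192.168.0.1
dhcp-option=tag:2ndblock,option:dns-server,192.168.1.1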

Thursday, October 25, 2012

QEMU/KVM atkbd.c: Unknown key pressed (translated set 2, code 0xa0 on isa0060/serio0)

So I am getting this annoying warning message when I press Enter on my KVM VM.


atkbd.c: Unknown key pressed (translated set 2, code 0xa0 on isa0060/serio0)
atkbd.c Use 'setkeycodes 00 <keycode>' to make it known.

So whenever I hit Enter, the annoying warning comes up. This can be easily cleaned up by using the showkey and setkeycodes commands.

From the tty console (it may not work under X), I executed showkey:

# showkey
kb mode was UNICODE
[ if you are trying this under X, it might not work
since the X server is also reading /dev/console ]

press any key (program terminates 10s after last keypress)...

Now press 'Enter' (or whatever key causes the annoying message), and it will show the keycode:

keycode  28 press

This means the Enter key is associated with keycode 28. Now you can use setkeycodes to map the unknown scancode to that keycode.

# setkeycodes 0x00 28

0x00 here is the scancode, as suggested by the kernel warning itself; if you are interested in what it means, there are other articles out there explaining it in more detail.

Right after you type the command, you should be safe from the annoying warning message.

Should you want this to persist across reboots, you may want to add it to /etc/rc.local:

# echo 'setkeycodes 0x00 28' >> /etc/rc.local


Thursday, August 16, 2012

Using gawk (GNU awk) to monitor /var/log/messages

I have a requirement to periodically scan /var/log/messages to catch a specific error message. The error is very time-sensitive and I want to be informed as soon as the message shows up in the log, ideally at a one-minute interval.

Usually people would suggest periodically running grep against the log, which is the simplest way to meet the need, but it doesn't work for my scenario. The problem with simply running grep is that it may also capture stale information. For example, I have the logs below in my /var/log/messages.

Aug 3 12:40:06 test-box kernel: [1645652.295156] CPU0: Temperature/speed normal
Aug 3 15:05:28 test-box kernel: [1645673.980296] CPU1: Temperature/speed normal

If I simply grep for the above log entries at a one-minute interval, I will be alerted at 12:40:06 and then repeatedly all the way until the end of the day (i.e. until /var/log/messages is rotated and cleaned up). Not to mention that starting from 15:05:28, the grep script will catch 2 matches from the log, the lines recorded at 12:40:06 and 15:05:28, when the only one I actually need is the one at 15:05:28.

After some research, it looked like gawk (GNU awk) would be the perfect tool to solve the problem. Eventually I came up with the gawk script below to read /var/log/messages.


$ cat gawk-script
BEGIN {
    # Declare the field separator.
    FS="[- :.]";


    
    # Generate the timestamp
    NOW=systime();

    
    # The time window that this script should read from.
    # Since I will run this script from cron every minute,
    # I set the window to 60 seconds.
    # For example, when the script runs at 12:05, only log entries between 12:04 and 12:05 are read.
    PAST=NOW-(60);


    
    # /var/log/messages lines start with a "Month day" timestamp (e.g. Mar 23) but carry no year.
    # Ideally we would parse the month name, but to keep things simple
    # I just take the current year and month from the system clock,
    # using %Y %m to represent the Year and Month attributes.
    format="%Y %m";

    # LOGMTH will be something like "2012 10" (Oct 2012).
    # This will be used later to generate the timestamp of each log entry.
    LOGMTH=strftime(format, NOW);
}
{

    # Read the line of the log and convert it to a timestamp
    LOGTIME=mktime(LOGMTH " " $2 " " $3 " " $4 " " $5);

    # The 3 lines below can be uncommented to debug the values being read.
    #{print $2, $3, $4, $5};
    #{print LOGMTH};
    #{print LOGTIME};

    
    # Print the line if its timestamp is newer than PAST (i.e. within the last minute).
    if(PAST<LOGTIME){print}
}


So here is a demo of the script.

$ cat /var/log/messages  ### totally 4 lines here
Oct 3 12:40:06 test-box kernel: [1645652.295156] CPU0: Temperature/speed normal
Oct 3 15:05:28 test-box kernel: [1645673.980296] CPU1: Temperature/speed normal
Oct 3 18:03:31 test-box kernel: [1640277.321612] usb 2-2: new high speed USB device using ehci_hcd and address 28
Oct 3 18:03:32 test-box kernel: [1640277.474483] usb 2-2: configuration #1 chosen from 4 choices


$ date  #Lets check the current time
Wed Oct 3 18:04:24 HKT 2012


$ gawk -f gawk-script /var/log/messages
Oct 3 18:03:31 test-box kernel: [1640277.321612] usb 2-2: new high speed USB device using ehci_hcd and address 28
Oct 3 18:03:32 test-box kernel: [1640277.474483] usb 2-2: configuration #1 chosen from 4 choices


So the script now only prints entries recorded within the last minute. You can then pipe the script output into grep to catch the string you are interested in.
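To tie it together, a crontab entry along these lines could run the check every minute (an illustrative sketch: the script path, the search string and the output file are placeholders for whatever fits your setup; the pipeline could equally end in a mail command if you prefer email alerts):

* * * * * gawk -f /root/gawk-script /var/log/messages | grep "Temperature/speed" >> /root/log-alerts.txt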

Tuesday, July 24, 2012

NATting TCP port 2000 behind a Cisco device

Recently I have been playing around with a storage appliance named NexentaStor. NexentaStor is based on OpenSolaris and makes use of ZFS, which looks pretty promising. It has a clean and easy-to-use GUI, supports quite a lot of storage protocols like NFS, CIFS and iSCSI, and even supports link aggregation at the network layer.

Everything went smoothly so far except one minor obstacle: its Web GUI listens on tcp port 2000 by default. Tcp port 2000 is a perfectly valid port, but somehow I was not able to access the Web GUI and connections to the port kept timing out from outside, even though the port works fine from the same subnet.

I started to suspect it had something to do with NAT, and indeed it did. My NexentaStor server sits behind a Cisco ASA firewall with NAT enabled, and it looks like the port 2000 traffic of NexentaStor clashes with Cisco SCCP (http://en.wikipedia.org/wiki/Skinny_Call_Control_Protocol) inspection, which also uses port 2000. Eventually I had to change the GUI to a non-2000 port.
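An alternative, if changing the appliance port is not desirable, might be to disable SCCP (skinny) inspection on the ASA instead; a rough sketch assuming the default global inspection policy names (verify against your own policy-map before applying, as this affects any real SCCP phones behind the firewall):

ciscoasa(config)# policy-map global_policy
ciscoasa(config-pmap)# class inspection_default
ciscoasa(config-pmap-c)# no inspect skinny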

Just an additional note: to reconfigure the NexentaStor Web GUI port, I had to get into the console and execute the command below.

nmc@myhost:/$ setup appliance init

Sunday, June 24, 2012

HA Active-Standby MySQL + Heartbeat 3.x + Corosync 1.x + Pacemaker 1.x on RHEL / CentOS - Section 6

Resources Management 
The examples below show how one can manage the HA resources between the nodes.

- Check Cluster status

[root@dbmaster-01 ~]# crm_mon -1
============
Last updated: Fri Dec 30 00:43:51 2011
Last change: Fri Dec 30 00:20:38 2011 via crm_attribute on dbmaster-01.localdomain
Stack: openais
Current DC: dbmaster-01.localdomain - partition with quorum
Version: 1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558
2 Nodes configured, 2 expected votes
5 Resources configured.
============

Online: [ dbmaster-01.localdomain dbmaster-02.localdomain ]

Resource Group: dbGroup
ClusterIP (ocf::heartbeat:IPaddr2): Started dbmaster-01.localdomain
DBstore (ocf::heartbeat:Filesystem): Started dbmaster-01.localdomain
MySQL (ocf::heartbeat:mysql): Started dbmaster-01.localdomain
Clone Set: pingclone [check-ext-conn]
Started: [ dbmaster-01.localdomain dbmaster-02.localdomain ]

- Put a node into standby (offline) mode
[root@dbmaster-01 ~]# crm node standby
[root@dbmaster-01 ~]# crm_mon -1
============
Last updated: Fri Dec 30 00:44:45 2011
Last change: Fri Dec 30 00:44:39 2011 via crm_attribute on dbmaster-01.localdomain
Stack: openais
Current DC: dbmaster-01.localdomain - partition with quorum
Version: 1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558
2 Nodes configured, 2 expected votes
5 Resources configured.
============

Node dbmaster-01.localdomain: standby
Online: [ dbmaster-02.localdomain ]

Resource Group: dbGroup
ClusterIP (ocf::heartbeat:IPaddr2): Started dbmaster-02.localdomain
DBstore (ocf::heartbeat:Filesystem): Started dbmaster-02.localdomain
MySQL (ocf::heartbeat:mysql): Started dbmaster-02.localdomain
Clone Set: pingclone [check-ext-conn]
Started: [ dbmaster-02.localdomain ]
Stopped: [ check-ext-conn:0 ]

- Put the node back online

[root@dbmaster-01 ~]# crm node online
[root@dbmaster-01 ~]# crm_mon -1
============
Last updated: Fri Dec 30 00:45:12 2011
Last change: Fri Dec 30 00:45:10 2011 via crm_attribute on dbmaster-01.localdomain
Stack: openais
Current DC: dbmaster-01.localdomain - partition with quorum
Version: 1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558
2 Nodes configured, 2 expected votes
5 Resources configured.
============

Online: [ dbmaster-01.localdomain dbmaster-02.localdomain ]

Resource Group: dbGroup
ClusterIP (ocf::heartbeat:IPaddr2): Started dbmaster-02.localdomain
DBstore (ocf::heartbeat:Filesystem): Started dbmaster-02.localdomain
MySQL (ocf::heartbeat:mysql): Started dbmaster-02.localdomain
Clone Set: pingclone [check-ext-conn]
Started: [ dbmaster-01.localdomain dbmaster-02.localdomain ]

- Migrate resources to neighbor node

[root@dbmaster-01 ~]# crm resource migrate dbGroup dbmaster-01.localdomain
[root@dbmaster-01 ~]# crm_mon -1
============
Last updated: Fri Dec 30 00:47:50 2011
Last change: Fri Dec 30 00:47:35 2011 via crm_resource on dbmaster-01.localdomain
Stack: openais
Current DC: dbmaster-01.localdomain - partition with quorum
Version: 1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558
2 Nodes configured, 2 expected votes
5 Resources configured.
============

Online: [ dbmaster-01.localdomain dbmaster-02.localdomain ]

Resource Group: dbGroup
ClusterIP (ocf::heartbeat:IPaddr2): Started dbmaster-01.localdomain
DBstore (ocf::heartbeat:Filesystem): Started dbmaster-01.localdomain
MySQL (ocf::heartbeat:mysql): Started dbmaster-01.localdomain
Clone Set: pingclone [check-ext-conn]
Started: [ dbmaster-01.localdomain dbmaster-02.localdomain ]
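Note that crm resource migrate works by adding a location constraint that pins the group to the chosen node. Once the move is done it is usually worth clearing that constraint so future failovers are not restricted, e.g.:

[root@dbmaster-01 ~]# crm resource unmigrate dbGroup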

- Start / Stop / Restart a specific resource on a node

[root@dbmaster-01 ~]# crm resource status MySQL
resource MySQL is running on: dbmaster-01.localdomain
...
[root@dbmaster-01 ~]# crm resource stop MySQL

....
[root@dbmaster-01 ~]# crm resource start MySQL

Thursday, June 21, 2012

HA Active-Standby MySQL + Heartbeat 3.x + Corosync 1.x + Pacemaker 1.x on RHEL / CentOS - Section 5

Cluster management

The corosync service is responsible for cluster membership and messaging, while pacemaker is responsible for managing resources on top of the clustering service. The startup sequence is therefore 1) corosync and then 2) pacemaker; the shutdown sequence is 1) pacemaker and then 2) corosync.

- Check service status
[root@dbmaster-02 ~]# /etc/init.d/corosync status
corosync (pid 23118) is running...
[root@dbmaster-02 ~]# /etc/init.d/pacemaker status
pacemakerd (pid 8714) is running...

- Stop pacemaker and corosync
 
If the node in question is the active one, resources will be failed over to the standby node. If it is the standby node, no changes will happen on the active node.
[root@dbmaster-02 ~]# /etc/init.d/pacemaker stop
Signaling Pacemaker Cluster Manager to terminate: [ OK ]
Waiting for cluster services to unload:....... [ OK ]
[root@dbmaster-02 ~]# /etc/init.d/corosync stop
Signaling Corosync Cluster Engine (corosync) to terminate: [ OK ]
Waiting for corosync services to unload:. [ OK ]

- Start corosync and pacemaker
If no node is running in the cluster yet, the first node to show up in the cluster becomes the active node. If there is already an active node in the cluster, the 2nd node automatically becomes the standby.

[root@dbmaster-02 ~]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
[root@dbmaster-02 ~]# /etc/init.d/pacemaker start
Starting Pacemaker Cluster Manager: [ OK ]
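Should corosync and pacemaker need to come up automatically after a reboot, they can be enabled in the usual RHEL/CentOS way (whether you actually want cluster services auto-starting after a crash is a policy decision for your environment):

[root@dbmaster-02 ~]# chkconfig corosync on
[root@dbmaster-02 ~]# chkconfig pacemaker on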


Monday, June 18, 2012

HA Active-Standby MySQL + Heartbeat 3.x + Corosync 1.x + Pacemaker 1.x on RHEL / CentOS - Section 4

- Configure Cluster Resources

Now that the cluster is up, we have to add cluster resources (e.g. VIP, MySQL DB store, MySQL DB service) on top of it. We only need to run this once on dbmaster-01, as the configuration changes will be written to the cluster configuration and replicated to dbmaster-02.

- Configure misc cluster parameter

[root@dbmaster-01 ~]# crm configure property stonith-enabled=false
[root@dbmaster-01 ~]# crm configure property no-quorum-policy=ignore
[root@dbmaster-01 ~]# crm configure property start-failure-is-fatal="false"
[root@dbmaster-01 ~]# crm configure rsc_defaults resource-stickiness=100

- Configure VIP

[root@dbmaster-01 ~]# crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 params ip=192.168.0.10 cidr_netmask=32 op monitor interval=10s meta migration-threshold="10"

- Configure MySQL DB store, i.e. the shared-disk
[root@dbmaster-01 ~]# crm configure primitive DBstore ocf:heartbeat:Filesystem params device="/dev/sdb" directory="/mysql" fstype="ext4" meta migration-threshold="10"
WARNING: DBstore: default timeout 20s for start is smaller than the advised 60
WARNING: DBstore: default timeout 20s for stop is smaller than the advised 60

- Configure MySQL services
[root@dbmaster-01 ~]# crm configure primitive MySQL ocf:heartbeat:mysql params binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" user="mysql" group="mysql" datadir="/mysql" log="/mysql/mysqld.log" \
> op start interval="0" timeout="60s" \
> op stop interval="0" timeout="60s" \
> op monitor interval="1min" timeout="60s" \
> meta migration-threshold="10" target-role="Started"
WARNING: MySQL: specified timeout 60s for start is smaller than the advised 120
WARNING: MySQL: specified timeout 60s for stop is smaller than the advised 120

- Configure all resources as a resource group for failover
If we don't configure them as a resource group, individual resources will fail over separately, so you may end up with the VIP on dbmaster-01 while the DB store is on dbmaster-02, which is something we don't want.

[root@dbmaster-01 ~]# crm configure group dbGroup ClusterIP DBstore MySQL

-  Define external ping monitoring and failover policy
This part of the configuration is a bit more complicated; basically it makes each node ping the gateway. In case the active node fails to ping the gateway (e.g. its external connectivity is down), all services will fail over to the standby node.

[root@dbmaster-01 ~]# crm configure primitive check-ext-conn ocf:pacemaker:ping \
> params host_list="192.168.0.1" multiplier="100" attempts="3" \
> op monitor interval="10s" timeout="5s" start stop \
> meta migration-threshold="10"
WARNING: check-ext-conn: default timeout 20s for start is smaller than the advised 60
WARNING: check-ext-conn: specified timeout 5s for monitor is smaller than the advised 60
[root@dbmaster-01 ~]# crm configure clone pingclone check-ext-conn meta globally-unique="false"
[root@dbmaster-01 ~]# crm
crm(live)# configure
crm(live)configure# location dbnode dbGroup \
> rule $id="dbnode-rule" pingd: defined pingd \
> rule $id="dbnode-rule-0" -inf: not_defined pingd or pingd lte 10 \
> rule $id="dbnode-rule-1" 20: uname eq dbmaster-01.localdomain \
> rule $id="dbnode-rule-2" 20: uname eq dbmaster-01
crm(live)configure# end
There are changes pending. Do you want to commit them? Yes
crm(live)configure# exit
bye

- Review all configuration details.
[root@dbmaster-01 ~]# crm configure show
node dbmaster-01.localdomain
node dbmaster-02.localdomain
primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip="192.168.0.10" cidr_netmask="32" \
op monitor interval="10s" \
meta migration-threshold="10"
primitive DBstore ocf:heartbeat:Filesystem \
params device="/dev/sdb" directory="/mysql" fstype="ext4" \
meta migration-threshold="10"
primitive MySQL ocf:heartbeat:mysql \
params binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" user="mysql" group="mysql" datadir="/mysql" log="/mysql/mysqld.log" \
op start interval="0" timeout="60s" \
op stop interval="0" timeout="60s" \
op monitor interval="1min" timeout="60s" \
meta migration-threshold="10" target-role="Started"
primitive check-ext-conn ocf:pacemaker:ping \
params host_list="192.168.0.1" multiplier="100" attempts="3" \
op monitor interval="10s" timeout="5s" start stop \
meta migration-threshold="10"
group dbGroup ClusterIP DBstore MySQL
clone pingclone check-ext-conn \
meta globally-unique="false"
location dbnode dbGroup \
rule $id="dbnode-rule" pingd: defined pingd \
rule $id="dbnode-rule-0" -inf: not_defined pingd or pingd lte 10 \
rule $id="dbnode-rule-1" 20: uname eq dbmaster-01.localdomain \
rule $id="dbnode-rule-2" 20: uname eq dbmaster-01
property $id="cib-bootstrap-options" \
dc-version="1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
start-failure-is-fatal="false"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
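Optionally, the resulting configuration can be sanity-checked with crm_verify, which reports errors without making any changes:

[root@dbmaster-01 ~]# crm_verify -L -V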

Saturday, June 16, 2012

HA Active-Standby MySQL + Heartbeat 3.x + Corosync 1.x + Pacemaker 1.x on RHEL / CentOS - Section 3

- Cluster software installation and configuration

Now it is time to proceed with the cluster software installation and configuration. If you are installing on CentOS, the packages can be fetched from the default yum repository, but if you are doing it on RHEL 6 you will probably need to add a CentOS repository.


The packages below have to be installed on both nodes.

[root@dbmaster-01 yum.repos.d]# yum -y install pacemaker corosync
Loaded plugins: product-id, rhnplugin, subscription-manager
Updating certificate-based repositories.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package corosync.x86_64 0:1.4.1-4.el6 will be installed
**********************
***** detail skipped ****
**********************
Installed:
corosync.x86_64 0:1.4.1-4.el6 pacemaker.x86_64 0:1.1.6-3.el6

Dependency Installed:
cifs-utils.x86_64 0:4.8.1-5.el6
cluster-glue.x86_64 0:1.0.5-2.el6
cluster-glue-libs.x86_64 0:1.0.5-2.el6
clusterlib.x86_64 0:3.0.12.1-23.el6
corosynclib.x86_64 0:1.4.1-4.el6
keyutils.x86_64 0:1.4-3.el6
libevent.x86_64 0:1.4.13-1.el6
libgssglue.x86_64 0:0.1-11.el6
libibverbs.x86_64 0:1.1.5-3.el6
librdmacm.x86_64 0:1.0.14.1-3.el6
libtalloc.x86_64 0:2.0.1-1.1.el6
libtirpc.x86_64 0:0.2.1-5.el6
nfs-utils.x86_64 1:1.2.3-15.el6
nfs-utils-lib.x86_64 0:1.1.5-4.el6
pacemaker-cli.x86_64 0:1.1.6-3.el6
pacemaker-cluster-libs.x86_64 0:1.1.6-3.el6
pacemaker-libs.x86_64 0:1.1.6-3.el6
resource-agents.x86_64 0:3.9.2-7.el6
rpcbind.x86_64 0:0.2.0-8.el6

Complete!

- Configure Corosync and Pacemaker
Create the configuration file /etc/corosync/corosync.conf. We only need to do this on dbmaster-01, as we will replicate the file over to dbmaster-02 later.

[root@dbmaster-01 ~]# export ais_port=5405
[root@dbmaster-01 ~]# export ais_mcast=226.94.1.1
[root@dbmaster-01 ~]# export ais_addr=`ip addr | grep "inet " | grep eth0 | awk '{print $4}' | sed s/255/0/`
[root@dbmaster-01 ~]# env | grep ais_
ais_mcast=226.94.1.1
ais_port=5405
ais_addr=192.168.0.255
[root@dbmaster-01 ~]# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
[root@dbmaster-01 ~]# sed -i.bak "s/.*mcastaddr:.*/mcastaddr:\ $ais_mcast/g" /etc/corosync/corosync.conf
[root@dbmaster-01 ~]# sed -i.bak "s/.*mcastport:.*/mcastport:\ $ais_port/g" /etc/corosync/corosync.conf
[root@dbmaster-01 ~]# sed -i.bak "s/.*bindnetaddr:.*/bindnetaddr:\ $ais_addr/g" /etc/corosync/corosync.conf
[root@dbmaster-01 ~]# cat <<-END >>/etc/corosync/service.d/pcmk
> service {
> # Load the Pacemaker Cluster Resource Manager
> name: pacemaker
> ver: 1
> }
> END

- Review the configuration file /etc/corosync/corosync.conf

[root@dbmaster-01 ~]# cd /etc/corosync
[root@dbmaster-01 corosync]# cat corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
version: 2
secauth: off
threads: 0
interface {
ringnumber: 0
bindnetaddr: 192.168.114.127
mcastaddr: 226.94.1.1
mcastport: 5405
ttl: 1
}
}

logging {
fileline: off
to_stderr: no
to_logfile: yes
to_syslog: yes
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: AMF
debug: off
}
}

amf {
mode: disabled
}

- Replicate the configuration to neighbor node (dbmaster-02) and start corosync service.

[root@dbmaster-01 corosync]# for f in /etc/corosync/corosync.conf /etc/corosync/service.d/pcmk /etc/hosts; do scp $f dbmaster-02:$f ; done
[root@dbmaster-01 corosync]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
[root@dbmaster-01 corosync]# grep -e "corosync.*network interface" -e "Corosync Cluster Engine" -e "Successfully read main configuration file" /var/log/messages
Dec 29 03:08:39 dbmaster-01 corosync[27718]: [MAIN ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Dec 29 03:08:39 dbmaster-01 corosync[27718]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Dec 29 03:08:39 dbmaster-01 corosync[27718]: [TOTEM ] The network interface [192.168.0.11] is now up.
[root@dbmaster-01 corosync]# grep TOTEM /var/log/messages
Dec 29 03:08:39 dbmaster-01 corosync[27718]: [TOTEM ] Initializing transport (UDP/IP Multicast).
Dec 29 03:08:39 dbmaster-01 corosync[27718]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Dec 29 03:08:39 dbmaster-01 corosync[27718]: [TOTEM ] The network interface [192.168.0.11] is now up.
Dec 29 03:08:39 dbmaster-01 corosync[27718]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
[root@dbmaster-01 ~]# ssh dbmaster-02 -- /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]

- Monitor the startup status of corosync
Make sure the pacemaker plugin is loaded successfully.

[root@dbmaster-01 corosync]# grep pcmk_startup /var/log/messages
Dec 29 03:08:39 dbmaster-01 corosync[27718]: [pcmk ] info: pcmk_startup: CRM: Initialized
Dec 29 03:08:39 dbmaster-01 corosync[27718]: [pcmk ] Logging: Initialized pcmk_startup
Dec 29 03:08:39 dbmaster-01 corosync[27718]: [pcmk ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Dec 29 03:08:39 dbmaster-01 corosync[27718]: [pcmk ] info: pcmk_startup: Service: 10
Dec 29 03:08:39 dbmaster-01 corosync[27718]: [pcmk ] info: pcmk_startup: Local hostname: dbmaster-01.localdomain

- Startup pacemaker on both nodes
[root@dbmaster-01 ~]# chown -R hacluster:haclient /var/log/cluster
[root@dbmaster-01 ~]# /etc/init.d/pacemaker start
Starting Pacemaker Cluster Manager: [ OK ]
[root@dbmaster-01 ~]# grep -e pacemakerd.*get_config_opt -e pacemakerd.*start_child -e "Starting Pacemaker" /var/log/messages
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31333]: info: get_config_opt: Found 'pacemaker' for option: name
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31333]: info: get_config_opt: Found '1' for option: ver
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31333]: info: get_config_opt: Found 'pacemaker' for option: name
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31333]: info: get_config_opt: Found '1' for option: ver
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31333]: info: get_config_opt: Defaulting to 'no' for option: use_logd
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31333]: info: get_config_opt: Defaulting to 'no' for option: use_mgmtd
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31333]: info: get_config_opt: Found 'off' for option: debug
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31333]: info: get_config_opt: Found 'yes' for option: to_logfile
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31333]: info: get_config_opt: Found '/var/log/cluster/corosync.log' for option: logfile
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31333]: info: get_config_opt: Found 'yes' for option: to_syslog
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31333]: info: get_config_opt: Defaulting to 'daemon' for option: syslog_facility
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31337]: info: main: Starting Pacemaker 1.1.6-3.el6 (Build: a02c0f19a00c1eb2527ad38f146ebc0834814558): generated-manpages agent-manpages ascii-docs publican-docs ncurses trace-logging cman corosync-quorum corosync
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31337]: info: start_child: Forked child 31341 for process stonith-ng
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31337]: info: start_child: Forked child 31342 for process cib
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31337]: info: start_child: Forked child 31343 for process lrmd
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31337]: info: start_child: Forked child 31344 for process attrd
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31337]: info: start_child: Forked child 31345 for process pengine
Dec 29 03:29:05 dbmaster-01 pacemakerd: [31337]: info: start_child: Forked child 31346 for process crmd

[root@dbmaster-01 ~]# ssh dbmaster-02 -- chown -R hacluster:haclient /var/log/cluster
[root@dbmaster-01 ~]# ssh dbmaster-02 -- /etc/init.d/pacemaker start
Starting Pacemaker Cluster Manager: [ OK ]

- Verify if heartbeat processes are started
[root@dbmaster-01 ~]# ps axf
PID TTY STAT TIME COMMAND
2 ? S 0:00 [kthreadd]
... lots of processes ....
27718 ? Ssl 0:00 corosync
31337 pts/0 S 0:00 pacemakerd
31341 ? Ss 0:00 \_ /usr/lib64/heartbeat/stonithd
31342 ? Ss 0:00 \_ /usr/lib64/heartbeat/cib
31343 ? Ss 0:00 \_ /usr/lib64/heartbeat/lrmd
31344 ? Ss 0:00 \_ /usr/lib64/heartbeat/attrd
31345 ? Ss 0:00 \_ /usr/lib64/heartbeat/pengine
31346 ? Ss 0:00 \_ /usr/lib64/heartbeat/crmd
[root@test-db1 corosync]# grep ERROR: /var/log/messages | grep -v unpack_resources
[root@test-db1 corosync]#

- Verify the HA cluster is running now

[root@dbmaster-01 ~]# crm_mon
============
Last updated: Thu Dec 29 05:19:52 2011
Last change: Thu Dec 29 05:07:59 2011 via crmd on dbmaster-01.localdomain
Stack: openais
Current DC: dbmaster-01.localdomain - partition with quorum
Version: 1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558
2 Nodes configured, 2 expected votes
0 Resources configured.
============

Online: [ dbmaster-01.localdomain dbmaster-02.localdomain ]

Thursday, June 14, 2012

HA Active-Standby MySQL + Heartbeat 3.x + Corosync 1.x + Pacemaker 1.x on RHEL / CentOS - Section 2

- OS Pre-configuration task
The items listed below have to be done on both master nodes.

- Disable SELinux and iptables
[root@dbmaster-01~]# getenforce
Disabled
[root@dbmaster-01~]# service iptables stop
[root@dbmaster-01~]# chkconfig iptables off
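getenforce already reports Disabled on this box. If SELinux were still enforcing, it could be disabled persistently in /etc/selinux/config (effective after a reboot) and switched to permissive for the current session, roughly like this:

[root@dbmaster-01~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@dbmaster-01~]# setenforce 0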


- Network configuration

Each master node has to be configured with 2 network interfaces: an external nic for external traffic (e.g. MySQL traffic, Internet traffic, etc.) and an internal heartbeat nic which connects the 2 master nodes.

In our scenario the master nodes have IP addresses like this:

dbmaster-01: IP 192.168.0.11/24, Gateway 192.168.0.1
dbmaster-02:  IP 192.168.0.12/24, Gateway 192.168.0.1

The VIP is 192.168.0.10/24 which will be floating between 2 nodes, depending on the server availability.

- Generate SSH-key and allow key-based authentication

[root@dbmaster-01~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
c7:50:ac:a4:09:fb:5d:f3:1e:13:ed:2e:21:d4:a7:f7 root@dbmaster-01
The key's randomart image is:
+--[ RSA 2048]----+
| .. |
| . ... |
| o +.. . . |
| . o .o+ o o |
| . .Sooo = |
| . ... * o |
| o * . |
| o . E|
| . |
+-----------------+
[root@dbmaster-01~]# ssh-copy-id -i .ssh/id_rsa.pub root@dbmaster-02 ##### replace dbmaster-02 with dbmaster-01 if you are doing it from dbmaster-02 to dbmaster-01
The authenticity of host 'dbmaster-02 (192.168.0.12)' can't be established.
RSA key fingerprint is 51:73:cb:48:c3:8e:9c:39:88:38:b8:a9:70:b8:fd:76.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'dbmaster-02' (RSA) to the list of known hosts.
root@dbmaster-02's password:
Now try logging into the machine, with "ssh 'root@dbmaster-02'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
[root@dbmaster-01 ~]# ssh dbmaster-02 date ##### replace dbmaster-02 with dbmaster-01 if you are doing it from dbmaster-02 to dbmaster-01
Tue Dec 27 22:08:44 EST 2011

- Configure ntpd to make sure time is in sync
[root@dbmaster-01 ~]# service ntpd stop
Shutting down ntpd: [ OK ]
[root@dbmaster-01 ~]# ntpdate ntp.asia.pool.ntp.org
27 Dec 22:11:27 ntpdate[1796]: adjust time server x.x.x.x offset 0.000983 sec
[root@dbmaster-01 ~]# service ntpd start
Starting ntpd: [ OK ]


- Add host entries to the hosts file to make sure both nodes can resolve each other by name
[root@dbmaster-01 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

192.168.0.11 dbmaster-01 dbmaster-01.localdomain
192.168.0.12 dbmaster-02 dbmaster-02.localdomain
[root@dbmaster-01 ~]# ping dbmaster-01
PING dbmaster-01 (192.168.0.11) 56(84) bytes of data.
64 bytes from dbmaster-01 (192.168.0.11): icmp_seq=1 ttl=64 time=0.017 ms
^C
--- dbmaster-01 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 588ms
rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms

[root@dbmaster-01 ~]# ping dbmaster-02
PING dbmaster-02 (192.168.0.12) 56(84) bytes of data.
64 bytes from dbmaster-02 (192.168.0.12): icmp_seq=1 ttl=64 time=0.017 ms
^C
--- dbmaster-02 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 588ms
rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms


- Format the shared-disk
The shared disk will be used to store the MySQL database and has to be formatted from one of the master nodes (either one). If you are using VMware ESXi, you can simply attach the disk to both VMs and it will probably be presented as a second disk (e.g. /dev/sdb). Please note that no parallel access is allowed, which means only one node may mount and write to the shared disk at any time, otherwise the filesystem will be corrupted. In our example we will assume /dev/sdb is our shared disk.

[root@dbmaster-01 ~]# mkfs.ext4 /dev/sdb #### /dev/sdb is the shared disk in this example

- Installing MySQL Packages.
This should be pretty straightforward, given that your server can access the Internet without any issue.
[root@dbmaster-01 ~]# yum -y install mysql-server mysql
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package mysql.x86_64 0:5.1.52-1.el6_0.1 will be installed
****************
**** skipped ****
****************
Dependency Installed:
perl-DBD-MySQL.x86_64 0:4.013-3.el6 perl-DBI.x86_64 0:1.609-4.el6

Complete!

- Configure MySQL
 Edit /etc/my.cnf as below.


[root@dbmaster-01 ~]# cat /etc/my.cnf
[mysqld]
datadir=/mysql
socket=/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
log-bin=mysql-bin  ## Just in case you want replication
binlog-format='ROW'  ## Just in case you want replication

[mysqld_safe]
log-error=/mysql/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

- Create the mount point and mount the shared disk
[root@dbmaster-01 ~]# mkdir /mysql
[root@dbmaster-01 ~]# mount /dev/sdb /mysql
[root@dbmaster-01 ~]# chown -R mysql:mysql /mysql

- Run the mysql_install_db script to install the base DB; make sure this is run only once.

[root@dbmaster-01 ~]# mysql_install_db
Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h test-db1.dpcloud.local password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /usr/mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/bin/mysqlbug script!

[root@dbmaster-01 ~]# chown -R mysql:mysql /mysql

- Remember to unmount the shared-disk /mysql after that.

[root@dbmaster-01 ~]# umount /mysql


Tuesday, June 12, 2012

HA Active-Standby MySQL + Heartbeat 3.x + Corosync 1.x + Pacemaker 1.x on RHEL / CentOS - Section 1

- The HA Cluster design.


This HA MySQL clustering configuration is based on 2 servers in an active-standby relationship. The diagram below explains the logical design of the setup.

Both HA master nodes are installed with the heartbeat packages and have an internal nic facing each other with a private IP. The heartbeat stack on each node checks the status of the remote node and takes over the "active node" role when the remote node is down. The active node is responsible for taking over the VIP (Virtual IP, the IP floating between the 2 HA masters) and the MySQL DB store, and for spawning the MySQL process to serve requests. If a replication slave is needed, a 3rd node can be added to the cluster as a replication slave and pointed at the VIP to fetch binary logs; the detailed procedure for a replication slave is out of the scope of this howto, but it should be pretty straightforward for those familiar with MySQL master-slave replication.

- Hardware and software requirement of both nodes
So here are the ingredients for the HA setup.
  • 2 servers with identical hardware configuration
  • Minimum requirement (at least 1G memory, 20G root disk)
  • Each node needs a pair of nics: a public nic and an internal heartbeat nic
  • The public nic is configured as public-facing (or at least connected to the Internet to allow package fetching)
  • The internal nics of both nodes are connected to each other via a crossover cable or on the same VLAN
  • One shared LUN (Logical Unit Number) visible to and mountable by both nodes. You can configure this using iSCSI (e.g. openfiler), a VMware shared disk, physical fibre-channel storage or a DRBD disk. In this example we use a pre-configured VMware ESX shared disk.
  • One shared Virtual IP for failover between the 2 nodes. This IP is on the same segment as the public nic's IP.
  • The OS is RHEL/CentOS 6 64-bit with a minimal installation. Additional software repositories have to be added so that we can fetch and install Heartbeat 3.x, Corosync 1.x and Pacemaker 1.1.x.

A good reference for improving your slides.

Note: it has been quite some time since I got this book through the O'Reilly Blogger Review Program, and I am only getting around to writing the review now.

Basically I am a technical geek with little idea of how a presentation should look. This book shows how one can create an interesting presentation; it is well written, with lots of examples demonstrating the ideas. It is definitely a good reference for anyone looking to improve their presentation skills.

Thursday, May 24, 2012

Allowing ddclient on vyatta to push NAT outside IP to DDNS provider

I have a vyatta vpn appliance sitting behind NAT and need to use dynamic DNS to push its public IP to a dynamic DNS provider like no-ip.com. So I ran the suggested commands mentioned in their doc:

vyatta@vyatta# set service dns dynamic interface eth0 service dyndns host-name myvyattatestbox.no-ip.org
[edit]
vyatta@vyatta# set service dns dynamic interface eth0 service dyndns server dynupdate.no-ip.com
[edit]
vyatta@vyatta# set service dns dynamic interface eth0 service dyndns login myusername
[edit]
vyatta@vyatta# set service dns dynamic interface eth0 service dyndns password mypassword
[edit]

vyatta@vyatta# commit
[edit]

However, it updated the provider with the internal IP of the nic instead of the NAT outside public IP address.

$ show dns dynamic status
interface    : eth0
ip address   : 192.168.0.80
host-name    : myvyattatestbox.no-ip.org
last update  : Wed May 11 04:07:20 2012
update-status: good


It turns out vyatta updates the provider with the IP bound to the interface, whereas I would expect it to use the NAT outside address. To make vyatta report the NAT outside address, we can apply a small tweak to /opt/vyatta/sbin/vyatta-dynamic-dns.pl: comment out the line (around line 97)

     #$output .= "use=if, if=$interface\n\n\n";

and add the following line right after it:

     $output .= "use=web, web=checkip.dyndns.com/, web-skip='IP Address: '\n";

With this change, ddclient on vyatta queries checkip.dyndns.com for the NAT outside IP and then uses the returned address to update the dynamic DNS provider.

$ show dns dynamic status
interface    : eth0
ip address   : 1.2.3.4
host-name    : myvyattatestbox.no-ip.org
last update  : Wed May 11 05:07:20 2012
update-status: good



Tuesday, May 15, 2012

Using kpartx to mount partition(s) from disk image.

Scenario:

You have a disk image which was just dumped via dd (e.g. dd if=/dev/hda of=/disk.img). There are 3 partitions on the source disk /dev/hda and they look like this:


[root@localhost /]# fdisk -l


Disk /dev/hda: 75.1GB, 75161927680 bytes
255 heads, 63 sectors/track, 9137 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes


   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          25      200781   83  Linux
/dev/hda2              26        1330    10482412+  82  Linux swap / Solaris
/dev/hda3            1331        9137    62709727+  83  Linux
 
Now, as long as the disk image /disk.img is a complete dump of /dev/hda, you should be able to mount the partitions within the image via a loopback device with the proper offset value, e.g.:

#### Create the mount point
[root@localhost /]# mkdir -p /chroot/boot

#### Mount the first partition with offset (Start * sectors/track * 512, i.e. 1*63*512)
[root@localhost /]# mount -o loop,offset=$((1*63*512)) /disk.img /chroot/boot
[root@localhost /]# mount | grep /chroot/boot
/disk.img on /chroot/boot type ext3 (rw,loop=/dev/loop1,offset=32256)

So you have successfully mounted the first partition (/dev/hda1) and then plan to mount the 3rd partition (/dev/hda3) on /chroot:

#### Try mounting the 3rd partition with offset 1331
[root@localhost /]# mount -o loop,offset=$((1331*63*512)) /disk.img /chroot
hfs: unable to find HFS+ superblock
mount: you must specify the filesystem type

Apparently mount didn't like that offset at all. The util-linux on my box is version 2.13, which claims to support offsets larger than 32 bits (I didn't dig any deeper into this specific matter though), but it didn't help. As I wanted a quick fix, I came across a tool called "kpartx", which is a swiss-army knife for mapping partitions within a disk image file. Here is a demo of how it works.

Solutions:

To list the partitions in a disk image; in this example kpartx sees 3 partitions in disk.img:
[root@localhost /]# kpartx -l /disk.img
loop0p1 : 0 401562 /dev/loop 63
loop0p2 : 0 20964825 /dev/loop 401625
loop0p3 : 0 21366450 /dev/loop 21366450

To activate these partitions, we can run kpartx with the -av option:
[root@localhost /]# kpartx -av /disk.img
add map loop0p1 : 0 401562 linear /dev/loop 63
add map loop0p2 : 0 20964825 linear /dev/loop 401625
add map loop0p3 : 0 21366450 linear /dev/loop 21366450

We can see the image is now attached as a loopback device and its partitions are presented via /dev/mapper:
[root@localhost /]# losetup -a
/dev/loop0: [0800]:49154 (/disk.img)
[root@localhost /]# ls /dev/mapper/loop0p*
/dev/mapper/loop0p1  /dev/mapper/loop0p2  /dev/mapper/loop0p3
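Optionally, before mounting, blkid can be used to confirm what filesystem each mapped partition holds (the output will vary with your image):

[root@localhost /]# blkid /dev/mapper/loop0p1 /dev/mapper/loop0p3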


Let's see if we can mount these loopback partitions.

[root@localhost /]# mount /dev/mapper/loop0p1 /chroot/boot
[root@localhost /]# mount /dev/mapper/loop0p3 /chroot
[root@localhost /]# mount | grep chroot
/dev/mapper/loop0p1 on /chroot/boot type ext3 (rw)
/dev/mapper/loop0p3 on /chroot type ext3 (rw)

So the partitions are mounted and the filesystems on them are recognized without any issue. Let's say we are finished with the operations on these partitions and now want to unmount and clean everything up.

[root@localhost /]# umount /chroot/boot /chroot
[root@localhost /]# kpartx -d /disk.img
[root@localhost /]# losetup -d /dev/loop0   ### only needed if losetup -a still lists the loop device

Pretty much it.

Monday, May 7, 2012

Reinstalling GRUB after upgrade from Ubuntu 9 to Ubuntu 10.04

So I just upgraded my Ubuntu 9.10 desktop (well past EOL for a long time) to a more recent release, Ubuntu 10.04 LTS Lucid Lynx. The upgrade itself was pretty smooth, except it took about 3 hours to download and install everything.

Unfortunately it showed something like this after the reboot:

GRUB loading.
error: the symbol 'grub_puts' not found

grub rescue>


I figured something had gone wrong with GRUB during the upgrade, so I grabbed an Ubuntu 10.04 Desktop ISO (make sure it is the Desktop ISO instead of the Server ISO, so that it can boot into Live CD mode) and booted it up to perform a system rescue.

Once the Live CD booted, I mounted the root partition under /mnt:

/dev/sda1 on /mnt type ext4 (rw)

I tried to chroot into /mnt and run grub-install from there; no dice.

root@ubuntu:/# grub-install  --force /dev/sda
/usr/sbin/grub-probe: error: cannot find a device for /boot/grub (is /dev mounted?).
No path or device is specified.
Try `/usr/sbin/grub-probe --help' for more information.
Auto-detection of a filesystem module failed.
Please specify the module with the option `--modules' explicitly.

So apparently the GRUB installation on the disk was broken in some way. I quit the chroot and simply ran grub-install from the Live CD, pointing it at the mounted root:

root@ubuntu:~# grub-install --root-directory=/mnt/ /dev/sda
Installation finished. No error reported.

After a reboot my machine booted without any issue.
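As a side note, the earlier chroot attempt failed with "(is /dev mounted?)", which hints that the virtual filesystems were missing inside the chroot. A commonly used alternative (not tested here) is to bind-mount them before chrooting, roughly like this:

root@ubuntu:~# mount --bind /dev /mnt/dev
root@ubuntu:~# mount --bind /proc /mnt/proc
root@ubuntu:~# mount --bind /sys /mnt/sys
root@ubuntu:~# chroot /mnt grub-install /dev/sda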

Good site for python egg explanation.

Just so much better than perl's CPAN.

http://peak.telecommunity.com/DevCenter/PythonEggs

Wednesday, April 11, 2012

err: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find declared class ABC at /etc/puppet/manifests/nodes.pp:14 on node XYZ

This error is pretty confusing at first glance:

err: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find declared class ABC at /etc/puppet/manifests/nodes.pp:14 on node XYZ

Basically it is complaining that it cannot find the class ABC from the module of the same name. There are 2 things that can be checked here:

1. Make sure you have defined the modulepath parameter per the documentation:
http://docs.puppetlabs.com/puppet/2.7/reference/modules_fundamentals.html

2. Make sure you have properly defined the class name within ABC/manifests/init.pp (this has happened to me a couple of times, and it turned out there was a typo in init.pp).

For example, an init.pp like the one below will trigger the "could not find declared class" error, because the class name does not match the module name:


# cat /etc/puppet/modules/ABC/manifests/init.pp
class ABCtypo {    # typo: the class should be named "ABC" to match the module

  exec { "blah":
    command => "/bin/true",    # placeholder command, just for illustration
  }

}
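For completeness, a corrected init.pp simply names the class after the module (the exec body below is the same placeholder as above):

# cat /etc/puppet/modules/ABC/manifests/init.pp
class ABC {

  exec { "blah":
    command => "/bin/true",
  }

}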

Tuesday, April 10, 2012

How to add a puppet client to puppet master.

Before a new puppet client is allowed to fetch its manifests from the puppet server, the client's certificate has to be signed, and the command below does the job:

[root@puppetclient ~]# puppet agent --server puppetmaster --test --waitforcert 30



The above command runs puppet in agent mode and connects to the server puppetmaster (** the server name here has to match the server's certificate hostname, otherwise the client agent will fail with "err: Could not retrieve catalog from remote server: hostname was not match with the server certificate"). The "--test" option runs the agent in test mode, and --waitforcert 30 makes the puppet client wait 30 seconds for the server to sign the certificate. If 30 seconds pass and the client certificate is still not signed, the agent stops and exits.

On the server, the command below lists the certs pending approval:

root@puppetmaster:~# puppetca --list
  puppetclient

(CC:2B:2B:9D:4A:EF:3F:15:EF:60:C7:73:C9:18:FF:D1)


root@puppetmaster:~# puppetca --sign puppetclient
notice: Signed certificate request for puppetclient
notice: Removing file Puppet::SSL::CertificateRequest puppetclient at '/var/lib/puppet/ssl/ca/requests/puppetclient'
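Once the certificate is signed, running the agent on the client again (the same command as before, the wait is no longer needed) should retrieve and apply the catalog without complaint:

[root@puppetclient ~]# puppet agent --server puppetmaster --test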

AWS Storage Gateway: can it sit behind NAT?

Recently I was testing the AWS Storage Gateway. AWS Storage Gateway is a product combining an AWS console frontend with an ESXi-based Linux VM (the storage gateway VM). The AWS console is responsible for taking user instructions for the storage gateway and passing them to the storage VM (e.g. create an iSCSI target on the storage VM, take or restore a snapshot, etc.), while the ESXi-based storage VM is the actual host handling the instructions and the storage itself.

From observation, there are 2 ports listening on the storage gateway VM: TCP ports 80 and 3260. Port 80 is a Java instance responsible for serving API calls (the user submits a request via the AWS console or AWS API, and AWS passes the request to the API handler on port 80 of the storage VM). Port 3260 is the iSCSI target, responsible for handling all iSCSI requests.

I was asked if it is possible to run this storage gateway VM behind NAT, i.e. sitting the VM in a private network. With this objective, there are 2 possible scenarios:

 - sitting the VM behind NAT without any port mapping for ports 80 and 3260
 - sitting the VM behind NAT with port mapping enabled (i.e. exposing ports 80 and 3260 on the WAN outside and mapping them to ports 80 and 3260 on the VM's private IP address)


Unfortunately neither scenario works. In the first scenario, although the storage VM can be activated without problem, AWS is simply not able to communicate with the storage gateway VM, so none of the users' instructions are passed to it at all: no iSCSI target can be created, no snapshot can be created or restored, as all instructions stay pending and time out. In the second scenario, we can activate the storage gateway VM, create volumes, and create and restore snapshots, but the iSCSI target just does not work due to a restriction of the iSCSI implementation: the iSCSI initiator (the iSCSI guest) can discover the iSCSI target via port 3260, but it just cannot log in to the resources.
 
To explain why it won't work behind NAT, we have to go through the process of connecting to (mapping) an iSCSI target.

iSCSI connection establishment is a two-step process. The first step is for the iSCSI initiator to scan and discover the remote resources on the iSCSI target. During the test we could perform this step successfully, as we do see the iSCSI target during discovery. The issue shows up in the 2nd step, when we try to log in to the iSCSI resource. An iSCSI resource is presented as a combination of the on-host IP and the IQN, e.g. "10.1.1.1, iqn-name". The IP address here is the IP on the host, i.e. the NATted private address. When the iSCSI initiator (the guest VM) tries to map the remote resource, it connects to the IP address being presented, which is the private IP address. As that address is private, an initiator on the public network cannot reach it, and connecting to the target just times out.
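This can be seen from the initiator side with a plain sendtargets discovery against the gateway's public (NATted) address; the target comes back advertised with its private portal address, roughly like this (illustrative address and IQN):

# iscsiadm -m discovery -t sendtargets -p <gateway-public-ip>:3260
10.1.1.1:3260,1 iqn.example:volume1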

The only workaround we can think of right now is to create a VPN tunnel between the initiator and the storage gateway VM behind NAT. That way the AWS storage VM's iSCSI targets can be reached as if they were on the same LAN segment. However, this approach definitely adds extra overhead to the iSCSI I/O performance.

BTW, this storage appliance is designed to be accessed from on-premise devices (i.e. natively they should be on the same network segment), which means the iSCSI traffic should never really need to traverse the public network. Used that way, the appliance works just fine.