Create clean RHEL/CentOS 6 Template for VMware

Here is how I’m creating Templates for VMware

1.) Update the OS and install VMware Tools

# yum update

2.) Clean the yum cache

# yum clean all

3.) Remove SSH host keys

# rm -f /etc/ssh/ssh_host_*

4.) Remove MAC addresses and UUIDs from the network configuration files.

# sed -r -i '/^(HWADDR|UUID)=/d' /etc/sysconfig/network-scripts/ifcfg-eth*
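If you want to sanity-check that sed expression before touching the real ifcfg files, you can try it on a throwaway copy first. The MAC address and UUID below are made-up values for the rehearsal:

```shell
# Build a throwaway ifcfg file with hypothetical HWADDR/UUID values.
cat > /tmp/ifcfg-sample <<'EOF'
DEVICE=eth0
HWADDR=00:0C:29:AA:BB:CC
UUID=f5b4c1d2-0000-1111-2222-333344445555
ONBOOT=yes
BOOTPROTO=dhcp
EOF

# Same expression as above, run against the sample instead of the real files.
sed -r -i '/^(HWADDR|UUID)=/d' /tmp/ifcfg-sample

# Only DEVICE, ONBOOT and BOOTPROTO should survive.
cat /tmp/ifcfg-sample
```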

5.) Remove persistent device rules

# rm -f /etc/udev/rules.d/70-persistent-*

6.) Force a log rotation and clean the log files

# logrotate -f /etc/logrotate.conf
# rm -f /var/log/*-???????? /var/log/*.gz
# cat /dev/null > /var/log/audit/audit.log
# cat /dev/null > /var/log/wtmp
# cat /dev/null > /var/log/messages

7.) Clean /tmp and /var/tmp

# rm -rf /tmp/*
# rm -rf /var/tmp/*

8.) Un-configure the system if you're not using a customization specification

# touch /.unconfigured

9.) Remove the shell history

# rm -f ~/.bash_history
# unset HISTFILE

10.) Finally, power off the system.

# poweroff
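Most of the file-removal steps above can be collected into one cleanup script. This is only a sketch: the root prefix defaults to a scratch directory (with a bit of rehearsal scaffolding) so you can try it safely first; on the actual template VM you would set ROOT=/, drop the scaffolding, and uncomment the poweroff.

```shell
#!/bin/sh
# Template cleanup sketch. ROOT defaults to a scratch directory so the
# script can be rehearsed safely; set ROOT=/ only on the template VM itself.
ROOT="${ROOT:-/tmp/template-scratch}"

# --- rehearsal scaffolding (remove on the real VM): fake a minimal tree ---
mkdir -p "$ROOT/etc/ssh" "$ROOT/etc/udev/rules.d" "$ROOT/var/log" \
         "$ROOT/tmp" "$ROOT/var/tmp" "$ROOT/root"
touch "$ROOT/etc/ssh/ssh_host_rsa_key" \
      "$ROOT/etc/udev/rules.d/70-persistent-net.rules"
# --------------------------------------------------------------------------

rm -f "$ROOT"/etc/ssh/ssh_host_*                      # step 3: SSH host keys
rm -f "$ROOT"/etc/udev/rules.d/70-persistent-*        # step 5: udev rules
rm -f "$ROOT"/var/log/*-???????? "$ROOT"/var/log/*.gz # step 6: rotated logs
: > "$ROOT/var/log/wtmp"                              # step 6: truncate logs
: > "$ROOT/var/log/messages"
rm -rf "$ROOT"/tmp/* "$ROOT"/var/tmp/*                # step 7: temp files
touch "$ROOT/.unconfigured"                           # step 8
rm -f "$ROOT/root/.bash_history"                      # step 9
echo "cleanup rehearsal done under $ROOT"
# poweroff                                            # step 10: real VM only
```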

NIC Channel Bonding in Linux!

As you know, network channel bonding groups physical network interfaces into one virtual interface to provide redundancy and increased throughput. In Linux we have seven (7) bonding modes (mode 0 – mode 6) for network channel bonding. You can look up all the available bonding modes and pick the best one for your environment, but from my experience, most of the time we can go with mode 1, mode 4, or mode 6.

If you need only fault tolerance, you can use mode 1 as your bonding mode. If you need load balancing plus fault tolerance, go with mode 4 or mode 6, depending on your underlying physical network switch: if it supports and is configured for IEEE 802.3ad dynamic link aggregation, you can use mode 4; if not, simply go with mode 6. Also, most of our physical boxes have more than one network interface, and I'm pretty sure most of us use only one of them to connect to the network. That can become a single point of failure, so in any production deployment make sure to avoid single points of failure as much as possible. NIC channel bonding protects our Linux servers against network port failures, network cable failures, or NIC failures (if you have two physical network cards).

Okay cool! how to configure that?

If you are using Fedora/RHEL or a derived distribution like CentOS or Oracle Linux, the NIC channel bonding process is quite simple, but you have to edit a few files. First, to enable the bonding kernel module for your virtual network interface bond0, create a new file called “bonding.conf” in the “/etc/modprobe.d/” directory with the contents below. If you need more than one bonding interface, add a separate alias for each, as “alias bondX bonding”.

# cat /etc/modprobe.d/bonding.conf

alias bond0 bonding

Then create a new network interface configuration file for the “bond0” virtual interface as “ifcfg-bond0” in the “/etc/sysconfig/network-scripts” directory, where all the network-related settings need to be defined.

# cat /etc/sysconfig/network-scripts/ifcfg-bond0

BONDING_OPTS="mode=1 miimon=100"
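Only the BONDING_OPTS line survived above; a complete ifcfg-bond0 usually needs at least the following as well. The IP addressing here is a hypothetical example, so substitute your own:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=1 miimon=100"
BOOTPROTO=none
ONBOOT=yes
# Hypothetical addressing -- replace with your own network details.
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
```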

Okay! Now we have created the bond0 interface, and we need to configure the eth0 and eth1 network interfaces as slaves of the bond0 virtual interface. For that, edit the “ifcfg-eth0” and “ifcfg-eth1” files located in the same “/etc/sysconfig/network-scripts” directory.

# cat /etc/sysconfig/network-scripts/ifcfg-eth0

# cat /etc/sysconfig/network-scripts/ifcfg-eth1
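The contents of the slave files did not survive above; on a RHEL-style system they typically look like this, with MASTER and SLAVE being the important directives (ifcfg-eth1 is identical apart from DEVICE):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
# (ifcfg-eth1 is the same except DEVICE=eth1)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
```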


HAHA! Now you are almost done! Time to restart the network service or, if possible, reboot the server. After that you can see the newly configured bond0 virtual interface up and running.

# ifconfig

bond0 Link encap:Ethernet HWaddr 00:10:E0:22:50:70
inet addr: Bcast: Mask:
inet6 addr: fe80::210:e0ff:fe22:5070/64 Scope:Link
RX packets:6001623 errors:0 dropped:928245 overruns:0 frame:0
TX packets:2547959 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:853632915 (814.0 MiB) TX bytes:551819829 (526.2 MiB)

eth0 Link encap:Ethernet HWaddr 00:10:E0:22:50:70
RX packets:5073378 errors:0 dropped:0 overruns:0 frame:0
TX packets:2547964 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:768161611 (732.5 MiB) TX bytes:551820999 (526.2 MiB)

eth1 Link encap:Ethernet HWaddr 00:10:E0:22:50:70
RX packets:928245 errors:0 dropped:928245 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:85471304 (81.5 MiB) TX bytes:0 (0.0 b)

lo Link encap:Local Loopback
inet addr: Mask:
inet6 addr: ::1/128 Scope:Host
RX packets:24102239 errors:0 dropped:0 overruns:0 frame:0
TX packets:24102239 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:5913772444 (5.5 GiB) TX bytes:5913772444 (5.5 GiB)

If you're using Debian/Ubuntu or a derived distribution, the process is also simple but quite different from the Fedora/RHEL-based distributions. Here you need to install an additional package called “ifenslave” (attach and detach slave network devices to a bonding device) to support network bonding.

# apt-get install ifenslave

Now we have to enable the bonding kernel module for the Debian/Ubuntu-based system; for that, append the “bonding” keyword to the “/etc/modules” file.

# echo "bonding" >> /etc/modules

Now you can edit the network interface configuration file to configure the virtual bonding interface (bond0) and the slave “eth0” and “eth1” interfaces. Please note that in Fedora/RHEL-based systems each network interface has its own configuration file, while in Debian/Ubuntu-based systems there is only the “/etc/network/interfaces” file for all network interface configuration.

# cat /etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
bond-master bond0

auto eth1
iface eth1 inet manual
bond-master bond0

auto bond0
iface bond0 inet static
bond-mode 1
bond-miimon 100

Finally, restart the network and you're almost done. You can simply unplug a cable or “ifdown” one network interface and check your network connectivity. Feel free to comment if you have anything to clarify!
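A handy way to watch that failover happen is the kernel's bonding status file, /proc/net/bonding/bond0. The sketch below pulls out the bonding mode, the currently active slave, and the link status; it reads from any file path, so it can be tried against a saved copy of the status output (the sample content written here is hypothetical, in the format the bonding driver uses):

```shell
#!/bin/sh
# Show bonding mode, active slave and link status from a bonding status file.
# On a live system run it against /proc/net/bonding/bond0 instead.
STATUS="${1:-/tmp/bond0.sample}"

# Hypothetical sample so the script can be tried without a real bond.
[ -f "$STATUS" ] || cat > "$STATUS" <<'EOF'
Ethernet Channel Bonding Driver: v3.6.0
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up
EOF

grep 'Bonding Mode' "$STATUS"           # which mode is actually in effect
grep 'Currently Active Slave' "$STATUS" # changes when failover happens
grep -c 'MII Status: up' "$STATUS"      # bond + each slave with link up
```

Pull the cable on the active slave and the “Currently Active Slave” line should flip to the other interface.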

RHCS – Cluster from Scratch | Part 02

Okay, after a long time, I'm back. Let's get started with clustering 😀 We are going to implement a two (2) node active/passive cluster to provide a continuous web service to end users. In my scenario I'm using two virtual servers as cluster nodes and network-attached storage (NAS) as shared storage for both servers. There are also three (3) virtual networks: a public network, a private network (the cluster heartbeat network), and a storage network. All of these virtual servers, networks, and storage are deployed on a CentOS 6 environment using a KVM-based hypervisor; the virtual resources behave just like actual physical resources.


The figure above shows the initial architecture for our high-availability web service deployment. Each virtual server has three (3) network interface cards (NICs) to connect to the public, private, and storage networks. In addition to the servers, the network-attached storage has two (2) IP addresses; we use both addresses to configure multipath, to provide an efficient and reliable storage infrastructure for our cluster deployment. Here are the configuration details of the servers and storage.

Server 01
2.6 GHZ 2 vCPU’s with 1 GB RAM
Hostname :
NIC 01 : (Public Network)
NIC 02 : (Private Network)
NIC 03 : (Storage Network)

Server 02
2.6 GHZ 2 vCPU’s with 1 GB RAM
Hostname :
NIC 01 : (Public Network)
NIC 02 : (Private Network)
NIC 03 : (Storage Network)

Network Attach Storage (NAS)
NIC 01 : (Storage Network)
NIC 02 : (Storage Network)
LUN 01 : 10GB

Now we are ready to go. The next part is all about configuring the servers, so wait till then. See ya soon.

RHCS – Cluster from scratch | Part 01

In my previous post, I briefly touched on the Red Hat High Availability Add-On for Red Hat Enterprise Linux and how it eliminates single points of failure: if the active cluster member on which a high-availability service group is running becomes inoperative, the high-availability service will start up again (fail over) on another cluster node without interruption.

Okay, let's get started with high-availability clustering! But first of all, let's understand some basic concepts. If you want a clear and full understanding of all of these things, I highly recommend reading the Red Hat Enterprise Linux 6 Cluster Administration Guide. It's the best resource for RHEL6 HA clustering. You can also use CentOS or Oracle Linux as alternatives to follow this article series without using Red Hat Enterprise Linux.

Cluster Node
A cluster node is a server that is configured to be a cluster member. Normally, shared storage (SAN, NAS) is available to all cluster members.

Cluster Resources
Cluster resources are the things you are going to keep highly available, and all of these resources need to be available to all cluster nodes. All or some of these resources can be associated with an application you plan to keep highly available.

Cluster Service Group
A collection of related cluster resources that defines actions to be taken during a fail-over operation of the access point of resilient resources. These resilient resources include applications, data, and devices.

Fencing
Fencing is the method that cuts off a node's access to cluster resources (shared storage, etc.) if that node loses contact with the rest of the nodes in the cluster.

There are some more clustering concepts beyond these basic components, and we will learn most of them while deploying our high-availability web service. So wait till the next post 😉

RHCS – Cluster from scratch

According to Red Hat, the Red Hat Cluster Suite (RHCS) High Availability Add-On provides on-demand failover to make applications highly available. It delivers continuous availability of services by eliminating single points of failure. A cluster is a group of computers (called nodes or members) working together as a team to provide continued service when system components fail.

Assume we are running a critical database service on a standalone server. If a software or hardware component fails on that database server, administrative intervention is required and the database service will be unavailable until the crashed server is fixed. With clustering, the database service gets automatically restarted on another available node in the cluster without administrator intervention, and the service remains continuously available to end users. A cluster can be deployed as active/passive (one active node and one standby node) or active/active (both nodes are active) to suit our clustering needs.

In this “RHCS – Cluster from scratch” article series, I'm planning to explain in depth how to deploy a highly available web service as an active/passive cluster using the Red Hat High Availability Add-On on Red Hat Enterprise Linux 6.