Linux: Building Basic High Availability Infrastructure for Web Servers

English! This is how I am going to post this topic. Do not ask me why.
I know, it’s been a very long time since the last server-related topic, but anyway, here goes a new one! This is going to be the ultimate “the-recipe-is-the-name“.
Firstly, our logical topology; here is what we’re going to build. Hurray! Six servers (Ubuntu 12.04 Server i386) divided into two load balancers, two PHP web servers, a “storage” server and a dedicated database server.

What is the main objective of this entire topology?

Redundancy and Load Sharing! Imagine a scenario where your single web server is receiving millions and millions of HTTP requests per second, the CPU load is going insane, as well as the memory usage, when suddenly, “crash!”, the server dies without saying good-bye (probably because of some weird hardware outage that you certainly won’t have time to debug). Well, this simple scheme might lead you into a brand new world of possibilities (one more time: SIMPLE SCHEME). In order to keep your job, I suggest you don’t implement “this” in production environments without the proper tuning.

What is this going to solve?

Hardware Failures! We are going to have redundant hardware all over the place; if one node goes down, another will be immediately ready to take its place. Also, by using load sharing schemes, this is going to solve our High Usage! issue. Balancing the load among all the servers on our “farm” will reduce the number of HTTP requests per server (but you already figured that out, right?).
Let’s set it up! Firstly, we’re not going to use a domain scheme (let’s keep it simple), so make sure the /etc/hosts file looks like the example below on every machine:
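The addresses of the two web servers (10.0.0.4 and 10.0.0.5) and the VIP (10.0.0.1) come from the configuration files further down; every other address and the database host name are assumptions, so adapt them to your own plan:

user@anyhost:~$ cat /etc/hosts
 127.0.0.1       localhost
 10.0.0.1        vip            # the virtual IP address (assumed name)
 10.0.0.2        haproxy-01     # assumed address
 10.0.0.3        haproxy-02     # assumed address
 10.0.0.4        php-01
 10.0.0.5        php-02
 10.0.0.6        storage        # assumed address
 10.0.0.7        db-01          # assumed name and address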
Secondly, here is what you’re going to install (on a per-server basis), sketched below:
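The load balancer and storage packages are the ones actually used in this post; the web and database packages are only my assumptions (any PHP-capable web server and any SQL database would do):

root@haproxy-01:~# apt-get install heartbeat haproxy          # same on haproxy-02
root@php-01:~# apt-get install apache2 php5                   # assumed web stack; same on php-02
root@storage:~# apt-get install nfs-common nfs-kernel-server
root@db-01:~# apt-get install mysql-server                    # assumed database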
That’s right. We are going to use a very handsome application named “heartbeat” as our first level of redundancy; it will be responsible for keeping our HAProxy (our “Reliable, High Performance TCP/HTTP Load Balancer”) redundant. Don’t you agree that having only ONE load balancer means having a single point of failure? Well, redundancy is all about eliminating single points of failure, and that’s why we are not deploying a single HAProxy.
So… how does this “heartbeat” work? It is very simple, indeed. Look at our topology again. We’re setting up heartbeat to monitor two servers, one of them called “master” and the other one “slave”, and they constantly exchange status information through eth1 (10.100.100.0/30), a dedicated point-to-point network. Basically, they keep telling each other: “I’m up! I’m up! I’m up!…”, and whenever a node stops hearing this, it will act as the new master by adopting what we call the “virtual IP address” (or VIP). By the way, the VIP is the address that your users are going to use whenever they want to reach the web server.
Setting up heartbeat is easy. After the installation you will need to create the files below on both servers (deadtime is how many seconds of silence heartbeat waits before declaring its peer dead, warntime is when it starts logging late heartbeats, initdead is an extra grace period at boot time, and bcast tells heartbeat to exchange its messages through eth1):

haproxy@haproxy-01:~$ cat /etc/ha.d/ha.cf
 watchdog            /dev/watchdog
 logfile             /var/log/ha-log
 debugfile           /var/log/ha-debug
 deadtime            5
 warntime            3
 initdead            15
 bcast               eth1
 auto_failback       on

 node                haproxy-01
 node                haproxy-02
 
haproxy@haproxy-02:~$ cat /etc/ha.d/ha.cf
 watchdog            /dev/watchdog
 logfile             /var/log/ha-log
 debugfile           /var/log/ha-debug
 deadtime            5
 warntime            3
 initdead            15
 bcast               eth1
 auto_failback       on

 node                haproxy-01
 node                haproxy-02
 
haproxy@haproxy-01:~$ cat /etc/ha.d/authkeys
 auth 3
 1 crc
 2 sha1 password
 3 md5 password
 
haproxy@haproxy-02:~$ cat /etc/ha.d/authkeys
 auth 3
 1 crc
 2 sha1 password
 3 md5 password
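One detail the listings above don’t show: heartbeat refuses to start unless authkeys is readable by root only, so tighten its permissions on both machines (and, obviously, replace “password” with a real secret):

root@haproxy-01:~# chmod 600 /etc/ha.d/authkeys
root@haproxy-02:~# chmod 600 /etc/ha.d/authkeys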
 
haproxy@haproxy-01:~$ cat /etc/ha.d/haresources
haproxy-01 \
       IPaddr::10.0.0.1/24/eth0
haproxy@haproxy-02:~$ cat /etc/ha.d/haresources
haproxy-01 \
        IPaddr::10.0.0.1/24/eth0
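Notice that both haresources files name haproxy-01: this file must be identical on both nodes, and the first field is simply the node that owns the resource (our VIP) by default, which is also why auto_failback matters. With the three files in place, start heartbeat on both machines and check that the VIP showed up on the master; a minimal sketch of that check:

root@haproxy-01:~# /etc/init.d/heartbeat start
root@haproxy-02:~# /etc/init.d/heartbeat start
root@haproxy-01:~# ip addr show eth0 | grep 10.0.0.1    # the VIP should be listed here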
 
This must be enough, right? Yes, for heartbeat. Now we need to set up HAProxy to share the load between our web servers. After installing HAProxy, you will notice a single configuration file (/etc/haproxy/haproxy.cfg), to which you should append the following (balance roundrobin alternates requests between the two servers, option forwardfor adds the client address to an X-Forwarded-For header, and the cookie lines keep each user’s session pinned to one server):

haproxy@haproxy-01:~$ cat /etc/haproxy/haproxy.cfg
#(…) Append this to the end of the file:
listen     php-cluster         10.0.0.1:80
                 mode http
                 stats enable
                 stats auth admin:password
                 balance roundrobin
                 option httpclose
                 option forwardfor
                 cookie JSESSIONID prefix
                 server php-01 10.0.0.4:80 cookie A check
                 server php-02 10.0.0.5:80 cookie B check
 
haproxy@haproxy-02:~$ cat /etc/haproxy/haproxy.cfg
#(…) Append this to the end of the file:
listen     php-cluster         10.0.0.1:80
                 mode http
                 stats enable
                 stats auth admin:password
                 balance roundrobin
                 option httpclose
                 option forwardfor
                 cookie JSESSIONID prefix
                 server php-01 10.0.0.4:80 cookie A check
                 server php-02 10.0.0.5:80 cookie B check
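Since we enabled the statistics page, HAProxy itself can tell you how the cluster is doing. In the 1.4 series shipped with Ubuntu 12.04, “stats enable” publishes the report under the default URI /haproxy?stats, so once everything is up, something like this should work:

user@client:~$ curl -u admin:password http://10.0.0.1/haproxy?stats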
 
Now try to run “/etc/init.d/haproxy start“. Did you notice anything? No? Of course not! You won’t be able to start the haproxy process until you set up this file on both servers:
haproxy@haproxy-01:~$ cat /etc/default/haproxy
ENABLED=1
haproxy@haproxy-02:~$ cat /etc/default/haproxy
ENABLED=1
 
Now try “/etc/init.d/haproxy start” again.
Hm… you might find an error message (cannot bind socket) on haproxy-02. This happens because, by default, the kernel refuses to bind a socket to an IP address that is not configured on the machine. At this moment the virtual IP address belongs to the master node (haproxy-01), which is why haproxy-01 started HAProxy without any problems. The solution lies here:

root@haproxy-01:~# echo "net.ipv4.ip_nonlocal_bind=1" >> /etc/sysctl.conf 
root@haproxy-01:~# sysctl -p 
root@haproxy-02:~# echo "net.ipv4.ip_nonlocal_bind=1" >> /etc/sysctl.conf 
root@haproxy-02:~# sysctl -p
 
Did you get it? Both balancers can now bind the VIP, whichever of them currently owns it.
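Before moving on, it is worth testing the failover. A minimal sketch of such a test, run from any machine that can reach the VIP:

user@client:~$ ping 10.0.0.1                       # keep this running in another terminal
root@haproxy-01:~# /etc/init.d/heartbeat stop      # simulate a failure on the master
 # within ~5 seconds (our deadtime) haproxy-02 should adopt the VIP
 # and the ping should keep being answered
root@haproxy-01:~# /etc/init.d/heartbeat start     # auto_failback is on: the VIP returns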
Here’s a little piece of information about NFS: it is awesome! I’m not focusing on NFS here; I’m just setting up a really basic “storage” configuration. In real-world environments you’re going to have a much larger disk array running the weirdest RAID scheme you can think of, delivering huge I/O throughput. Summing everything up, here’s what I did:

root@storage:~# mkdir /var/www
root@storage:~# chmod 777 /var/www
root@storage:~# chown nobody:nogroup /var/www
root@storage:~# apt-get install nfs-common nfs-kernel-server
 
Sharing /var/www with the whole 10.0.0.0/24 network (with the proper read and write permissions):

root@storage:~# cat /etc/exports
/var/www           10.0.0.0/24(rw)
root@storage:~# ls /var/www
index.html
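Two notes before mounting: the NFS server only re-reads /etc/exports when told to, and the web servers need the NFS client utilities (the nfs-common package) in order to mount anything:

root@storage:~# exportfs -ra        # (re)export everything listed in /etc/exports
root@php-01:~# apt-get install nfs-common
root@php-02:~# apt-get install nfs-common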
 
Finally, let’s mount /var/www on our web servers:
root@php-01:~# mount -t nfs storage:/var/www /var/www
root@php-02:~# mount -t nfs storage:/var/www /var/www
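A mount made by hand disappears after a reboot; if you want it to persist, the usual approach is an /etc/fstab entry on both web servers, along these lines:

root@php-01:~# echo "storage:/var/www /var/www nfs defaults 0 0" >> /etc/fstab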
 
Afterwards, you can check it by running df -h on each web server; you should see storage:/var/www mounted on /var/www.
Now get yourself a client and try to access http://10.0.0.1/. Awesome, right?
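Since both web servers serve the same NFS-backed content, the round-robin balancing is invisible in the page itself; one way to watch it is to fire a few requests and then check the per-server counters on the stats page (the ;csv suffix is HAProxy’s CSV export of the same report):

user@client:~$ for i in $(seq 1 10); do curl -s -o /dev/null http://10.0.0.1/; done
user@client:~$ curl -su admin:password 'http://10.0.0.1/haproxy?stats;csv' | grep php-

See you next post.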
