[MySQL] HOWTO: Set up a MySQL Cluster for two servers (three servers required for true redundancy)
Posted by lhwork on 2006/12/26 9:40:25
Introduction
This HOWTO was designed for a classic setup of two servers behind a load balancer. The aim is true redundancy: either server can be unplugged and the site will remain up.
Notes:
You MUST have a third server as a management node, but this can be shut down after the cluster starts. Note, however, that I do not recommend shutting down the management server (see the extra notes at the bottom of this document for more information). You cannot run a MySQL Cluster with just two servers and have true redundancy.
Although it is possible to set the cluster up on two physical servers, you WILL NOT GET the ability to "kill" one server and have the cluster continue as normal. For this you need a third server running the management node.
I am going to talk about three servers:
mysql1.domain.com 192.168.0.1
mysql2.domain.com 192.168.0.2
mysql3.domain.com 192.168.0.3
Servers 1 and 2 will be the two that end up "clustered". This would
be perfect for two servers behind a loadbalancer or using round robin
DNS and is a good replacement for replication. Server 3 needs to have
only minor changes made to it and does NOT require a MySQL install. It
can be a low-end machine and can be carrying out other tasks.
STAGE 1: Install MySQL on the first two servers:
Complete the following steps on both mysql1 and mysql2:
cd /usr/local/
wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz/from/http://www.signal42.com/mirrors/mysql/
groupadd mysql
useradd -g mysql mysql
tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
ln -s mysql-max-4.1.9-pc-linux-gnu-i686 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root .
chown -R mysql data
chgrp -R mysql .
cp support-files/mysql.server /etc/rc.d/init.d/
chmod +x /etc/rc.d/init.d/mysql.server
chkconfig --add mysql.server
Do not start mysql yet.
STAGE 2: Install and configure the management server
You need the following files from the bin/ of the mysql directory: ndb_mgm and ndb_mgmd. Download the whole mysql-max tarball and extract them from the bin/ directory.
mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz/from/http://www.signal42.com/mirrors/mysql/
tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
cd mysql-max-4.1.9-pc-linux-gnu-i686
mv bin/ndb_mgm .
mv bin/ndb_mgmd .
chmod +x ndb_mg*
mv ndb_mg* /usr/bin/
cd
rm -rf /usr/src/mysql-mgm
You now need to set up the config file for this management node:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi [or emacs or any other editor] config.ini
Now, insert the following (changing the bits as indicated):
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
HostName=192.168.0.3            # the IP of THIS SERVER
# Storage Engines
[NDBD]
HostName=192.168.0.1            # the IP of the FIRST SERVER
DataDir=/var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.2            # the IP of the SECOND SERVER
DataDir=/var/lib/mysql-cluster
# 2 MySQL Clients
# I personally leave this blank to allow rapid changes of the mysql clients;
# you can enter the hostnames of the above two servers here. I suggest you don't.
[MYSQLD]
[MYSQLD]
Now, start the management server:
ndb_mgmd
This is the MySQL management server, not the management console. You should therefore not expect any output (we will start the console later).
STAGE 3: Configure the storage/SQL servers and start mysql
On each of the two storage/SQL servers (192.168.0.1 and 192.168.0.2) enter the following (changing the bits as appropriate):
vi /etc/my.cnf
Press i to enter insert mode and insert this on both servers (changing the IP address to the IP of the management server that you set up in stage 2):
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3  # the IP of the MANAGEMENT (THIRD) SERVER
[mysql_cluster]
ndb-connectstring=192.168.0.3  # the IP of the MANAGEMENT (THIRD) SERVER
Now, we make the data directory and start the storage engine:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
/usr/local/mysql/bin/ndbd --initial
/etc/rc.d/init.d/mysql.server start
If you have done one server, now go back to the start of stage 3 and repeat exactly the same procedure on the second server.
Note: you should ONLY use --initial if you are either starting from scratch or have changed the config.ini file on the management server.
STAGE 4: Check that it works
You can now return to the management server (mysql3) and enter the management console:
/usr/local/mysql/bin/ndb_mgm
Enter the command SHOW to see what is going on. A sample output looks like this:
[root@mysql3 mysql-cluster]# /usr/local/mysql/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.1  (Version: 4.1.9, Nodegroup: 0, Master)
id=3    @192.168.0.2  (Version: 4.1.9, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.3  (Version: 4.1.9)
[mysqld(API)]   2 node(s)
id=4    (Version: 4.1.9)
id=5    (Version: 4.1.9)
ndb_mgm>
If you see
not connected, accepting connect from 192.168.0.[1/2/3]
in the first or last two lines then you have a problem. Please email me with as much detail as you can give and I can try to find out where you have gone wrong and change this HOWTO to fix it.
If you are OK to here, it is time to test mysql. On either server mysql1 or mysql2, enter the following commands (note that we have no root password yet):
mysql
use test;
CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
INSERT INTO ctest () VALUES (1);
SELECT * FROM ctest;
You should see 1 row returned (with the value 1).
If this works, now go to the other server and run the same SELECT
and see what you get. Insert from that host and go back to host 1 and
see if it works. If it works then congratulations.
The final test is to kill one server to see what happens. If you have physical access to the machine, simply unplug its network cable and see if the other server keeps going fine (try the SELECT query). If you don't have physical access, do the following:
ps aux | grep ndbd
You get an output like this:
root   5578  0.0  0.3   6220   1964 ?      S 03:14 0:00 ndbd
root   5579  0.0 20.4 492072 102828 ?      R 03:14 0:04 ndbd
root  23532  0.0  0.1   3680    684 pts/1  S 07:59 0:00 grep ndbd
In this case ignore the "grep ndbd" entry (the last line) but kill the first two processes by issuing the command kill -9 pid pid:
kill -9 5578 5579
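If you would rather not copy PIDs by hand, something like the following should kill every ndbd process in one go (a convenience sketch; it assumes pkill is installed, which may not be the case on every older distribution):

```shell
# Kill all processes whose name is exactly "ndbd" (equivalent to the
# manual kill -9 above). pkill exits with status 1 if nothing matched.
pkill -9 -x ndbd
```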
Then try the select on the other server. While you are at it, run a SHOW command on the management node to see that the server has died. To restart it, just issue
ndbd
Note: no --initial!
Further notes about setup
I strongly recommend that you read all of this (and bookmark this page). It will almost certainly save you a lot of searching.
The Management Server
I strongly recommend that you do not stop the management server once it has started, for several reasons:
- The server takes hardly any resources.
- If a cluster falls over, you want to be able to just ssh in and type ndbd to start it. You don't want to have to start messing around with another server.
- If you want to take backups then you need the management server up.
- The cluster log is sent to the management server, so it is an important tool for checking what is going on in the cluster or what has happened since you last looked.
- All commands from the ndb_mgm client are sent to the management server, so there are no management commands without it.
- The management server is required in case of cluster reconfiguration (crashed server or network split). If it is not running, a "split-brain" scenario will occur. The management server's arbitration role is required in this type of setup to provide better fault tolerance.
However you are welcome to stop the server if you prefer.
Starting and stopping ndbd automatically on boot
To achieve this, do the following on both mysql1 and mysql2:
echo "ndbd" > /etc/rc.d/init.d/ndbd
chmod +x /etc/rc.d/init.d/ndbd
chkconfig --add ndbd
Note that this is a really quick script. You really ought to write one that at least checks whether ndbd is already started on the machine.
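As a sketch of what such a check might look like (the function name start_ndbd_if_needed is my own invention, and it assumes pgrep is available):

```shell
#!/bin/sh
# Start a daemon only if no process with that exact name is already running.
# Pass the daemon name as an argument (defaults to ndbd).
start_ndbd_if_needed() {
    daemon=${1:-ndbd}
    if pgrep -x "$daemon" > /dev/null 2>&1; then
        echo "$daemon is already running"
    else
        echo "starting $daemon"
        "$daemon"
    fi
}
```

Dropping a call to this function into /etc/rc.d/init.d/ndbd instead of the bare "ndbd" line would stop a second copy being launched on a machine where one is already up.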
Use of hostnames
You will note that I have used IP addresses exclusively throughout this setup. This is because using hostnames simply increases the number of things that can go wrong. Mikael Ronström of MySQL AB kindly explains: "Hostnames certainly work with MySQL Cluster. But using hostnames introduces quite a few error sources since a proper DNS lookup system must be set up, sometimes /etc/hosts must be edited, and there might be security blocks ensuring that communication between certain machines is not possible other than on certain ports." I strongly suggest that while testing you use IP addresses if you can, then once it is all working change to hostnames.
RAM
Use the following formula to work out the amount of RAM that you need on each storage node:
(Size of database * NumberofReplicas * 1.1) / Number of storage nodes
NumberOfReplicas is set to two by default. You can change it in config.ini if you want. So, for example, to run a 4GB database over two servers with NumberOfReplicas set to two you need 4.4GB of RAM on each storage node. For the SQL nodes and management nodes you don't need much RAM at all. To run a 4GB database over 4 servers with NumberOfReplicas set to two you would need 2.2GB per node.
Note: A lot of people have emailed me querying the
maths above! Remember that the cluster is fault tolerant, and each
piece of data is stored on at least 2 nodes. (2 by default, as set by
NumberOfReplicas). So you need TWICE the space you would need just for
one copy, multiplied by 1.1 for overhead.
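As a quick sanity check of the formula, you can do the arithmetic with awk (the figures below are the 4GB / two-node example from above):

```shell
# RAM per storage node = (database size * NumberOfReplicas * 1.1) / storage nodes
awk 'BEGIN { db=4; replicas=2; nodes=2; printf "%.1f GB per node\n", db * replicas * 1.1 / nodes }'
# prints: 4.4 GB per node
```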
Adding storage nodes
If you decide to add storage nodes, bear in mind that three is not an optimal number. If you are going to move from two (above), then move to four.
To add storage nodes, you need to add another [NDBD] section to config.ini as per the template above, edit /etc/my.cnf on the new storage node as per the example above, and then create the directory /var/lib/mysql-cluster. You then need to SHUTDOWN the cluster, start the management daemon (ndb_mgmd), start all the ndbd nodes including the new one, and then restart all the MySQL servers.
Adding SQL nodes
To add an SQL node, edit /etc/my.cnf on the new machine exactly as in stage 3:
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3  # the IP of the MANAGEMENT (THIRD) SERVER
[mysql_cluster]
ndb-connectstring=192.168.0.3  # the IP of the MANAGEMENT (THIRD) SERVER
Then make sure that there is another [MYSQLD] line at the end of config.ini on the management server. Restart the cluster (see below for an important note) and restart mysql on the new API node. It should then be connected.
Important note on changing config.ini
If you ever change config.ini you must stop the whole cluster and restart it so that the config file is re-read. Stop the cluster with a SHUTDOWN command in the ndb_mgm client on the management server and then restart all the storage nodes.
Some useful configuration options that you will need if you have large tables:
DataMemory: defines the space available to store the actual records in the database. The entire DataMemory will be allocated in memory, so it is important that the machine contains enough memory to handle the DataMemory size. Note that DataMemory is also used to store ordered indexes, which use about 10 bytes per record. Default: 80MB
IndexMemory The IndexMemory is the parameter that controls
the amount of storage used for hash indexes in MySQL Cluster. Hash
indexes are always used for primary key indexes, unique indexes, and
unique constraints. Default: 18MB
MaxNoOfAttributes This parameter defines the number of attributes that can be defined in the cluster. Default: 1000
MaxNoOfTables Obvious (bear in mind that each BLOB field creates another table for various reasons so take this into account). Default: 128
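For example, the [NDBD DEFAULT] section of config.ini might grow to something like this for a larger database (the figures here are purely illustrative; size them to your own data):

```ini
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=512M       # space for records and ordered indexes
IndexMemory=64M       # space for hash (primary key/unique) indexes
MaxNoOfAttributes=5000
MaxNoOfTables=512
```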
View this page for further information about the things you can put in the [NDBD] section of config.ini
A note about security
MySQL Cluster is not secure. By default anyone can connect to your management server and shut the whole thing down. I suggest the following precautions:
- Install APF and block all ports except those you use (do NOT include any MySQL Cluster ports). Add the IPs of your cluster machines to the /etc/apf/allow_hosts file.
- Run MySQL Cluster over a second network card on a second, isolated network.
Thanks
I must thank several others who have contributed to this: Mikael
Ronström from MySQL AB for helping me to get this to work and spotting
my silly mistake right at the end, Lewis Bergman for proof-reading this
page and pointing out some improvements, as well as suffering the
frustration with me and Martin Pala for explaining the final reason to
keep the managment server up as well as a few other minor changes.
Thanks also to Terry from Advanced Network Hosts, who paid me to set a cluster up and at the same time produce a HOWTO.