
However, it is possible to use the NDB Cluster technologies without the MySQL
server; this requires lower-level programming with the NDB API.
The NDB API is object-oriented and implements indexes, scans, transactions, and event
handling. This allows you to write applications that retrieve, store, and manipulate data
in the cluster. The NDB API also provides object-oriented error-handling facilities to
allow orderly shutdown or recovery during failures. If you are a developer and want to
learn more about the NDB API, see the MySQL NDB API online documentation.
How Does MySQL Cluster Differ from MySQL?
You may be wondering, “What is the difference between a cluster and replication?”
There are several definitions of clustering, but it can generally be viewed as something
that has membership, messaging, redundancy, and automatic failover capabilities.
Replication, in contrast, is simply a way to send messages (data) from one server to
another. We discuss replication within a cluster (also called local replication) and
MySQL replication in more detail later in this chapter.
Typical Configuration
You can view the MySQL Cluster as having three layers:
• Applications that communicate with the MySQL server
• The MySQL server that processes the SQL commands and communicates to the
NDB storage engine
• The NDB Cluster components (sometimes called data nodes) that process the
queries and return the results to the MySQL server
You can scale up each layer independently with more server processes
to increase performance.
Figure 15-1 shows a conceptual drawing of a typical cluster installation.
The applications connect to the MySQL server, which accesses the NDB Cluster com-
ponents via the storage engine layer (specifically, the NDB storage engine). We will
discuss the NDB Cluster components in more detail momentarily.
There are many possible configurations. You can use multiple MySQL servers to con-
nect to a single NDB Cluster and even connect multiple NDB Clusters via MySQL
replication. We will discuss more of these configurations in later sections.


Features of MySQL Cluster
To satisfy the
goals of having the highest achievable performance, high availability, and
redundancy, data is replicated inside the cluster among the peer data nodes. The data
is replicated using a synchronous mechanism in which each data node connects to every
other data node and data is stored on multiple data nodes.
It is also possible to replicate data between clusters, but in this case you
use MySQL replication,
which is asynchronous rather than synchro-
nous. As we’ve discussed in previous chapters, asynchronous replication
means you must expect a delay in updating the slaves, slaves do not
report back the progress in committing changes, and you cannot expect
a consistent view across all servers in the replicated architecture like you
can expect within a single MySQL cluster.
Figure 15-1. MySQL Cluster

MySQL Cluster has several specialized features for creating a highly available system.
The most significant ones are:
Node recovery
Data node failures can be detected via either communication loss or heartbeat
failure, and you can configure the nodes to restart automatically using copies of
the data from the remaining nodes. Failure and recovery can comprise single or
multiple storage nodes. This is also called local recovery.
Logging
During normal data updates, copies of the data change events are written to a log
stored on each data node. You can use the logs to restore the data to a point in time.
Checkpointing
The cluster supports two forms of checkpoints, local and global. Local checkpoints
remove the tail of the log. Global checkpoints are created when the logs of all data
nodes are flushed to disk, creating a transaction-consistent snapshot of all node
data to disk. In this way, checkpointing permits a complete system restore of all
nodes from a known good synchronization point.
System recovery
In the event the whole system is shut down unexpectedly, you can restore it using
checkpoints and change logs. Typically, the data is copied from disk into memory
from known good synchronization points.
Hot backup and restore
You can create simultaneous backups of each data node without disturbing exe-
cuting transactions. The backup includes the metadata about the objects in the
database, the data itself, and the current transaction log.
No single point of failure
The architecture is designed so that any node can fail without bringing down the
database system.
Failover
To ensure node recovery is possible, all transactions are committed using read-
committed isolation and two-phase commit. Transactions are then doubly safe; that
is, they are stored in two separate locations before the user gets acceptance of the
transaction.
Partitioning
Data is automatically partitioned across the data nodes. MySQL Cluster version 5.1
and later supports user-defined partitioning.
Online operations
You can perform many of the maintenance operations online without the normal
interruptions. These are operations that normally require stopping a server or
placing locks on data. For example, it is possible to add new data nodes online,
alter table structures, and even reorganize the data in the cluster.
For more information about MySQL Cluster, see the online reference manual.
Local and Global Redundancy
You can create local redundancy (inside a particular cluster) using a two-phase commit
protocol. In principle, each node goes through a round in which it agrees to make a
change, then undergoes a round in which it commits the transaction. During the agree-
ment phase, each node ensures that there are enough resources to commit the change
in the second round. In NDB Cluster, the MySQL server commit protocol changes to
allow updates to multiple nodes. NDB Cluster also has an optimized version of two-
phase commit that reduces the number of messages sent using synchronous replication.
The two-phase protocol ensures the data is redundantly stored on multiple data nodes,
a state known as local redundancy.
Global redundancy uses MySQL replication between clusters. This establishes the two
clusters as nodes in a replication topology. As discussed previously, MySQL replication is asyn-
chronous because it does not include an acknowledgment or receipt for arrival or ex-
ecution of the events replicated. Figure 15-2 illustrates the differences.
Figure 15-2. Local and global redundancy
Log Handling
MySQL Cluster implements two types of checkpoints: local checkpoints to purge part
of the redo log and a global checkpoint that is mainly for synchronizing between the
different data nodes. The global checkpoint becomes important for replication because
it forms the boundary between sets of transactions known as epochs. Each epoch is
replicated between clusters as a single unit. In fact, MySQL replication treats the set of
transactions between two consecutive global checkpoints as a single transaction.
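As a hedged illustration (assuming MySQL replication between clusters has been set up,
a topic covered later in this chapter), you can observe these epoch boundaries on the
replica side by querying the mysql.ndb_apply_status table, which records the most recent
epoch applied from each originating server:

SELECT server_id, epoch FROM mysql.ndb_apply_status;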
Redundancy and Distributed Data
Data redundancy uses replicas. Each replica has a copy of the data. This allows a cluster
to be fault tolerant. If any data node fails, you can still access the data. Naturally, the
more replicas you allow in a cluster, the more fault tolerant the cluster will be.
Split-Brain Syndrome

If one or more data nodes fail, it is possible that the remaining data nodes will split
into groups that are unable to communicate with each other. When this happens, the
two sets of data nodes are in a split-brain scenario. This type of situation is undesirable,
because each set of data nodes could theoretically continue to operate as a separate cluster.
To overcome this, you need a network partitioning algorithm to decide between the
competing sets of data nodes. The decision is made in each set independently. The set
with the minority of nodes will be restarted and each node of that set will need to join
the majority set individually.
If the two sets of nodes are exactly the same size, a theoretical problem still exists. If
you split four nodes into two sets with two nodes in each, how do you know which set
is a minority? For this purpose, you can define an arbitrator. In the case that the sets
are exactly the same size, the set that first succeeds in contacting the arbitrator wins.
You can designate the arbitrator as either a MySQL server (SQL node) or a management
node. For best availability, you should locate the arbitrator on a system that does not
host a data node.
The network partitioning algorithm with arbitration is fully automatic in MySQL Clus-
ter, and the minority is defined with respect to node groups, which makes the system
more available than it would be if the decision were based simply on counting nodes.
You can specify how many copies of the data (NoOfReplicas) exist in the cluster. You
need to set up as many data nodes as you want replicas. You can also distribute the
data across the data nodes using partitioning. In this case, each data node has only a
portion of the data, making queries faster. But since you have multiple copies of the
data, you can still query the data in the event that a node fails, and the recovery of the
missing node is assured (because the data exists in the other replicas). To achieve this,
you need multiple data nodes for each replica. For example, if you want two replicas
and partitioning, you need to have at least four data nodes (two data nodes for each
replica).

Architecture of MySQL Cluster
MySQL Cluster is composed of one or more MySQL servers communicating via the
NDB storage engine to an NDB cluster. An NDB cluster itself is composed of several
components: data or storage nodes that store and retrieve the data and one or more
management nodes that coordinate startup, shutdown, and recovery of data nodes.
Most of the NDB components are implemented as daemon processes, while MySQL
Cluster also offers client utilities to manipulate the daemons’ features. A list of the
daemons and utilities follows. Figure 15-3 depicts how each of these components
communicates.
mysqld
The MySQL server
ndbd
A data node
ndbmtd
A multithreaded data node
ndb_mgmd
The cluster’s management server
ndb_mgm
The cluster’s management client
Each MySQL server with the executable name mysqld typically supports one or more
applications that issue SQL queries and receive results from the data nodes. When
discussing MySQL Cluster, the MySQL servers are sometimes called SQL nodes.
The data nodes are NDB daemon processes that store and retrieve the data either in
memory or on disk depending on their configuration. Data nodes are installed on each
server participating in the cluster. There is also a multithreaded data node daemon
named ndbmtd that works on platforms that support multiple CPU cores. You can
see improved data node performance if you use the multithreaded data node on dedi-
cated servers with modern multiple-core CPUs.
The management daemon, ndb_mgmd, runs on a server and is responsible for reading
a configuration file and distributing the information to all of the nodes in the cluster.

ndb_mgm, the NDB management client utility, can check the cluster’s status, start
backups, and perform other administrative functions. This client runs on a host con-
venient to the administrator and communicates with the daemon.
There are also a number of utilities that make maintenance easier. A few of the more
popular ones follow. Consult the NDB Cluster documentation for a complete list.
ndb_config
Extracts configuration information from existing nodes.
ndb_delete_all
Deletes all rows from an NDB table.
ndb_desc
Describes NDB tables (like SHOW CREATE TABLE).
ndb_drop_index
Drops an index from an NDB table.
ndb_drop_table
Drops an NDB table.
ndb_error_reporter
Diagnoses errors and problems in a cluster.
ndb_redo_log_reader
Checks and prints out a cluster redo log.
ndb_restore
Performs a restore of a cluster. Backups are made using the NDB management
client.
How Data Is Stored
MySQL Cluster keeps all indexed columns in main memory. You can store the re-
maining nonindexed columns either in memory or on disk with an in-memory page
cache. Storing nonindexed columns on disk allows you to store more data than the size
of available memory.
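As a rough sketch of how disk-based storage is declared in SQL (the logfile group,
tablespace, file, and table names here are hypothetical, and the sizes are only
placeholders), you first create an undo logfile group and a tablespace, then reference
the tablespace with the STORAGE DISK option:

CREATE LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_1.log'
    INITIAL_SIZE 128M
    ENGINE NDBCLUSTER;

CREATE TABLESPACE ts_1
    ADD DATAFILE 'data_1.dat'
    USE LOGFILE GROUP lg_1
    INITIAL_SIZE 256M
    ENGINE NDBCLUSTER;

-- Nonindexed columns of this table are stored on disk; indexed columns stay in memory.
CREATE TABLE cluster_db.city_disk (
    id INT NOT NULL PRIMARY KEY,
    name CHAR(35),
    population INT
) TABLESPACE ts_1 STORAGE DISK ENGINE=NDBCLUSTER;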
Figure 15-3. The MySQL Cluster components

When data is changed (via INSERT, UPDATE, DELETE, etc.), MySQL Cluster writes a record
of the change to a redo log, checkpointing data to disk regularly. As described
previously, the log and the checkpoints permit recovery from disk after a failure. How-
ever, because the redo logs are written asynchronously with the commit, it is possible
that a limited number of transactions can be lost during a failure. To mitigate this
possibility, MySQL Cluster implements a write delay (with a default of two seconds,
but this is configurable). This allows the checkpoint write to complete so that if a failure
occurs, the last checkpoint is not lost as a result of the failure. Normal failures of indi-
vidual data nodes do not result in any data loss due to the synchronous data replication
within the cluster.
When a MySQL Cluster table is maintained in memory, the cluster accesses disk storage
only to write records of the changes to the redo log and to execute the requisite check-
points. Since writing the logs and checkpoints is sequential and few random access
patterns are involved, MySQL Cluster can achieve higher write throughput rates with
limited disk hardware than the traditional disk caching used in relational database
systems.
You can calculate the size of memory you need for a data node using the following
formula. The size of the database is the sum of the size of the rows times the number
of rows for each table. Keep in mind that if you use disk storage for nonindexed col-
umns, you should count only the indexed columns in calculating the necessary
memory.
(SizeOfDatabase × NumberOfReplicas × 1.1) / NumberOfDataNodes
This is a simplified formula for rough calculation. When planning the memory of your
cluster, you should consult the online MySQL Cluster Reference Manual for additional
details to consider.
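For example, applying this simplified formula to a hypothetical 2 GB database with two
replicas distributed over four data nodes gives (2 GB × 2 × 1.1) / 4, or roughly 1.1 GB
of data memory per data node.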
You can also use the Perl script ndb_size.pl found in most distributions. This script
connects to a running MySQL server, traverses all the existing tables in a set of data-
bases, and calculates the memory they would require in a MySQL cluster. This is con-
venient, because it permits you to create and populate the tables on a normal MySQL
server first, then check your memory configuration before you set up, configure, and
load data into your cluster. It is also useful to run periodically to keep ahead of schema
changes that can result in memory issues and to give you an idea of your memory usage.
Example 15-1 depicts a sample report for a simple database with a single table. To find
the total size of the database, multiply the size of the data row from the summary by
the number of rows. In Example 15-1, we have (for MySQL version 5.1) 84 bytes per
row for data and index. If we had 64,000 rows, we would need to have 5,376,000 bytes
of memory to store the table.
If the script generates an error about a missing Class/MethodMaker.pm module, you
need to install this class on your system. For example, on Ubuntu you can install it
with the following command:
sudo apt-get install libclass-methodmaker-perl
Example 15-1. Checking the size of a database with ndb_size.pl
cbell@cbell-mini:~/mysql-cluster-gpl-7.0.13-linux-i686-glibc23/bin$ ./ndb_size.pl \
--database=cluster_test --user=root
ndb_size.pl report for database: 'cluster_test' (1 tables)
----------------------------------------------------------
Connected to: DBI:mysql:host=localhost
Including information for versions: 4.1, 5.0, 5.1
cluster_test.City
-----------------
DataMemory for Columns (* means varsized DataMemory):
Column Name Type Varsized Key 4.1 5.0 5.1
district char(20) 20 20 20
population int(11) 4 4 4
ccode char(3) 4 4 4

name char(35) 36 36 36
id int(11) PRI 4 4 4
-- -- --
Fixed Size Columns DM/Row 68 68 68
Varsize Columns DM/Row 0 0 0
DataMemory for Indexes:
Index Name Type 4.1 5.0 5.1
PRIMARY BTREE N/A N/A N/A
-- -- --
Total Index DM/Row 0 0 0
IndexMemory for Indexes:
Index Name 4.1 5.0 5.1
PRIMARY 29 16 16
-- -- --
Indexes IM/Row 29 16 16
Summary (for THIS table):
4.1 5.0 5.1
Fixed Overhead DM/Row 12 12 16
NULL Bytes/Row 0 0 0
DataMemory/Row 80 80 84
(Includes overhead, bitmap and indexes)
Varsize Overhead DM/Row 0 0 8
Varsize NULL Bytes/Row 0 0 0
Avg Varside DM/Row 0 0 0
No. Rows 3 3 3
Rows/32kb DM Page 408 408 388
Fixedsize DataMemory (KB) 32 32 32
Rows/32kb Varsize DM Page 0 0 0
Varsize DataMemory (KB) 0 0 0
Rows/8kb IM Page 282 512 512

IndexMemory (KB) 8 8 8
Parameter Minimum Requirements
------------------------------
* indicates greater than default
Parameter Default 4.1 5.0 5.1
DataMemory (KB) 81920 32 32 32
NoOfOrderedIndexes 128 1 1 1
NoOfTables 128 1 1 1
IndexMemory (KB) 18432 8 8 8
NoOfUniqueHashIndexes 64 0 0 0
NoOfAttributes 1000 5 5 5
NoOfTriggers 768 5 5 5
Notice that while Example 15-1 uses a very simple table, the output shows not only
the row size, but also a host of statistics for the tables in the database. The report also
shows the indexing statistics, which are the key mechanism the cluster uses for high
performance.
The script displays the different memory requirements across MySQL versions. This
allows you to see any differences if you are working with older versions of MySQL
Cluster.
Partitioning
One of the most important aspects of MySQL Cluster is data partitioning. MySQL
Cluster partitions data horizontally. That is, the rows are automatically divided among
the data nodes using a function to distribute the rows. This is based on a hashing
algorithm that uses the primary key for the table. In early versions of MySQL, the
software used an internal mechanism for partitioning, but MySQL versions 5.1 and later
allow you to provide your own function for partitioning data. If you use your own
function for partitioning, you should create a function that ensures the data is distrib-
uted evenly among the data nodes.
If a table does not have a primary key, MySQL Cluster adds a surrogate
primary key.
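As a minimal sketch (the table definition is hypothetical), user-defined partitioning
of an NDB table is typically declared with PARTITION BY KEY, which hashes the named
column to spread rows evenly across the data nodes; KEY and LINEAR KEY are the
partitioning types generally accepted for NDB tables:

CREATE TABLE cluster_db.city (
    id INT NOT NULL,
    name VARCHAR(35),
    district VARCHAR(20),
    population INT,
    PRIMARY KEY (id)
) ENGINE=NDBCLUSTER
  PARTITION BY KEY (id);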
Partitioning allows the
MySQL Cluster to achieve higher performance for queries be-
cause it supports distribution of queries among the data nodes. Thus, a query will return
results much faster when gathering data across several nodes than from a single node.
For example, you can execute the following query on each data node, getting the sum
of the column on each one and summing those results:
SELECT SUM(population) FROM cluster_db.city;
Data distributed across the data nodes is protected from failure if you have more than
one replica (copy) of the data. If you want to use partitioning to distribute your data
across multiple data nodes to achieve parallel queries, you should also ensure you have
at least two replicas of each row so that your cluster is fault tolerant.
Transaction Management
Another aspect of MySQL Cluster’s behavior that differs from MySQL server concerns
transactional data operations. As mentioned previously, MySQL Cluster coordinates
transactional changes across the data nodes. This uses two subprocesses called the
transaction coordinator and the local query handler.
The transaction coordinator handles distributed transactions and other data operations
on a global level. The local query handler manages data and transactions local to the
cluster’s data nodes and acts as a coordinator of two-phase commits at the data node.
Each data node can be a transaction coordinator (you can tune this behavior). When
an application executes a transaction, the cluster connects to a transaction coordinator
on one of the data nodes. The default behavior is to select the closest data node as
defined by the networking layer of the cluster. If there are several connections available
within the same distance, a round-robin algorithm selects the transaction coordinator.

The selected transaction coordinator then sends the query to each data node and the
local query handler executes the query, coordinating the two-phase commit with the
transaction coordinator. Once all data nodes verify the transaction, the transaction
coordinator validates (commits) the transaction.
MySQL Cluster supports the read-committed transaction isolation level. This means
that when there are changes during the execution of the transaction, only committed
changes can be read while the transaction is underway. In this way, MySQL Cluster
ensures data consistency while transactions are running.
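As a brief sketch (reusing the hypothetical cluster_db.city table from the partitioning
discussion), you can set the matching isolation level explicitly for a session before
working with NDB tables:

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
-- Only changes already committed by other sessions are visible here.
SELECT SUM(population) FROM cluster_db.city;
COMMIT;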
For more information about how transactions work in MySQL Cluster and a list of
important limitations on transactions, see the MySQL Cluster chapter in the online
MySQL Reference Manual.
Online Operations
In MySQL versions 5.1 and later, you can perform certain operations while a cluster is
online, meaning that you do not have to either take the server down or lock portions
of the system or database. The following list briefly discusses a few of the online oper-
ations available in MySQL Cluster and lists the versions that include each feature:
Backup (versions 5.0 and later)
You can use the NDB management console to perform a snapshot backup (a non-
blocking operation) to create a backup of your data in the cluster. This operation
includes a copy of the metadata (names and definitions of all tables), the table data,
and the transaction log (a historical record of changes). It differs from a mysqldump
backup in that it is nonblocking and does not use a table scan to read the
records. You can restore the data using the special ndb_restore utility.
Adding and dropping indexes (versions 5.1 and later)
You can use the ONLINE keyword to execute the CREATE INDEX or DROP INDEX state-
ment online (see the example following this list). When online operation is requested,
the operation is noncopying (it does not make a copy of the data in order to index it),
so indexes do not have to be re-created afterward. One advantage of this is that
transactions can continue during alter table operations and tables being altered are
not locked against access by other SQL nodes. However, the table is locked against
other queries on the SQL node performing the alter operation.
In MySQL versions 5.1.7 and later, add and drop index operations
are performed online when the indexes are on variable-width col-
umns only.
Alter table (versions 6.2 and later)
You can use the ONLINE keyword to execute an ALTER TABLE statement online. It is
also noncopying and has the same advantages as adding indexes online. Addition-
ally, in MySQL Cluster versions 7.0 and later, you can reorganize the data across partitions
online using the REORGANIZE PARTITION clause, as long as you don't use the INTO
(partition_definitions) option (see the example following this list).
Changing default column values or data types online is currently
not supported.
Add data nodes and node groups (versions 7.0 and later)
You can manage the expansion of your data nodes online, either for scale-out or
for node replacement after a failure. The process is described in great detail in the
reference manual. Briefly, it involves changing the configuration file, performing a
rolling restart of the NDB management daemon, performing a rolling restart of the
existing data nodes, starting the new data nodes, and then reorganizing the
partitions.
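The following statements are a hedged sketch of the online index and alter table
operations just described; the table and index names are the hypothetical ones used
earlier in this chapter, and the ONLINE keyword applies to the MySQL Cluster versions
noted above:

-- Add and drop an index online (the name column is variable-width, per the
-- restriction noted earlier).
CREATE ONLINE INDEX idx_name ON cluster_db.city (name);
DROP ONLINE INDEX idx_name ON cluster_db.city;

-- After adding new data nodes and node groups, redistribute existing rows
-- across all partitions without blocking other SQL nodes.
ALTER ONLINE TABLE cluster_db.city REORGANIZE PARTITION;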
For more information about MySQL Cluster, its architecture, and its version 7.0 fea-
tures, see the MySQL Cluster 7.0 architecture white paper (mysql_wp_cluster7_architecture.php)
available on the MySQL website.
Example Configuration
In this section, we present a sample configuration of a MySQL Cluster running two
data nodes on two systems, with the MySQL server and NDB management node on a
third system. We present examples of simplified data node setup. Our example system
is shown in Figure 15-4.

Figure 15-4. Sample cluster configuration
You can see
one node that contains both the NDB management daemon and the SQL
node (the MySQL server). There are also two data nodes, each on its own system. You
need a minimum of three computers to form a basic MySQL Cluster configuration with
either increased availability or performance.
This is a minimal configuration for MySQL Cluster and, if the number of replicas is set
to two, the minimal configuration for fault tolerance. If the number of replicas is set to
one, the configuration will support partitioning for better performance but will not be
fault tolerant.
It is generally permissible to run the NDB management daemon on the same node as
a MySQL server, but you may want to move this daemon to another system if you are
likely to have a high number of data nodes or want to ensure the greatest level of fault
tolerance.
Getting Started
You can obtain MySQL Cluster from the MySQL downloads page. It is open source,
like the MySQL server. You can download either a binary distribution or an installation
file for some of the top platforms. You can also download the source code and build
the cluster on your own platform. Be sure to check the platform notes for specific issues
for your host operating system.
You should follow the normal installation procedures outlined in the online MySQL
Reference Manual. Aside from one special directory, the NDB tools are installed in the
same location as the MySQL server binaries.
Before we dive into our example, let us first review some general concepts concerning
configuring a MySQL cluster. The cluster configuration is maintained by the NDB
management daemon and is read (initially) from a configuration file. There are many
parameters that you can use to tune the various parts of the cluster, but we will con-
centrate on a minimal configuration for now.

There are several sections in the configuration file. At a minimum, you need to include
each of the following sections:
mysqld
The familiar section of the configuration file that applies to the MySQL server, the
SQL node.
NDBD DEFAULT
A default section for global settings. Use this section to specify all of the settings
you want applied to every data node. Note that the name of the section contains
a space, not an underscore.
NDB_MGMD
A section for the NDB management daemon.
NDBD
You must add one section with this name for each data node.
Example 15-2 shows a minimal configuration file that matches the configuration in
Figure 15-4.
Example 15-2. Minimal configuration file
[NDBD DEFAULT]
NoOfReplicas= 2
DataDir= /var/lib/mysql-cluster
[NDB_MGMD]
Hostname=192.168.0.183
DataDir= /var/lib/mysql-cluster
[NDBD]
Hostname=192.168.0.12
[NDBD]
Hostname=192.168.0.188
[MYSQLD]
Hostname=192.168.0.183

This example includes the minimal variables for a simple two-data-node cluster with
replication. Thus, the NoOfReplicas option is set to 2. Notice we have set the DataDir
option to /var/lib/mysql-cluster. You can set it to whatever you want, but most
installations of MySQL Cluster use this directory.
Finally, notice we have specified the hostname of each node. This is important, because
the NDB management daemon needs to know the location of all of the nodes in the
cluster. If you have downloaded and installed MySQL Cluster and want to follow along,
make the necessary changes to the hostnames so they match our example.
You should place your cluster configuration file in the /var/lib/mysql-cluster directory
and name it config.ini (the standard name and location for this file).
It is not necessary to install the complete MySQL Cluster binary package
on the data nodes. As you will see later, you need only the ndbd dae-
mon on the data nodes.
Starting a MySQL Cluster
Starting MySQL Cluster requires a specific order of commands. We will step through
the procedures for this example, but it is good to briefly examine the general process:
1. Start the management node(s).
2. Start the data nodes.
3. Start the MySQL servers (SQL nodes).
For our example, we first start the NDB management node on 192.168.0.183. Then
we start each of the data nodes (192.168.0.12 and 192.168.0.188, in either order). Once
the data nodes are running, we can start the MySQL server on 192.168.0.183 and, after
a brief startup delay, the cluster is ready to use.
Starting the management node
The first node to start is the NDB management daemon named ndb_mgmd. This is
located in the libexec folder of the MySQL installation. For example, on Ubuntu it is
located in /usr/local/mysql/libexec.
Start the NDB management daemon as the superuser, specifying the --initial and
-f options. The --initial option tells the cluster that this is our first
time starting and we want to erase any configurations stored from previous launches.

The -f option tells the daemon where to find the configuration file. Example 15-3 shows
how to start the NDB management daemon for our example.
Example 15-3. Starting the NDB management daemon
cbell@mysql-xps-400:/usr/local/mysql/bin$ sudo ../libexec/ndb_mgmd --initial \
-f /var/lib/mysql-cluster/config.ini
2010-03-25 09:10:28 [MgmtSrvr] INFO
-- NDB Cluster Management Server. mysql-5.1.44 NDB-7.0.14
2010-03-25 09:10:29 [MgmtSrvr] INFO
-- Reading cluster configuration from '/var/lib/mysql-cluster/config.ini'
It is always a good idea to provide the -f option when you start, because some instal-
lations have different default locations for the configuration file search pattern. You
can discover this pattern by issuing the command ndb_mgmd --help and searching for
the phrase “Default options are read from.” It is not necessary to specify the -f option
on subsequent starts of the daemon.
Starting the management console
While not absolutely necessary at this point, it is a good idea to now launch the NDB
management console and check that the NDB management daemon has correctly read
the configuration. The name of the NDB management console is ndb_mgm and it is
located in the bin directory of the MySQL installation. We can view the configuration
by issuing the SHOW command, as shown in Example 15-4.
Example 15-4. Initial start of the NDB management console
cbell@mysql-xps-400:/usr/local/mysql/bin$ ./ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: 192.168.0.183:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from 192.168.0.188)
id=3 (not connected, accepting connect from 192.168.0.12)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.183 (mysql-5.1.44 ndb-7.0.14)
[mysqld(API)] 1 node(s)
id=4 (not connected, accepting connect from 192.168.0.183)
ndb_mgm>
This command displays the data nodes and their IP addresses as well as the NDB man-
agement daemon and the SQL node. This is a good time to check that all of our nodes
are configured with the right IP addresses and that all of the appropriate data nodes are
loaded. If you have changed your cluster configuration but see the old values here, it
is likely the NDB management daemon has not read the new configuration file.
This output tells us that the NDB management daemon is loaded and ready. If it were
not, the SHOW command would fail with a communication error. If you see that error,
be sure to check that you are running the NDB management client on the same server
as the NDB management daemon. If you are not, use the --ndb-connectstring option
and provide the IP address or hostname of the machine hosting the NDB management
daemon.
Finally, notice the node IDs of your nodes. You will need this information to issue
commands to a specific node in the cluster from the NDB management console. Issue
the HELP command at any time to see the other commands available. You will also need
to know the node ID for your SQL nodes so that they start up correctly.
You can specify the node IDs for each node in your cluster using the
NodeId parameter in each node's section of the config.ini file.
We can also use the STATUS command to see the status of our nodes. Issue ALL STATUS
to see the status of all nodes or node-id STATUS to see the status of a specific node. This

command is handy for watching the cluster start up, because the output reports which
startup phase the data node is in. Refer to the MySQL Cluster section of the online
MySQL Reference Manual for more details about the phases of data node startup.
Starting data nodes
Now that we have started our NDB management daemon, it is time to start the data
nodes. However, before we do that, let’s examine the minimal setup needed for an NDB
data node.
To set up an NDB data node, all you need is the NDB data node daemon (ndbd)
compiled for the targeted host operating system. First, create the folder /var/lib/mysql-
cluster, then copy in the ndbd executable, and you're done! Clearly, this makes it very
easy to script the creation of data nodes (and many have).
You can start the data nodes (ndbd) using the --initial-start option, which signals
that this is the first time the cluster has been started. You also must provide the
--ndb-connectstring option, giving the IP address of the NDB management daemon.
Example 15-5 shows starting a data node for the first time. Do this on each data node.
Example 15-5. Starting the data node
cbell@mysql-mini:~/mysql-cluster-gpl-7.0.13-linux-x86_64-glibc23/bin$
sudo ./ndbd --initial-start --ndb-connectstring=192.168.0.183
2010-03-25 09:04:18 [ndbd] INFO
-- Configuration fetched from '192.168.0.183:1186', generation: 1
If you are starting a new data node, have reset a data node, or are recovering from a
failure, you can specify the --initial option to force the data node to erase any existing
configuration and cached data and request a new copy from the NDB management
daemon.
Be careful when using the --initial options. They really do delete your
data!
Return to the management console and check the status (Example 15-6).
Example 15-6. Status of data nodes

ndb_mgm> SHOW
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.188 (mysql-5.1.41 ndb-7.0.13, Nodegroup: 0, Master)
id=3 @192.168.0.12 (mysql-5.1.41 ndb-7.0.13, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.183 (mysql-5.1.44 ndb-7.0.14)
[mysqld(API)] 1 node(s)
id=4 (not connected, accepting connect from 192.168.0.183)
You can see that the data nodes started successfully, because information about their
daemons is shown. You can also see that one of the nodes has been selected as the
master for cluster replication. Since we set the number of replicas to 2 in our configu-
ration file, we have two copies of the data. Don’t confuse this notion of master with a
master in MySQL replication. We discuss the differences in more detail later in the
chapter.
Starting the SQL nodes
Once the data nodes are running, we can connect our SQL node. There are several
options we must specify that enable a MySQL server to connect to an NDB cluster.
Most people specify these in the my.cnf file, but you can also specify them on the startup
command line if you start the server in that manner.
ndbcluster
Tells the server that you want to include the NDB Cluster storage engine.
ndb_connectstring
Tells the server the location of the NDB management daemon.
ndb_nodeid and server_id
Normally set to the node ID. You can find the node ID in the output from the
SHOW command in the NDB management console.
Example 15-7 shows a correct startup sequence for the SQL node in our cluster
example.

Example 15-7. Starting the SQL node
cbell@mysql-xps-400:/usr/local/mysql/bin$ sudo ../libexec/mysqld --ndbcluster \
--console -umysql
100325 9:14:21 [Note] Plugin 'FEDERATED' is disabled.
100325 9:14:21 InnoDB: Started; log sequence number 0 1112278176
100325 9:14:21 [Note] NDB: NodeID is 4, management server '192.168.0.183:1186'
100325 9:14:22 [Note] NDB[0]: NodeID: 4, all storage nodes connected
100325 9:14:22 [Note] Starting Cluster Binlog Thread
100325 9:14:22 [Note] Event Scheduler: Loaded 0 events
100325 9:14:23 [Note] NDB: Creating mysql.ndb_schema
100325 9:14:23 [Note] NDB: Flushing mysql.ndb_schema
100325 9:14:23 [Note] NDB Binlog: CREATE TABLE Event: REPL$mysql/ndb_schema
100325 9:14:23 [Note] NDB Binlog: logging ./mysql/ndb_schema (UPDATED,USE_WRITE)
100325 9:14:23 [Note] NDB: Creating mysql.ndb_apply_status
100325 9:14:23 [Note] NDB: Flushing mysql.ndb_apply_status
100325 9:14:23 [Note] NDB Binlog: CREATE TABLE Event: REPL$mysql/ndb_apply_status
100325 9:14:23 [Note] NDB Binlog: logging ./mysql/ndb_apply_status
(UPDATED,USE_WRITE)
2010-03-25 09:14:23 [NdbApi] INFO -- Flushing incomplete GCI:s < 65/17
2010-03-25 09:14:23 [NdbApi] INFO -- Flushing incomplete GCI:s < 65/17
100325 9:14:23 [Note] NDB Binlog: starting log at epoch 65/17
100325 9:14:23 [Note] NDB Binlog: NDB tables writable
100325 9:14:23 [Note] ../libexec/mysqld: ready for connections.
Version: '5.1.44-ndb-7.0.14-debug' socket: '/var/lib/mysql/mysqld.sock'
port: 3306 Source distribution
The
output includes extra comments about the NDB cluster connection, logs, and
status. If you do not see these or if you see errors, be sure that you started your SQL

node with the proper options. Of particular importance is the message stating the node
ID and the management server. If you have multiple management servers running, be
sure your SQL node is communicating with the correct one.
Once the SQL node starts correctly, return to the management console and check the
status of all of your nodes (Example 15-8).
Example 15-8. Example status of a running cluster
ndb_mgm> SHOW
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.188 (mysql-5.1.41 ndb-7.0.13, Nodegroup: 0, Master)
id=3 @192.168.0.12 (mysql-5.1.41 ndb-7.0.13, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.183 (mysql-5.1.44 ndb-7.0.14)
[mysqld(API)] 1 node(s)
id=4 @192.168.0.183 (mysql-5.1.44 ndb-7.0.14)
As you can see, all of our nodes are now connected and running. If you see any details
other than what is shown here, you have a failure in the startup sequence of your nodes.
Be sure to check the logs for each node to determine what went wrong. The most
common cause is network connectivity (e.g., firewall issues). The NDB management
node listens on port 1186 by default.
The logfiles for the data nodes and the NDB management daemon are located in the
data directory. The SQL node logs are located in the usual location for a MySQL server.
Testing the Cluster
Now that our example cluster is running, let’s perform a simple test (shown in Exam-
ple 15-9) to ensure we can create a database and tables using the NDBCLUSTER storage
engine.
Example 15-9. Testing the cluster

mysql> create database cluster_db;
Query OK, 1 row affected (0.06 sec)
mysql> create table cluster_db.t1 (a int) engine=NDBCLUSTER;
Query OK, 0 rows affected (0.31 sec)
mysql> show create table cluster_db.t1 \G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE `t1` (
`a` int(11) DEFAULT NULL
) ENGINE=ndbcluster DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
mysql> insert into cluster_db.t1 VALUES (1), (100), (1000);
Query OK, 3 rows affected (0.00 sec)
Records: 3 Duplicates: 0 Warnings: 0
mysql> select * from cluster_db.t1 \G
*************************** 1. row ***************************
a: 1
*************************** 2. row ***************************
a: 1000
*************************** 3. row ***************************
a: 100
3 rows in set (0.00 sec)
Now that you have a running cluster, you can experiment by loading data and running
sample queries. We invite you to “fail” one of the data nodes during data updates and
restart it to see that the loss of a single data node does not affect accessibility.
Shutting Down the Cluster
Just as there is a specific order for startup, there is a specific order to shutting down
your cluster: