




Contents
Overview
Pre-Installation Requirements
Identifying Hardware Considerations
Assigning IP Addresses Within a Cluster
Assigning Names Within a Cluster
Determining Domain Considerations
Existing Services and Applications
Lab A: Configuring Advanced Server for Cluster Installation
Review

Module 3: Preparing for Cluster Service Installation

Information in this document is subject to change without notice. The names of companies,
products, people, characters, and/or data mentioned herein are fictitious and are in no way intended
to represent any real individual, company, product, or event, unless otherwise noted. Complying
with all applicable copyright laws is the responsibility of the user. No part of this document may
be reproduced or transmitted in any form or by any means, electronic or mechanical, for any
purpose, without the express written permission of Microsoft Corporation. If, however, your only
means of access is electronic, permission to print one copy is hereby granted.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual
property rights covering subject matter in this document. Except as expressly provided in any
written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

© 2000 Microsoft Corporation. All rights reserved.

Microsoft, Active Directory, BackOffice, JScript, PowerPoint, Visual Basic, Visual Studio, Win32, Windows, and Windows NT are either registered trademarks or trademarks of Microsoft Corporation in the U.S.A. and/or other countries.

Other product and company names mentioned herein may be the trademarks of their respective
owners.

Program Manager: Don Thompson
Product Manager: Greg Bulette
Instructional Designers: April Andrien, Priscilla Johnston, Diana Jahrling
Subject Matter Experts: Jack Creasey, Jeff Johnson
Technical Contributor: James Cochran
Classroom Automation: Lorrin Smith-Bates
Graphic Designer: Andrea Heuston (Artitudes Layout & Design)
Editing Manager: Lynette Skinner
Editor: Elizabeth Reese
Copy Editor: Bill Jones (S&T Consulting)
Production Manager: Miracle Davis
Build Manager: Julie Challenger
Print Production: Irene Barnett (S&T Consulting)
CD Production: Eric Wagoner
Test Manager: Eric R. Myers
Test Lead: Robertson Lee (Volt Technical)
Creative Director: David Mahlmann
Media Consultation: Scott Serna
Illustration: Andrea Heuston (Artitudes Layout & Design)

Localization Manager: Rick Terek
Operations Coordinator: John Williams
Manufacturing Support: Laura King; Kathy Hershey
Lead Product Manager, Release Management: Bo Galford
Lead Technology Manager: Sid Benavente
Lead Product Manager, Content Development: Ken Rosen
Group Manager, Courseware Infrastructure: David Bramble
Group Product Manager, Content Development: Julie Truax
Director, Training & Certification Courseware Development: Dean Murray
General Manager: Robert Stewart



Instructor Notes
This module covers the information that is required to plan for the installation
of Microsoft® Cluster service. Requirements specific to server clusters include
communication networks, shared disks, and data storage, in addition to
hardware considerations for the operating system. This module describes the
naming and addressing conventions that the cluster components require.
Domain considerations are covered, in addition to issues relating to services and
applications that were installed and running prior to installing Cluster service.
After completing this module, students will be able to:
 Determine the pre-installation considerations for a server cluster.
 Identify hardware considerations.
 Assign Internet Protocol (IP) addresses within a cluster.
 Assign names within a cluster.
 Determine domain considerations.
 Determine pre-installation considerations for existing services and applications.

Materials and Preparation
This section provides the materials and preparation tasks that you need to teach
this module.
Required Materials
To teach this module, you need the following materials:
 Microsoft PowerPoint® file 2087A_02.ppt
 Access to the Hardware Compatibility List at www.microsoft.com/hcl

Preparation Tasks
To prepare for this module, you should:
 Read the materials for this module and anticipate questions students may
ask.
 Be familiar with the HCL.
 Be familiar with the Fibre Channel technologies from the leading hardware
vendors.
 Practice the lab.
 Be familiar with the gratuitous Address Resolution Protocol (ARP)
procedure that Cluster service generates after a failover, and how that relates
to routers and switches.
 Study the review questions and prepare alternative answers for discussion.

Presentation: 60 Minutes
Lab: 15 Minutes


Module Strategy
Use the following strategy to present this module:
Students need to be aware of pre-installation requirements and considerations
that will enable them to successfully install and deploy Cluster service.
 Pre-Installation Requirements
There is information regarding pre-installation tasks on the opening page of
this section. Be sure students are aware of the need to back up data and
determine potential points of failure. Additionally, students need to know
about the special requirements of Cluster service for network connections,
cluster disks (shared disks), and data storage.
• Cluster Network Requirements: Students need to be able to identify the public and private network methodology, what travels across those different networks, and how the networks relate to a server cluster.
• Cluster Disk Requirements: Prior to the installation of Cluster service,
the student must be able to verify that the server that the cluster will use
meets the disk requirements. Students need to configure the disks prior
to installation of Cluster service. Students can add disks to the cluster
after they install the cluster, and the disks must meet the same
requirements.
• Data Storage Requirements: Stress to the students that the operating
system is on the local drives and that all of the partitions of the shared
disks must have identical drive letters on each node of the cluster. The
local drives and partitions of the shared disk must be set up before
installation.
 Identifying Hardware Considerations
Cluster service is more dependent on hardware than other software
produced by Microsoft. Because this dependency is such a critical issue, the
hardware used should be on the HCL. Students also need to consider
hardware outside of the server cluster and how it may affect the
performance of the cluster.

• Cluster Service Compatible Systems: Each node of a cluster requires a
minimum set of hardware for Cluster service to install properly. Stress to
the students that this is an Enterprise solution and that the hardware
should not be substandard.
• Routers, Switches, and Hubs: When a resource fails over, the node
controlling the failed over group must send a gratuitous ARP update to
let clients, switches, and routers know about the ARP update. Some
legacy switches and routers might not be able to accept this ARP update
or forward it to clients. Students need to be aware that there are more
things controlling the outcome of a successful deployment besides the
use of two servers and the software they are running.
• Network Cards: Cluster service does not require separate network cards
for public and private communication, but you need to stress to students
that it is highly recommended. Using a single adapter for public and
private communication can result in failovers of resources that are
caused by traffic congestion on this single network. Remind the students
that there are many types of supported network cards, such as Ethernet,
Fibre, and other specialized interconnects.

• Cluster Disks: Disks that are on the same bus between two servers are shared disks. After Cluster service is installed, these disks are referred to as cluster disks. Considerations for these disks focus on throughput. For instance, if disk access is a concern, you can use a faster disk or implement hardware Redundant Array of Independent Disks (RAID); if there is contention for the input/output (I/O) capacity of the shared bus, you can move the disk to another shared bus between the two nodes of the cluster.
• Cluster Data Access: A server cluster can use a disk that is accessed
through a small computer system interface (SCSI) or Fibre bus. Most server cluster implementations use Fibre.
 Assigning IP Addresses Within a Cluster: Cluster service needs a minimum
of five static IP addresses. Remind the students that although the private
network can use automatic private addressing, which is a feature of
Microsoft Windows® 2000, the servers will have faster startup times with a
static address for the private network.
 Assigning Names Within a Cluster: Cluster service uses network basic input/output system (NetBIOS) names for administration of the cluster, for virtual servers, and for nodes of the cluster. Inform the students that, as a best practice, they should use the cluster name for administration only and should treat virtual server names as a separate group of resources.
 Determining Domain Considerations
• In this section you will talk about how server clusters interact with a
Windows 2000 domain, and what accounts students will need to create
or manage prior to installing Cluster service.
• User Accounts: The two user accounts of concern prior to installation of a server cluster are the account used to install the service, which needs administrative rights on each node of the cluster, and the service account, which also requires administrative rights and the right to log on as a service.
• Computer Accounts: The nodes have computer accounts in the domain
to which they belong, whether they are domain controllers or member
servers. It is a best practice to keep the member server computer
accounts in their own organizational unit so that group policies do not
affect them.
 Existing Services and Applications
• If you are upgrading a server to become a node in a cluster, you need to consider whether the services and applications that are running on the server will work after the installation of Cluster service. For example, Windows Internet Name Service (WINS) can continue to run outside the cluster on a server that is installed as a node. Microsoft Exchange, however, will fail on a server that is upgraded to a server cluster environment. Tell the students that extensive testing is needed if applications and services are already running on a server that will be installed with Cluster service.


Instructor Setup for a Lab
Lab Strategy
This lab is designed to prepare the servers for the installation of Cluster service.
In this lab the students will set up identical drive letters on each server in the
cluster. Then they will configure the private and public networks for cluster
communications. At the end of the second exercise, there is a closing exercise
using the Pre-Installation Checklist. This checklist will also serve as a job aid
for students to use later on.
Lab A: Configuring Advanced Server for Cluster
Installation
To conduct this lab:
 Read through the lab carefully, paying close attention to the instructions and
details.
 Students work in teams of two, grouped together by their shared bus.
 Help the students determine whether they are Node A or Node B. In these
exercises, students will perform all of the steps on both servers.
 Familiarize the students with the Reference Table and how to find their
computers in the table.
 Allow some time to discuss the Pre-Installation Checklist.



Overview
 Pre-Installation Considerations
 Identifying Hardware Considerations
 Assigning IP Addresses Within a Cluster
 Assigning Names Within a Cluster
 Determining Domain Considerations
 Existing Services and Applications

This module covers the information that is required to plan for the installation
of Microsoft® Cluster service. Requirements specific to server clusters include
communication networks, shared disks, and data storage, in addition to
hardware considerations for the operating system. This module describes the
naming and addressing conventions that the cluster components require.
Domain considerations are covered, in addition to issues relating to services and
applications that were installed and running prior to installing Cluster service.
After completing this module, you will be able to:
 Determine the pre-installation requirements for a server cluster.
 Identify hardware considerations.
 Assign Internet Protocol (IP) addresses within a cluster.
 Assign names within a cluster.
 Determine domain considerations.
 Determine pre-installation considerations for existing services and
applications.

Topic Objective

To provide an overview of
the module topics and
objectives.
Lead-in
This module outlines what
you need to do before
installing Cluster service.



 Pre-Installation Requirements
(Slide graphic: clients, routers, servers, and power supplies.)

Before installing Cluster service, you will need to plan for failures that could occur in the environment. You also need to consider how the nodes in the
cluster will communicate with each other, and how clients will access the server
cluster. Additionally, you must ensure that your system will have the shared
disks and data storage that are required for a successful Cluster service
installation.
Backup and Restore
Clustering technology provides increased reliability; however, it is not designed
to protect all of the components of your workflow in all circumstances. For
example, Cluster service is not an alternative to backing up data, because it
protects only the availability of data, not the data itself. Therefore, you need to
plan for backup and restoration of data.
Risk Assessment
It is recommended that, prior to creating a cluster, you complete a risk assessment of your network to identify possible single points of failure that can interrupt access to network resources. A single point of failure is any
component in the environment that would block client access to data or
applications if it failed. Single points of failure can be hardware, software, or
external dependencies. Examples of single points of failure include dedicated
wide area network (WAN) lines or the power supply from a utility company.
Uninterruptible Power Supply
Another recommendation is to consider providing uninterruptible power supply (UPS) protection for individual computers and the network, including
hubs, bridges, and routers. A UPS is just one more factor in configuring a total
fault tolerant solution. Many UPS solutions provide power for 5 to 20 minutes,
which is long enough for the Microsoft Windows® 2000 operating system to
properly shut down when power fails.
Topic Objective
To determine the pre-installation considerations for a server cluster that will protect the components and data in case of a failure.
Lead-in
A server cluster does not
address single points of
failure outside of the cluster.
You need to assess the
current environment before
you install a server cluster.

Cluster Network Requirements
 Public Network
 Private Network
 Mixed Network
(Slide graphic: two nodes connected by a private network and by a public or mixed network.)

A cluster network has two primary types of communication. Private communication provides online status and other cluster information to the nodes. Public communication carries information between clients and virtual servers. It is recommended that private network communications be physically separated from public network communications, while you retain the ability to use a mixed network to eliminate a single point of failure.

An alternative network type, a mixed network, uses the public network for both
private and public communications. The advantage of a mixed network is that
private network communications can be failed over to the public network. If
you dedicate a network to client-to-node communications, node-to-node
communications cannot fail over to that network. The mixed network
configuration is the preferred configuration for a public network.
Network adapters, known to the cluster as network interfaces, attach nodes to
networks. You configure what types of communication will travel across the
networks by using the cluster administration tools.
Topic Objective
To describe how Cluster
service uses public, private,
and mixed networks for
cluster and client
communications.
Lead-in
Server clusters use a
network for
communications. You have
three configuration options
for server cluster
communications.
Delivery Tip
Stress to the students that a
dedicated public network will
never transmit private
communications – have at
least one mixed network.


Private Network
The private network, also known as the interconnect in a server cluster, will
potentially carry the following five types of information:
 Server heartbeats. Each node in a cluster exchanges IP packets (approximately every 1.2 seconds) with the other node in the cluster to determine whether both nodes are running correctly. The first node of the cluster to come online is responsible for sending out the heartbeats; the second node begins to send heartbeats to inform the first node that it has come online. If a node fails to detect six successive heartbeats from the other node, it assumes that the unresponsive node is offline and tries to take ownership of the resources that the nonresponding node owns. Note that failure to detect a heartbeat message can be caused by node failure, network interface failure, or network failure. (A sketch of this failure-detection logic follows this list.)
 Replicated state information. Every node in the cluster uses replicated state
information to communicate which cluster groups and resources are running
on all of the other nodes.
 Cluster commands. Cluster commands manage and change the cluster
configuration. An example of a cluster command would be any node-to-
node communications regarding adding or removing a resource or failing
over a group.
 Application commands. Cluster-aware applications send application
commands through the interconnect to communicate with copies of an
application running on multiple servers.
 Application data. Cluster-aware applications can share application data between nodes across the interconnect.
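
The heartbeat timing above implies a detection window of roughly seven seconds (six missed heartbeats at approximately 1.2-second intervals). The following is a minimal, hypothetical Python sketch of that failure-detection logic, intended only as an illustration for discussion; it is not Cluster service's implementation, and the constants simply restate the values given above.

import time

HEARTBEAT_INTERVAL = 1.2   # seconds between heartbeat packets (from the text above)
MISSED_LIMIT = 6           # consecutive missed heartbeats before the peer is presumed offline

def monitor_peer(heartbeat_received):
    # heartbeat_received() is a hypothetical callable that returns True if a
    # heartbeat arrived from the other node during the last interval.
    missed = 0
    while True:
        time.sleep(HEARTBEAT_INTERVAL)
        if heartbeat_received():
            missed = 0                 # the peer is alive; reset the counter
        else:
            missed += 1
            if missed >= MISSED_LIMIT:
                # The peer is presumed offline; the surviving node would now try to
                # take ownership of the resources that the unresponsive node owned.
                return "peer presumed offline"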


If the private network fails over to the public network, the Cluster service

employs packet signing for node-to-node communications to provide additional
security over the network.

Public Network
The public network connection extends beyond the cluster nodes and is used for
client-to-cluster communication. The public network cannot function as a
backup connection for node-to-node communication should the private network
fail. The network interface cards (NICs) for the public network must be on the
same subnet.
Note

Mixed Network
The typical server cluster will have one NIC on each server that is designated
for internal communications (cluster only), and one or more other NICs
designated for all communications (the mixed network serving both cluster and
client). In such a case, the cluster-only NIC is the primary interconnect, and the
mixed network NIC(s) serves as a backup if the primary ever fails.

Cluster service provides this backup capability only for the primary cluster interconnect. That is, it can use an alternate network for the cluster interconnect if the primary network fails, which prevents an interconnect NIC from being a single point of failure. There are vendors who offer fault tolerant NICs for
Windows 2000 Advanced Server, and you can use these for the NICs that
connect the servers to the client network.

Cluster Disk Requirements
 All Disks on Shared Bus
 Disks can be seen from all nodes

 Basic Disks, not Dynamic
 All Disk Partitions must be NTFS

A shared disk is a resource that all of the nodes in a cluster can access. This
shared disk will be known as the cluster disk after installation of Cluster
service. The cluster disk allows clients to access data even if a node goes
offline. To install Cluster service on Windows 2000 with a properly functioning
cluster disk, you will need to make certain that:
 All of the shared disks, including the quorum disk, are physically attached
to a shared bus or buses.
 Disks attached to the shared bus can be seen from all of the nodes. You can verify this at the host adapter setup level.
 Shared disks are configured as basic, not dynamic.
 All disks must be formatted with NTFS.
 Cluster service supports hardware RAID, not software RAID.


While not required, the use of fault tolerant hardware Redundant Array
of Independent Disks (RAID) is strongly recommended for all cluster disks.

Topic Objective
To describe the function and
requirements of the shared
disk which both nodes of a
server cluster access.
Lead-in
For a cluster to use a shared disk, the disk must meet the following criteria.
Note

Data Storage Requirements
(Slide graphic: example drive-letter assignments.
Node A disk configuration: C: = Local Disk 0; W: and X: = Cluster Disk 1 (two partitions); Y: = Cluster Disk 2 (one partition).
Node B disk configuration: C: = Local Disk 0; D: = Local Disk 1; W: and X: = Cluster Disk 1; Y: = Cluster Disk 2.)

Cluster service is based on the shared nothing model of clustering. In a shared
nothing model, only one node in the cluster can logically control a disk resource
at a time. Having only one node controlling a disk resource at any given time
allows the Windows 2000 cluster file system to support the native NTFS file
system rather than requiring a dedicated cluster-aware file system.
Identical Drive Letters
After you have configured the bus, disks, and partitions, you must assign drive
letters to each partition on each clustered disk. Drive letters for cluster disks and
partitions must be identical on both nodes. For example, suppose Microsoft SQL Server is running on a virtual server that Node A controls, with the SQL database on drive W. If the virtual server fails over to Node B, SQL Server can access data on Node B’s drive W.
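
As a quick preparation check, you could record the drive-letter map for the cluster disks on each node and compare them. The Python sketch below is hypothetical and uses mappings that mirror the example configuration in the slide above; it is only an aid for spotting mismatched letters before installation.

# Hypothetical drive-letter maps for the cluster disks on each node
# (local disks are excluded because only the shared disks must match).
node_a = {"W:": "Cluster Disk 1", "X:": "Cluster Disk 1", "Y:": "Cluster Disk 2"}
node_b = {"W:": "Cluster Disk 1", "X:": "Cluster Disk 1", "Y:": "Cluster Disk 2"}

mismatches = {letter for letter in node_a.keys() | node_b.keys()
              if node_a.get(letter) != node_b.get(letter)}
if mismatches:
    print("Drive letters that do not match between nodes:", sorted(mismatches))
else:
    print("Cluster disk drive letters are identical on both nodes.")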


It is usually best to assign drive letters from the end of the alphabet to
avoid duplicate drive letters.

Topic Objective
To explain the data storage
requirements of a server
cluster.
Lead-in
Every partition’s drive letter
on the cluster disks must be
the same for both nodes in
the cluster.
Note



 Identifying Hardware Considerations
 Selecting a Cluster Service Compatible System
 Routers, Switches, and Hubs
 Network Cards
 Cluster Disk
 Cluster Data Access

You can choose from a wide range of cluster compatible systems that meet the
minimum Cluster service requirements. It is recommended that all of the
hardware be identical for both nodes. Using identical hardware makes
configuration easier and eliminates potential compatibility problems. Microsoft supports only complete cluster configurations that have passed cluster validation testing. Select the Cluster category in the Hardware Compatibility List (HCL) at www.microsoft.com/hcl to view the list of tested and validated configurations.
The HCL also lists various system components, such as small computer system
interface (SCSI) adapters, Fibre Channel adapters, and RAID devices, that have
passed cluster component candidate testing.

Using hardware that is incompatible with Cluster service can result in
a variety of problems, including failure of the nodes to start or to restart. It is
strongly recommended that you use hardware that is on the Windows 2000
Hardware Compatibility List for Cluster service.


Topic Objective
To identify hardware and
configuration considerations
for a cluster.
Lead-in
A server cluster is more
reliant on hardware than
other software applications.
You need to identify the
appropriate hardware to use
for a successful cluster
implementation.
Delivery Tip
Emphasize the message in
the Warning note.


If you have Internet access,
open the HCL Web site.
Warning


Cluster Service Compatible Systems
 Dual PCI Bus
 PCI Disk Controller
 Separate Disk Controller for the shared bus
 PCI Network Adapter
 External Hard Disk
 Cables and Terminators

Each node in a cluster requires all of the hardware requirements for
Windows 2000 Advanced Server, plus the following requirements that are
specific to Cluster service:
 A Dual Peripheral Component Interconnect (PCI) bus
 A PCI disk controller
 A disk controller, separate from the controller used by the operating system, for the shared data bus
 A minimum of one PCI network adapter

ISA network cards are not suitable for use in a cluster because of slow
throughput. Saturation on an ISA card can lead to delays in packet transmission
for up to ten seconds. Cluster service will wait for five seconds before
determining that the other node has failed.


 An external hard disk that is connected to both computers that the cluster
will use as the shared disk
 Cables and terminators to attach the disk to both computers and properly
terminate the bus


The Cluster Verification utility from www.microsoft.com will verify
proper installation of a two-node cluster. However, the tests that are performed
on the cluster will destroy all data on the clustered disks. Do not use this utility
with an established cluster.

Topic Objective
To list the hardware
requirements for Cluster
service.
Lead-in
A server cluster installation
will consist of the following
hardware.
Note
Note

Routers, Switches, and Hubs
(Slide graphic: Node A, IP 10.0.0.1, MAC 00-40-96-32-37-BA; Node B, IP 10.0.0.2, MAC 00-D0-59-12-0F-00; a router, IP 192.168.5.1, MAC 00-D8-60-33-FA; and the virtual server \\Accounting, IP 10.0.0.5. After failover, a gratuitous ARP update changes the MAC address that clients and the router associate with 10.0.0.5.)

A server cluster installation requires routers, switches, and hubs that are compatible with Cluster service. So that clients can reach a virtual server after it fails over, Microsoft Cluster service sends a gratuitous Address Resolution Protocol (ARP) update to the network devices. The ARP update notifies the client of the new media access control (MAC) to IP address association. The clients can then redirect client-to-virtual-server communication to the new controlling node.
You need to verify that hubs and switches can forward the ARP update to clients and routers. However, some devices, such as switches, may not forward the gratuitous ARP request to other devices. Hubs always forward ARP updates. A switch may not forward the update, but you can configure it to do so. A router never forwards the update; however, it needs to be able to accept the update and change its ARP table. It is important to choose and test routers and switches for compatibility prior to implementation of a server cluster.
In this example, Node A is controlling \\Accounting with an IP address of
10.0.0.5. The clients and routers contain an address table of the IP to MAC
relationships. When the virtual server changes ownership, Node B sends out a
gratuitous ARP update to the local subnet. Hubs and switches forward the
information to clients and routers on the local subnet. Routers contain an IP to
MAC address table on behalf of clients on remote subnets; therefore, routers do not forward the information but must be able to accept the ARP update.
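
To make the mechanism concrete, the sketch below builds the gratuitous ARP announcement for this example in Python, using the third-party scapy package (an assumption made for illustration). Cluster service sends this update itself, so you would not normally script it; the addresses are the illustrative ones from the figure, the interface name is hypothetical, and sending raw frames generally requires administrative privileges.

from scapy.all import ARP, Ether, sendp   # scapy is assumed to be installed

# After failover, Node B announces that the virtual server's IP address
# (\\Accounting, 10.0.0.5) is now reachable at Node B's MAC address.
VIRTUAL_IP = "10.0.0.5"
NODE_B_MAC = "00:d0:59:12:0f:00"

frame = Ether(src=NODE_B_MAC, dst="ff:ff:ff:ff:ff:ff") / ARP(
    op=2,                                # an unsolicited ARP reply ("is-at")
    hwsrc=NODE_B_MAC, psrc=VIRTUAL_IP,   # the new MAC-to-IP association
    hwdst="ff:ff:ff:ff:ff:ff", pdst=VIRTUAL_IP)

# Broadcast on the public-network adapter (interface name is hypothetical).
sendp(frame, iface="eth0", verbose=False)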
Topic Objective
To identify considerations
for ensuring that routers,
switches, and hubs in a
server cluster are
compatible with Cluster
service.
Lead-in
Networking devices must be
able to accept an ARP
update after a group
failover.

Network Cards
 Separate Network Cards
 Network Descriptions

 Supported Network Types
 10 BaseT
 100 BaseT
 FDDI Network Card
 Specialized Interconnects

The typical server cluster has one NIC on each server which is designated for
internal, cluster-only communications. One or more other NICs are designated
for all public communications, including a mixed network that can serve both
cluster and client.
Network Descriptions
If the cluster node uses multiple, identical PCI network cards, it may be
difficult to identify them when you run the Cluster service setup. Microsoft
Windows 2000 setup assigns network descriptions to each card. You will need
to identify which network cards the cluster will use for private, public, or mixed cluster networks, and enter descriptions for each.
For example, you would use the Transmission Control Protocol/Internet
Protocol (TCP/IP) Ipconfig utility to display the network driver name with an
index (such as El90x1 and El90x2) and the network card’s IP address and
subnet mask. Using this information, you can then assign appropriate names to
the networks when you run the Cluster service setup. For example, if El90x1
uses a private IP address 10.0.0.1, this address is for node-to-node
communication. If El90x2 uses an IP address on the public network, this
address is for client-to-node communication.
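
One simple way to keep the assignments straight is to record them before you run setup. The Python sketch below is only a hypothetical planning aid; the adapter indexes come from the Ipconfig example above, and the network names and addresses are illustrative assumptions.

# Hypothetical plan mapping each adapter (as reported by Ipconfig) to the
# cluster network it will serve and the static address it should receive.
adapter_plan = {
    "El90x1": {"network": "Private", "use": "node-to-node",   "ip": "10.0.0.1", "mask": "255.255.255.0"},
    "El90x2": {"network": "Public",  "use": "client-to-node", "ip": "10.0.2.4", "mask": "255.255.255.0"},
}

for adapter, plan in adapter_plan.items():
    print(f"{adapter}: {plan['network']} network ({plan['use']}) - {plan['ip']} / {plan['mask']}")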

Two network adapters are recommended so that the nodes of the cluster
can have a private network for node-to-node communications.


Topic Objective
To explain the use of
network cards in a server
cluster.
Lead-in
Server clusters use a variety
of networks to communicate
with other nodes and with
clients.
Note

Supported Network Types
A supported Cluster service configuration can use as its interconnect virtually
any network technology that Windows 2000 Advanced Server supports. This
includes the following:
 10BaseT Ethernet
 100BaseT Ethernet
 Fiber Distributed Data Interface (FDDI) network card
 Specialized interconnect technologies such as Tandem ServerNet and
GigaNet Cluster LAN (cLAN)


Cluster Disks
 Hardware RAID
 Disk Access Speed
 Multiple Shared Bus
(Slide graphic: a single disk compared with a hardware RAID array.)


Applications and services access data that is stored on the cluster disk. In a
multiple server environment, throughput may become a concern on the shared
bus. You need to consider the demand for data from the applications and
services on the cluster to decide whether to implement the following cluster
disk solutions.
Hardware RAID
You could configure hardware RAID on the external disk device. Hardware RAID increases data access performance on the cluster disk. A stripe set, a feature of RAID, also provides faster read/write access to data.
Disk Access
Another alternative is to select a faster disk, realizing that you will then need a
wider bus. A faster disk will provide increased read/write functions for data
access. For example, you may select a wide SCSI with a maximum transfer rate
of 20 megabytes (MB) per second, or you may select an ultra-wide SCSI with a
maximum transfer rate of 40 MB per second.
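
To put the two rates in perspective, a hypothetical back-of-the-envelope calculation follows; it is illustrative arithmetic only, since real throughput also depends on the disks, the bus contention discussed below, and the workload.

# Rough time to transfer 2 gigabytes at each bus's maximum transfer rate.
data_mb = 2 * 1024
for bus, rate_mb_per_s in (("Wide SCSI", 20), ("Ultra-wide SCSI", 40)):
    print(f"{bus}: about {data_mb / rate_mb_per_s:.0f} seconds to transfer {data_mb} MB")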
Multiple Shared Bus
If you have a number of disks on the same, shared bus, you may find a decrease
in performance of read/write data access. In an active/active cluster
environment, both nodes access disks on the same shared bus. To increase
read/write performance, consider moving some disks to a separate shared bus.

Cluster service does not support the use of software RAID for the cluster
disks.

Topic Objective
To identify possible problems and solutions with throughput, because server cluster applications and services are on the cluster disk.
Lead-in
Cluster disks could have
contention of the shared bus
from different nodes on the
cluster. You can improve
performance by
implementing RAID, a faster
disk, or another bus.
Note

Cluster Data Access
 SCSI Requirements for Cluster Service
 Fibre Requirements for Cluster Service

Cluster service accesses data that is shared by either a SCSI bus or a Fibre
Channel bus. Based on the amount of throughput that is needed for your cluster,
you will need to decide which will meet your requirements.
                             SCSI                        Fibre Channel
Cost                         Lower cost                  Increased cost
Configuration                Difficult                   Easy
Storage Area Network
(SAN)-capable                Not supported               Supported
Hardware Requirements        No specialized equipment    Specialized equipment
Transfer Rate Performance    160 MB per second           266 MB per second
Optical Cables               N/A                         10 km (maximum)
Copper Cables                25 meters                   100 meters

Topic Objective
To compare SCSI and Fibre
benefits and requirements
for cluster data access.
Lead-in
Server clusters can use
SCSI or Fibre for the shared
bus. Fibre is preferred
because of greater I/O and
configuration.


SCSI Requirements for Cluster Service
A PCI SCSI card must have the following features for cluster communications:
 The ability to turn off autobus reset.
 The ability to configure a unique SCSI Controller ID.
 A unique ID for all of the devices on the SCSI bus.

A shared SCSI bus has two controllers, and each controller must have a
different SCSI ID. The default SCSI ID is 7 or 15. You will need to change one
SCSI ID to a unique number on the shared bus.

 Termination at both ends of the bus.



The SCSI card and the last disk in the SCSI chain usually handle
termination within a computer. On an external shared SCSI bus, a resistor
provides termination. As a best practice, you can use a SCSI Y cable to provide
access to a terminator, the SCSI card, and the shared disks. Using a SCSI Y
cable allows you to disconnect the card from the bus in case you need to take
the computer offline or have it serviced.

Fibre Channel Requirements for Cluster Service
A Fibre Channel solution requires the following components for cluster communications:
 A PCI Fibre Channel card or Host Bus Adapter (HBA)
 Fibre switch or Fibre hub
 Storage compatible disk arrays
 Fibre Channel RAID array

Note
Note

Assigning IP Addresses Within a Cluster
(Slide graphic: example addressing. Public network - Node A 10.0.2.4, Node B 10.0.2.5, cluster virtual server 10.0.2.7, each with subnet mask 255.255.255.0. Private network - Node A 10.0.1.5, Node B 10.0.1.6, each with subnet mask 255.255.255.0.)

Cluster service uses TCP/IP to communicate with either public or private
networks and requires statically assigned IP addresses.
Cluster service requires a minimum of three addresses from the public address
space to configure a cluster. You assign an IP address to the public NIC on each
node for clients to communicate directly to that node, independent of the
cluster. You assign the third address to the cluster as a virtual server, primarily
for client access. The private NIC in each cluster node is also assigned an IP
address. Because you use this interface as a private subnet between cluster
nodes, you do not need to assign private addresses from the public address pool.

You may require additional addresses depending on the nature of the
applications that are being served. Each virtual server on the cluster will have
its own unique IP address.
An important feature of using IP addresses for virtual servers is the ability to
fail over an IP address between nodes. The virtual server IP address remains the
same after failover. It is released from the original node and bound to the
public network card on the new node.
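
As a planning aid, the following Python sketch lists the five minimum addresses from the example above and uses the standard ipaddress module to confirm that the public addresses share one subnet while the private addresses sit on a separate one. The subnets are the illustrative values from the figure, not requirements of Cluster service itself.

import ipaddress

# The five minimum static addresses from the example (mask 255.255.255.0 throughout).
public = {"Node A public": "10.0.2.4", "Node B public": "10.0.2.5", "Cluster": "10.0.2.7"}
private = {"Node A private": "10.0.1.5", "Node B private": "10.0.1.6"}

public_net = ipaddress.ip_network("10.0.2.0/24")
private_net = ipaddress.ip_network("10.0.1.0/24")

for name, addr in public.items():
    assert ipaddress.ip_address(addr) in public_net, f"{name} is not on the public subnet"
for name, addr in private.items():
    assert ipaddress.ip_address(addr) in private_net, f"{name} is not on the private subnet"

print("All five addresses fall on their intended subnets.")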
Topic Objective
To determine appropriate IP
addresses within a cluster.
Lead-in
Cluster service requires
static IP addresses for each
node and all of the virtual
servers.

Static IP Addresses
Cluster service does not support the use of dynamically assigned IP addresses from a Dynamic Host Configuration Protocol (DHCP) server for the Cluster service or for any IP address resources. However, you can use either static IP addresses or IP addresses that are permanently leased from a DHCP server. Because a DHCP server failure can cause the loss of even a permanently reserved address, using static IP addresses is recommended to ensure the highest degree of availability.
If you are configuring the cluster’s private network, consider assigning
addresses from one of the private networks that the Internet Assigned Numbers
Authority (IANA) defines:
 10.0.0.0–10.255.255.255 (Class A), Subnet Mask: 255.0.0.0
 172.16.0.0–172.31.255.255 (Class B), Subnet Mask: 255.255.0.0

 192.168.0.0–192.168.255.255 (Class C), Subnet Mask 255.255.255.0


It is recommended that you use a static TCP/IP address on the
cluster private network. Cluster service can use the automatic private addressing
feature of Windows 2000 but this feature can slow down the startup time of a
server.

Important

Assigning Names Within a Cluster
(Slide graphic: a client can use \\NodeA to manage Node A, \\Cluster to manage the cluster, and \\Accounting to access the virtual server. \\NodeB is the name of the second node.)

When installing Cluster service, there are three types of names that you will
assign to each part of the server cluster. All of the names must conform to
standard network basic input/output system (NetBIOS) naming conventions.
You will need to assign a Cluster service NetBIOS name to each of the
following components:
 Node Name. Each node of the cluster needs a server name or node name for
management of the server. Each node is listed in WINS, Dynamic DNS, and Active Directory Users and Computers.
 Cluster Name. The cluster name refers to the first virtual server created
during the installation of Cluster service. The network administrator uses the
cluster name to configure and administer the cluster. If the network
administrator has configured each node to use WINS and Dynamic DNS,
the cluster name will also be registered on the network. Cluster names are
not listed in Active Directory Users and Computers.
 Virtual Server Name. Each virtual server needs a name that clients use to
gain access to resources on virtual servers. If the network administrator has
configured the node to use WINS and DDNS, each virtual server name will
also be listed. Virtual server names are not listed in Active Directory Users
and Computers.



In a Windows Internet Naming Service environment, Node Names,
Cluster Names, and Virtual Server Names register their names with the WINS
server for NetBIOS name resolution.

Topic Objective
To determine user and
computer account
considerations when
installing a cluster.
Lead-in
There are three main types
of names in a server cluster:
The Node Name, the
Cluster Name, and the
Virtual Server Name. All of
the names are NetBIOS
names; however Dynamic
DNS can use them.
Note

Each NetBIOS name within a network must be unique and is 16 characters in length. Microsoft networking
components, such as Windows 2000 and Windows 2000 Server services, allow
the user or administrator to specify the first 15 characters of a NetBIOS name,
but reserve the 16th character of the NetBIOS name to indicate a resource type
(00-FF hex). You can register names as unique (one owner) or as group
(multiple owners) names.
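
Because only the first 15 characters of a NetBIOS name can be chosen freely, it is worth validating planned node, cluster, and virtual server names before installation. The Python sketch below is a hypothetical helper for that check; the example names come from the figure earlier in this topic.

def check_netbios_names(names):
    # Flag planned names that exceed the 15 user-assignable characters or that are duplicated.
    problems, seen = [], set()
    for name in names:
        if len(name) > 15:
            problems.append(f"'{name}' is longer than 15 characters")
        if name.upper() in seen:
            problems.append(f"'{name}' is not unique")
        seen.add(name.upper())
    return problems

planned = ["NodeA", "NodeB", "Cluster", "Accounting"]  # node, cluster, and virtual server names
for line in check_netbios_names(planned) or ["All planned names are valid."]:
    print(line)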
