2 Best Practices at Planning, Prototyping, Migrating, and Deploying Windows Server 2008 Hyper-V
. Create a scope of work detailing the servers that you want to virtualize.
. Define high-level organizational goals.
. Define departmental goals.
. Determine which components and capabilities of the network are most important
and how they contribute to or hinder the goals expressed by the different units.
. Clearly define the technical goals of the project on different levels (50,000-foot,
10,000-foot, 1,000-foot, and so on).
The Discovery Phase
. Review and evaluate the existing environment to make sure the network foundation
in place will support the new virtualized environment.
. Make sure the existing environment is configured the way you think it is, and iden-
tify existing areas of exposure or weakness in the network.
. Define the current network stability and performance measurements and operation.
. Use external partners to produce more thorough results and predict the problems
that may emerge midway through a project and become “showstoppers.”
. Start the discovery process with onsite interviews.
. Review and evaluate every affected device and application to help determine its role
in the new environment.
. Maintain and protect business-critical information.
. Determine where data resides, what file stores and databases are out there, how the
data is maintained, and whether it is safe.
The Design Phase
. Create a design document including the salient points of the discussion, the reasons
the project is being invested in, the scope of the project, and the details of what the
results will look like.
. Create a migration document providing the road map showing how the end state
will be reached.


. Use a consultant with hands-on experience designing and implementing Windows
2008 Hyper-V virtualization to provide leadership through this process.
. Determine what hardware and software will be needed for the migration.
. Determine how many server software licenses will be required.
. Detail the level of redundancy and security that is required and that the solution will ultimately provide.
. Present the design and migration documents to the project stakeholders for review.
The Migration Planning Phase
. Create a migration document containing the details of the steps required to reach
the end state with minimal risk or negative impact to the network environment.
. Create a project plan that provides a list of the tasks, resources, and durations
required to implement the solution.
The Prototype Phase
. Create a lab environment in which the key elements of the design as defined in the
design document can be configured and tested.
. Isolate the lab environment from the production network so that any problems
created or encountered in the process don’t affect the user community.
. Thoroughly test all applications in a virtual environment.
The Pilot Phase
. Identify the first group of servers that will be moved to the new Windows 2008
Hyper-V virtual environment. Servers that are already redundant and have limited
failure points should be chosen first.
. Clarify a rollback strategy, just in case unexpected problems occur.
. Test the disaster-recovery and redundancy capabilities thoroughly.
. Fine-tune the migration processes and nail down time estimates.
The Migration/Implementation Phase

. Verify that applications have been thoroughly tested, administrators and support
personnel have been trained, and common problem resolution is clearly docu-
mented.
. Conduct a check of end-user satisfaction.
. Allocate time to verify ongoing support and maintenance of the new environment,
before migrating the last servers into the new virtualized networking environment.
. Plan a project-completion party.
3 Planning, Sizing, and Architecting a Hyper-V Environment
IN THIS CHAPTER
. Logically Distributing Virtual Servers on Specific Host Systems
. Choosing Servers to Virtualize
. Capturing the Workload Demands of Existing Servers
. Analyzing the Workload Demands of Existing Servers
. Choosing the Hyper-V Host System Environment
. Sizing a Hyper-V Host System Without Existing Guest Data

Whereas Chapter 2, “Best Practices at Planning,
Prototyping, Migrating, and Deploying Windows 2008
Hyper-V,” focused on the project management process for a
migration of physical servers to virtual servers, this chapter
focuses on the technical assessment of existing physical
servers and the host server sizing that is needed to prepare a
virtual host environment. Instead of just randomly virtual-
izing physical servers onto host systems sequentially, orga-
nizations can better utilize host server hardware systems by
technically assessing the server loads of existing physical
servers and logically placing them on host servers to
balance virtual guest sessions.
Logically Distributing Virtual
Servers on Specific Host Systems
Moving physical servers to virtualized host servers is not a
process that should be done randomly. A fine balance exists
between the distribution of server workloads, the distribu-
tion of servers for redundancy and fault tolerance, and the
distribution of servers for application performance and user
connectivity.
Distributing Virtual Servers Based on
Workload
Some server sessions are processor intensive (for example,
index servers, transaction-analysis servers), whereas some
server sessions are I/O intensive (for instance, file servers,
messaging servers). Putting several processor-intensive
server sessions on a single host can overload the processing
capabilities of the server, whereas balancing host servers with some processor-intensive
server workloads with some I/O-intensive server workloads can better extend the capabili-
ties of a host system.
The variables and constraints of workload on a server can be technically categorized as follows (a brief classification sketch in code follows this list):
. Processor workload—This refers to the demands a guest session places on the
processor, typically from applications that do calculations or analysis of information.
All applications use the processor of a server; some can get away with sharing a
processor with other server sessions, whereas other server sessions require the dedica-
tion of one, two, or four processors to properly allocate processor capabilities to the
guest session. Key in evaluating processor workload is to look at sustained processor
workload versus burst workload. Some applications use a lot of processing speed, but
only to do periodic reports or transactions, which might be an end-of-day posting of
information or a month-end or quarter-end task. Differentiate between sustained
workload and periodic workload so that you don’t allocate two or four processors to
a session when the processor transaction occurs only once a month.
. Disk I/O workload—This refers to the demands a guest session places on the disk
for reading and writing of information. In the normal processing of information, a
guest session may read and write information periodically to disk. For some applica-
tions, however, the guest session is constantly reading information, fetching data to
place in cache, or writing transaction logs, data, or both simultaneously in the
management of disk information. For guest sessions with high disk I/O workloads,
you can assign dedicated disks to the virtual guest session rather than share a
common disk storage system. If you dedicate the disk location, reading and writing
of information should not cause an application to bottleneck (and thus you avoid
degradation of performance of all other virtual guest sessions on the host server).
. Network I/O workload—This refers to the demands a guest session places on a
network adapter from sending and receiving data to other servers or systems on the
network. Applications that are gateways or frontend servers to backend data stores may have significant network I/O because all traffic passes through a specific system.
Guest sessions with significant network I/O can cause all the guest sessions to slow
down if all the guest sessions share a single network adapter in the host system. By
identifying the guest session that has high network I/O workloads, an administrator
can add an additional network adapter to the server and dedicate the network
adapter for a given guest session. Doing so allows the isolation of traffic from the
guest session out to a network switch, and offloads the workload data from the
shared host server resource. Another strategy for managing network I/O workload is to create a dedicated virtual switch within a Hyper-V host server, so that communications between guest sessions travel on a dedicated path that is isolated from the network communications of other guest sessions. By creating a virtual network switch between servers within a host that need to communicate with each other, you can greatly enhance communications between those servers, perhaps even exceeding the speeds of traditional Gigabit Ethernet, because the Hyper-V virtual network switch can communicate at native system bus speeds.
. Guest session RAM requirements—Some virtual guest sessions have a high demand
for memory allocated to the guest session, whether that’s 8GB or 16GB or 32GB for
the session. Many applications use whatever available memory is given to the ses-
sion to load data into RAM and cache the data to provide higher transaction fetch
rates of the information when an application requires access to the information. For
these applications, there appears to be no limit on how much memory the applica-
tion requires; it uses whatever is available. It is important to test these applications
to determine whether an optimal amount of memory can be allocated that provides
a flatline return on performance. For example, an application may perform twice as
fast with 4GB of memory as with 2GB, but the same application gains no incremental improvement at 8GB or 16GB. These applications can then be capped at 4GB for the guest session, allowing any additional memory to be used for other guest sessions.
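As a rough illustration of this categorization, the following Python sketch tags a server's workload from sampled performance counters (for example, values exported from Performance Monitor) and estimates a memory cap from throughput measurements at different RAM sizes. The counter names, thresholds, and figures are illustrative assumptions, not values prescribed by Hyper-V or by this chapter.

# Hypothetical sketch: classify a server's workload from sampled counters and
# find the point where additional RAM stops paying off. All values assumed.
from statistics import mean
from typing import Dict, List

def classify_workload(samples: List[Dict[str, float]]) -> List[str]:
    """Each sample is one measurement interval, for example
    {'cpu_pct': 35, 'disk_iops': 120, 'net_mbps': 4}."""
    cpu = sorted(s["cpu_pct"] for s in samples)
    sustained_cpu = mean(cpu)                    # typical load over the window
    burst_cpu = cpu[int(len(cpu) * 0.95)]        # 95th-percentile (burst) load
    tags = []
    if sustained_cpu > 40:
        tags.append("processor-intensive (sustained)")
    elif burst_cpu > 80:
        tags.append("processor bursts only (periodic jobs)")
    if mean(s["disk_iops"] for s in samples) > 500:
        tags.append("disk I/O-intensive (consider dedicated disks)")
    if mean(s["net_mbps"] for s in samples) > 100:
        tags.append("network I/O-intensive (consider a dedicated NIC or virtual switch)")
    return tags or ["light workload (good consolidation candidate)"]

def find_memory_cap(throughput_by_gb: Dict[int, float], min_gain: float = 0.10) -> int:
    """Smallest RAM allocation beyond which measured throughput improves by
    less than min_gain (10 percent), that is, where the return flattens out."""
    sizes = sorted(throughput_by_gb)
    for prev, cur in zip(sizes, sizes[1:]):
        if throughput_by_gb[cur] < throughput_by_gb[prev] * (1 + min_gain):
            return prev
    return sizes[-1]

# Matches the example in the text: twice as fast at 4GB as at 2GB, flat beyond.
print(find_memory_cap({2: 100, 4: 200, 8: 210, 16: 212}))   # prints 4

In practice, the samples would come from the longer capacity-planning capture described later in this chapter in the sections on capturing and analyzing the workload demands of existing servers.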
Distributing Virtual Servers Based on Redundancy
When choosing to distribute virtual guest sessions across virtual host servers, taking redundancy and high availability into account helps in deciding which guest server sessions to place on which host servers. As an example, placing both the primary cluster server and a passive backup cluster server on the same host system nullifies the benefits of clustering if the host server fails and both cluster nodes are brought offline. Placing cluster pairs across two host servers, as shown in Figure 3.1, ensures that if a guest session fails, the workload remains operational on the other node of the cluster pair, and that if a host server fails, the surviving cluster node on the separate host server system keeps the workload running. (A simple placement sketch follows Figure 3.1.)
FIGURE 3.1 Distributing servers to split systems across separate hosts for reliability
purposes.
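To make the placement rule concrete, the following Python sketch spreads guest sessions across hosts while refusing to co-locate members of the same cluster pair; otherwise each guest simply goes to the least-loaded host. This is a hypothetical sketch under assumed names and data structures, not a Hyper-V or Failover Clustering API.

# Hypothetical sketch: least-loaded placement with an anti-affinity rule so
# that members of the same cluster pair never share a host. Names are examples.
from typing import Dict, List, Optional

def place_guests(guests: List[Dict], hosts: List[str]) -> Dict[str, List[str]]:
    placement: Dict[str, List[Dict]] = {h: [] for h in hosts}
    for guest in guests:
        group: Optional[str] = guest.get("cluster_group")
        # Try hosts from least loaded to most loaded.
        for host in sorted(placement, key=lambda h: len(placement[h])):
            partner_here = group is not None and any(
                g.get("cluster_group") == group for g in placement[host])
            if not partner_here:
                placement[host].append(guest)
                break
        else:
            raise RuntimeError(f"no host can take {guest['name']}")
    return {h: [g["name"] for g in sessions] for h, sessions in placement.items()}

# Example: a clustered pair plus a standalone guest across two hosts.
guests = [
    {"name": "Cluster-NodeA", "cluster_group": "file-cluster"},
    {"name": "Cluster-NodeB", "cluster_group": "file-cluster"},
    {"name": "PrintServer", "cluster_group": None},
]
print(place_guests(guests, ["HostA", "HostB"]))
# {'HostA': ['Cluster-NodeA', 'PrintServer'], 'HostB': ['Cluster-NodeB']}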
FIGURE 3.2 Frontend/backend server interrelationships.
Distributing Virtual Servers Based on Server Interrelationships
When analyzing servers to determine where to logically place guest server sessions, look
beyond just server performance demands. Also look at how servers interact with each
other. In many applications, a frontend server and a backend server make up the client
connection portion and the database portion of the application (for instance, Exchange,
Office Communication Server, SharePoint), as shown in Figure 3.2. The frontend and
backend pair are directly associated with each other, so from a redundancy standpoint, if either is offline, the application doesn't operate. Therefore, splitting the pair across two hosts provides no benefit, because the application doesn't work unless both servers in the pair are operational.
By placing the two servers on the same virtual host system and then establishing a virtual switch that allows the two applications to communicate directly with each other inside the virtual host system, you can greatly improve the communications between the frontend and the backend server. Likewise, an Exchange server communicates regularly with a global catalog server to query distribution lists, email address lists, and the like. By placing a global catalog guest session on the same host server as an Exchange server, you can greatly improve the communications between the application server and the directory server.
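The same placement logic can be turned around into an affinity rule: servers that depend heavily on each other, such as a frontend and its backend or an application server and the global catalog it queries, can be grouped so that each group lands on one host and communicates over a private virtual switch. The sketch below is hypothetical; the server names and the grouping helper are illustrative and not part of Hyper-V.

# Hypothetical sketch: group interdependent servers so each group can be kept
# on one host behind a private virtual switch. Names are examples only.
from collections import defaultdict
from typing import Dict, List, Tuple

def group_by_dependency(servers: List[str],
                        depends_on: List[Tuple[str, str]]) -> List[List[str]]:
    """Union-find grouping: servers that communicate heavily end up together."""
    parent = {s: s for s in servers}

    def find(s: str) -> str:
        while parent[s] != s:
            parent[s] = parent[parent[s]]   # path compression
            s = parent[s]
        return s

    for a, b in depends_on:
        parent[find(a)] = find(b)

    groups: Dict[str, List[str]] = defaultdict(list)
    for s in servers:
        groups[find(s)].append(s)
    return list(groups.values())

# Example: keep the Exchange frontend, its backend, and a global catalog
# server together; the standalone web server stays in its own group.
servers = ["EX-Frontend", "EX-Backend", "GC01", "Web01"]
deps = [("EX-Frontend", "EX-Backend"), ("EX-Backend", "GC01")]
print(group_by_dependency(servers, deps))
# [['EX-Frontend', 'EX-Backend', 'GC01'], ['Web01']]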
Distributing Virtual Servers Based on User Connectivity
Other factors to consider when deciding where to place virtual guest sessions include user connectivity and where the users who need access to the host servers reside. If a physical server is in a remote site close to users and is then virtualized and centralized in a data center on the other side of a WAN connection, the performance between the users and the virtualized server needs to be taken into account. Although virtualization might be a good business decision to remove servers from remote locations to simplify administration and management, the performance or reliability of information access across an unreliable or slow WAN link could significantly and negatively impact users accessing the servers.
During the assessment process, identify where users are and how they interact with the servers, as illustrated in Figure 3.3. As you can see in this figure, a link is maintained between users and the data they access. Virtualize the server and centralize the system, but make sure to consider user access to the resource in the process.
FIGURE 3.3 Maintaining links between users and user data.
Distributing Virtual Servers Across a WAN Connection
With regard to the virtualization process, many believe that migrating physical servers to
virtual guest sessions in a consolidation process means that the host servers must be
centralized in a single data center. However, if users are in remote locations, servers might
need to be distributed closer to the remote users. Therefore, a virtual host system can be
brought up in a remote location with physical servers in that remote location virtualized
in the remote host system.
A remote host system can also be used as a backup to a host server in a main data center
location so that stretch clusters can be established between guest sessions in host servers
in separate locations. Figure 3.4 shows this distribution of host servers across WAN
connections; such a distribution can provide redundancy, fault tolerance, and disaster
recovery of servers and applications for the enterprise.
FIGURE 3.4 Distributing servers across a WAN for redundancy purposes.
Choosing Servers to Virtualize
When choosing to virtualize guest sessions, deciding which applications are the best candi-
dates for virtualization is a key factor. Not all server applications can or should be virtual-
ized. That’s not to say, however, that an organization can’t choose to virtualize 100% of
their servers if desired. The key to choosing servers for virtualization is to first pick the
servers that make perfect sense to virtualize, and then make the more difficult decisions
about virtualizing other server systems.
Prioritizing Servers to Virtualize

As mentioned previously, some servers are prime candidates for virtualization, such as servers that have low system resource utilization or environments where multiple servers exist for sheer redundancy and recoverability. Other server systems that have high processor demands and excessive disk and network I/O requirements may not be the best servers to virtualize; during the physical-to-virtual server migration process, these servers may be the ones chosen for second-round migration.
The process of converting physical servers to virtual servers takes several days, if not
weeks, depending on the number of servers an organization has. Therefore, the organiza-
tion should create a priority list and stage the migration in a logical manner.
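One way to stage that priority list is to score each candidate from the assessment data gathered earlier: low average utilization and built-in redundancy pull a server into the first round, while sustained heavy resource use pushes it to a later round. The Python sketch below is purely illustrative; the weights, thresholds, and server names are assumptions, not recommendations from this chapter.

# Hypothetical sketch: a first-pass priority ordering for physical-to-virtual
# migration candidates. Weights and thresholds are illustrative assumptions.
from typing import Dict, List

def prioritize(servers: List[Dict]) -> List[Dict]:
    """Order migration candidates: a higher score means migrate sooner."""
    def score(s: Dict) -> float:
        # Sustained resource use counts against early migration.
        util_penalty = (s["avg_cpu_pct"]
                        + s["avg_disk_iops"] / 50
                        + s["avg_net_mbps"] / 10)
        redundancy_bonus = 25 if s["redundant"] else 0
        return redundancy_bonus - util_penalty
    return sorted(servers, key=score, reverse=True)

candidates = [
    {"name": "DHCP01", "avg_cpu_pct": 3, "avg_disk_iops": 10,
     "avg_net_mbps": 1, "redundant": True},
    {"name": "SQL01", "avg_cpu_pct": 70, "avg_disk_iops": 900,
     "avg_net_mbps": 60, "redundant": False},
]
for s in prioritize(candidates):
    print(s["name"])   # DHCP01 first; SQL01 deferred to a later round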
In many instances, the priority may be to virtualize a physical server that is failing. Make
sure, however, that the rush to evacuate a server off faulty hardware into a virtual environ-
ment doesn’t create more problems for the organization. Such a quick migration might
not factor in whether the application works well in a virtualized environment, or whether
the system resource demands of the application really suggest that the application should
have instead been migrated off one physical server onto a new physical server.
Candidates for Immediate Virtualization to Guest Sessions
When organizations are prioritizing servers for virtualization, as noted, many server
systems make perfect sense to virtualize. Server roles that are typically simple decisions to
virtualize include the following:
. DHCP servers—The Dynamic Host Configuration Protocol (DHCP) server assigns
IPv4/IPv6 network addresses to devices on the network. Most organizations have at least one DHCP server, if not several, both for redundancy and to associate different IP addresses with different groups of users. DHCP servers, however, rarely run at more than 5% utilization and are prime candidates for server virtualization (a rough consolidation estimate follows this list).
. DNS servers—The domain name system (DNS) maintains a list of network servers

and systems and their associated IP addresses. A DNS server is queried and responds
with information. In general, however, DNS servers, like DHCP servers, rarely have
more than 5% utilization. And because DNS servers are so critical in resolving server
names and addresses, organizations generally have several DNS servers for redun-
dancy. These systems are perfect candidates for virtualization.
. Network policy servers—Network policy servers keep track of the policies required
to allow users access to certain network resources, or they may maintain a list of
users authorized to access specific network resources remotely. Remote
Authentication Dial-in User Service (RADIUS) servers are a form of policy server, and
with Windows Server 2008, Microsoft has introduced a server called the Network
Policy Server (NPS). The NPS performs centralized connection authentication, autho-
rization, and accounting for many types of network access, including wireless and
virtual private network (VPN) connections. Because these policy servers are queried
only when a policy requires validation, the demands on policy servers are pretty
limited; they are therefore good candidates for virtualization.
. Web servers—As more and more technologies become web aware and have web
frontend interfaces for user access, enterprise web servers have proliferated. And
because many Microsoft web-based frontend servers don't work well when combined, each frontend web server needs to be on its own server session. This multitude of web frontend servers can be virtualized and hosted on a limited number of
virtual host systems. In this way, you combine the web servers without forcing the
web applications to share the same guest session; instead, those guest sessions share
the same host server system as dedicated virtual guests.
. Certificate and Rights Management servers—As with network policy servers, cer-
tificate servers and rights management servers are queried when certificates are
required or when certificate or rights management policies are requested. Other than
at those limited times, the certificate server or rights management server remains
idle. Hence, certificate servers and rights management servers are prime candidates
for virtualization.
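As a rough sense of why such low-utilization roles consolidate so well, the following back-of-the-envelope Python sketch estimates how many similar guests fit on a host by CPU alone. The 60 percent CPU target and the example figures are assumptions for illustration; real sizing must also account for memory, disk I/O, and network I/O, as discussed later in this chapter.

# Hypothetical back-of-the-envelope consolidation estimate for low-utilization
# roles such as DHCP, DNS, and policy servers. All numbers are illustrative.
def guests_per_host(host_cores: int, guest_vcpus: int, avg_guest_cpu_pct: float,
                    target_host_cpu_pct: float = 60.0) -> int:
    """Rough count of similar guests a host can take before hitting a CPU target."""
    cpu_per_guest = guest_vcpus * (avg_guest_cpu_pct / 100.0)   # core-equivalents
    cpu_budget = host_cores * (target_host_cpu_pct / 100.0)
    return int(cpu_budget / cpu_per_guest)

# A 16-core host and 1-vCPU guests averaging 5% CPU allow on the order of
# 190 guests by CPU alone; memory, disk, and network limits cap it far lower.
print(guests_per_host(host_cores=16, guest_vcpus=1, avg_guest_cpu_pct=5))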
Secondary Candidates for Virtualization to Guest Sessions

A number of application services can be virtualized. These services will be different for
every organization, and so the decision to virtualize these servers must be organization
specific. In general, however, the secondary candidates for virtualization to guest sessions
include the following:
. File servers—Most organizations have a lot of data stored on file servers, but the
reality is that user access to the file servers is an occasional read and write of files