
358  Advanced Server Virtualization
between virtual network adapters connected to that virtual switch. There are three virtual machines configured on this ESX Server. Virtual Machine 1 has one virtual network adapter that is connected to Virtual Switch 0. Network traffic from Virtual Machine 1 will be routed through Virtual Switch 0 and on through vmnic0, if necessary. Note that Virtual Switch 1 does not have a virtual network adapter connected. No traffic will be present on Virtual Switch 1 and therefore no traffic will be present on vmnic1, the physical network adapter to which Virtual Switch 1 is bound. Virtual Machine 2 has two virtual network adapters installed. The first virtual network adapter, Ethernet0, is connected to Virtual Switch 2 and the second virtual network adapter, Ethernet1, is connected to Virtual Switch 3. Virtual Machine 3 has a network configuration identical to that of
[Figure 17.13 VMware ESX Server Networking Components. The figure shows the physical server hardware layer (vmnic0 through vmnic4, eth0, bond0, and a physical network switch) and the VMware ESX Server virtualization layer (the Service Console, Virtual Machines 1 through 3 with their Ethernet0/Ethernet1 adapters, Virtual Switches 0 through 3, and the vmnet_0 private network).]
Marshall_AU3931_C017.indd 358 4/13/2006 1:41:50 PM
Virtual Machine 2. These two virtual machines must be important because they both have their external network connections bound to Virtual Switch 2. Virtual Switch 2 is bound to bond0, which aggregates three physical network adapters. Bond0 could lose up to two physical network adapters without affecting the connectivity to Virtual Machine 2 and Virtual Machine 3. The second virtual network adapter in each of Virtual Machine 2 and Virtual Machine 3 is connected to Virtual Switch 3, which is a private network because Virtual Switch 3 is not bound to any physical network adapters in the server. Virtual Machine 2 and Virtual Machine 3 can communicate with each other over this link. It is possible that these two virtual machines are clustered and need a private network link in order to monitor each other. This diagram shows a fairly complex networking scenario that is possible with ESX Server, although even more complex configurations are possible.
MAC Addresses
Virtual network adapters can have one or more IP addresses assigned to them. This is completely configured and controlled by the virtual machine's guest operating system. Virtual network adapters must also have a MAC address, just as a physical network adapter does. Physical network adapters have a globally unique MAC address permanently assigned to each card. Because virtual network adapters are created in software, their MAC addresses cannot be permanently assigned as in a physical network adapter. Instead, the MAC address is a configurable value assigned to each virtual network adapter either dynamically by ESX Server or statically by an administrator. Dynamic MAC addresses are automatically generated by ESX Server; static MAC addresses must be configured explicitly by an administrator for each virtual network adapter that requires one. The value of a virtual network adapter's MAC address is stored within the virtual machine's configuration file. If a virtual machine has not been explicitly configured to use a static MAC address for a virtual network adapter, the virtual network adapter will have a dynamically generated MAC address assigned to it. There are three keyword/value pairs in the virtual machine's configuration file that specify the dynamically generated MAC address. They are as follows:

Ethernet<id>.addressType = "generated"
Ethernet<id>.generatedAddress = "00:0c:29:1e:aa:94"
Ethernet<id>.generatedAddressOffset = "0"

The <id> token represents the id of the specific virtual network adapter. For virtual machines that have only one virtual network adapter, <id> usually equals 0 (Ethernet0). If a virtual machine has more than one virtual network adapter, the <id> of each virtual network adapter is incremented by 1. A virtual machine
configured with two virtual network adapters will have a set of entries for Ethernet0 and another set of entries for Ethernet1 in its configuration file.

The Ethernet<id>.addressType keyword/value pair defines the type of MAC address that is assigned to the virtual network adapter. This keyword/value pair is used for dynamically generated MAC addresses and for static MAC addresses. If a dynamically generated MAC address is being used, the value is "generated." If a static MAC address is being used, the value is "static."

The Ethernet<id>.generatedAddress keyword/value pair contains the actual MAC address that has been dynamically generated and assigned to the virtual machine. This keyword/value pair is created automatically upon powering on the virtual machine when the Ethernet<id>.addressType keyword has a value of "generated." In ESX Server, dynamically generated MAC addresses always use 00:0C:29 as the first three bytes of the MAC address value. This is one of two Organizationally Unique Identifiers (OUIs) assigned to VMware for use with virtual MAC addresses. VMware's other OUI, 00:50:56, is used for static MAC addresses.
The Ethernet<id>.generatedAddressOffset keyword/value pair is also required when using dynamically generated MAC addresses, and its value is usually zero. This keyword/value pair is created automatically upon powering on the virtual machine when the Ethernet<id>.addressType keyword has a value of "generated." This value is the offset used against the virtual machine's UUID (Universally Unique Identifier) when generating MAC addresses.

ESX Server uses an algorithm for generating dynamic MAC addresses that attempts to create MAC address values that are unique not only within a single ESX Server, but also across ESX Servers. Each virtual machine has a keyword/value pair in its configuration file named uuid.location. This keyword/value pair contains the virtual machine's UUID, which is a 128-bit (16-byte) numeric value that is universally unique within its given context. This means that no other virtual machine will have the same UUID, even across multiple ESX Servers around the world. UUID (also referred to as GUID, for Globally Unique Identifier) generation is very common in many computing scenarios where an object should have a unique name across all space and time. In ESX Server, each virtual machine's UUID is based in part on the absolute path to the virtual machine's configuration file and on the ESX Server's SMBIOS UUID. If a conflict occurs when generating a dynamic MAC address on a single ESX Server, the Ethernet<id>.generatedAddressOffset value is incremented and the algorithm generates a new MAC address. This iterative process repeats until a unique MAC address is generated. In almost all cases, a unique MAC address is generated on the first attempt. It is important to note that ESX Server cannot check for conflicting MAC addresses across multiple ESX Servers.
Instead of relying on ESX Server to create unique MAC addresses, it is possible to configure a static MAC address for each virtual network adapter. Static MAC addresses must be explicitly configured by an administrator.
Static MAC addresses in ESX Server must use the VMware OUI, 00:50:56, as the first three bytes of the static MAC address value. This is in stark contrast to Microsoft Virtual Server, which allows any MAC address value to be used without restrictions. Furthermore, ESX Server limits the range of allowable values for the fourth byte of static MAC addresses to the range 00 to 3F. Static MAC addresses must therefore be within the following range: 00:50:56:00:00:00 to 00:50:56:3F:FF:FF.
To configure a virtual network adapter to use a static MAC address, the virtual machine's configuration file must be edited as follows.
• Remove the following keyword/value pairs:

Ethernet<id>.generatedAddress
Ethernet<id>.generatedAddressOffset

• Edit the following keyword/value pair:

From:
Ethernet<id>.addressType = "generated"
To:
Ethernet<id>.addressType = "static"

• Add the following keyword/value pair:

Ethernet<id>.address = "<mac>"
In the listing above, <id> is the Ethernet adapter number of the virtual network adapter being configured with a static MAC address and <mac> is the value of the static MAC address using the format OO:UU:II:XX:YY:ZZ, where OO:UU:II represents the static MAC address OUI for VMware ESX Server, 00:50:56, and XX:YY:ZZ represents the unique MAC address value from 00:00:00 to 3F:FF:FF.
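The edit above can also be scripted from the Service Console. The following sketch applies the same three steps to a sample configuration file; the file name (vm1.vmx), the lowercase ethernet0 keyword spelling, and the chosen static MAC value are illustrative assumptions, not values prescribed by the text.

```shell
# Sample configuration file standing in for a real virtual machine's
# configuration file (hypothetical name and contents).
cat > vm1.vmx <<'EOF'
ethernet0.addressType = "generated"
ethernet0.generatedAddress = "00:0c:29:1e:aa:94"
ethernet0.generatedAddressOffset = "0"
EOF

# Step 1: remove the generatedAddress and generatedAddressOffset pairs
# (one pattern covers both, since they share the same prefix).
grep -v '^ethernet0\.generatedAddress' vm1.vmx > vm1.tmp && mv vm1.tmp vm1.vmx

# Step 2: change the address type from "generated" to "static".
sed 's/^\(ethernet0\.addressType = \)"generated"/\1"static"/' vm1.vmx > vm1.tmp \
    && mv vm1.tmp vm1.vmx

# Step 3: add the static MAC address, which must fall within the
# VMware static range 00:50:56:00:00:00 to 00:50:56:3F:FF:FF.
echo 'ethernet0.address = "00:50:56:00:00:01"' >> vm1.vmx

cat vm1.vmx
```

On a live system the virtual machine should be powered off before its configuration file is edited.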
To configure a virtual network adapter to use a dynamically generated MAC address instead of a static MAC address, the virtual machine's configuration file must be edited as follows.

• Remove the following keyword/value pair:

Ethernet<id>.address

• Edit the following keyword/value pair:

From:
Ethernet<id>.addressType = "static"
To:
Ethernet<id>.addressType = "generated"
The next time the virtual machine is powered on, the necessary keyword/value pairs that support a dynamically generated MAC address will automatically be added to the virtual machine's configuration file, along with the new, dynamically generated MAC address value.
In ESX Server, MAC address values are colon-delimited, unlike Microsoft Virtual Server, where MAC address values are hyphen-delimited. As a best practice, the values of MAC addresses should be in all uppercase. Another best practice is to configure static MAC addresses for all virtual network adapters in all virtual machines, and to make the necessary configuration updates before the first time that the virtual machine is powered on. This reduces the chance of configuring the TCP/IP properties of a virtual machine with a dynamic MAC address, later changing it to a static MAC address, and confusing the network switch by ARPing the same IP address with two different MAC addresses.
It is extremely important that MAC addresses within a network are unique. It is a best practice to use a static, unique MAC address for every virtual network adapter across all physical servers, ESX Servers, and virtual machines in an entire data center. Even though MAC addresses realistically only need to be unique within an Ethernet collision domain, the isolation provided by some physical network switches' VLAN implementations can be suspect. Also, within ESX Server, although virtual switches do provide network isolation, strange effects have been experienced when two isolated virtual switches have connected virtual network adapters that share the same MAC address, even though the two conflicting MAC addresses could never "see" each other. Keeping all MAC addresses of all virtual network adapters 100 percent unique is a good method of eliminating potential and seemingly obscure network problems.
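One way to audit that uniqueness is to collect the MAC address entries from every virtual machine configuration file and flag duplicates. The sketch below is an illustration under assumptions: the vms directory, file names, and MAC values are hypothetical, and on a real ESX Server the configuration files would live wherever the virtual machines are registered.

```shell
# Hypothetical set of virtual machine configuration files; vm1 and vm2
# have mistakenly been given the same static MAC address.
mkdir -p vms
printf 'ethernet0.address = "00:50:56:00:00:01"\n' > vms/vm1.vmx
printf 'ethernet0.address = "00:50:56:00:00:01"\n' > vms/vm2.vmx
printf 'ethernet0.address = "00:50:56:00:00:02"\n' > vms/vm3.vmx

# Pull out static and generated MAC entries, keep only the quoted value,
# and print any value that appears more than once.
grep -hE '^[Ee]thernet[0-9]+\.(address|generatedAddress) = ' vms/*.vmx \
    | sed 's/.*"\(.*\)"/\1/' | sort | uniq -d
```

An empty result means every adapter's MAC address is unique across the files checked.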
To determine the MAC address of a virtual network adapter within a virtual machine in the Service Console, open the virtual machine's configuration file with an editor such as emacs, vi, or nano in order to manually search for the MAC address value, or use the cat command piped into a grep command such as:

# cat <config_file_path> | grep '[Ee]thernet[0-9].address'

or

# cat <config_file_path> | grep '[Ee]thernet[0-9].generatedAddress'

The <config_file_path> token is the path to the virtual machine's configuration file. The first command will output only lines for virtual network adapters containing a static MAC address and the second command will output only lines for virtual network adapters containing a dynamically generated MAC address for the specified virtual machine.
To determine the MAC address of the physical network adapter bound to the Service Console, use the ifconfig command and obtain the value from the ifconfig command's output for eth0, field HWaddr, or simply use the following command:

# ifconfig | grep eth0

The output from the command above should look similar to the following:

eth0 Link encap:Ethernet HWaddr 00:1C:03:B1:14:ED

The value following the string token HWaddr is the MAC address value of eth0.
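When only the MAC value itself is needed (in a script, for example), the HWaddr field can be cut out of that output. This is a sketch against a captured sample line rather than a live interface; on a Service Console the input would come from the ifconfig command itself.

```shell
# Sample line mirroring the ifconfig output shown above.
line='eth0      Link encap:Ethernet  HWaddr 00:1C:03:B1:14:ED'

# Keep only the value that follows HWaddr.
printf '%s\n' "$line" | sed 's/.*HWaddr *\([0-9A-Fa-f:]*\).*/\1/'
```

This prints just 00:1C:03:B1:14:ED, which can then be compared against the MAC addresses recorded in virtual machine configuration files.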
The MAC address values of all physical network adapters allocated to the VMKernel for use by virtual machines cannot be easily determined because those addresses are never used. The physical server does not have a valid TCP/IP stack bound to any of the vmnic network adapters; therefore, their burned-in MAC addresses are never broadcast to the network. Virtual machines connected to virtual switches that are bound to the vmnic network adapters have a TCP/IP stack bound to the virtual network adapter bridged to the physical network adapter. The vmnics act like a bridge device in this context, connecting the external network to the virtual networks within an ESX Server. The MAC addresses of the virtual network adapters are broadcast to the network, and for those virtual network adapters bound to external networks, their MAC addresses are broadcast to the physical networks to which they are bridged.
Promiscuous Mode
By default, virtual switches in ESX Server are not allowed to operate in promiscuous mode. This is done for security purposes, reducing the effectiveness of packet sniffer and network analyzer applications run from within a virtual machine. In some cases, there may be a legitimate need to enable promiscuous mode for a virtual switch. This should be done with care. Promiscuous mode can be enabled on virtual switches that are bound to a physical network adapter or a vmnet device. When promiscuous mode is enabled for a virtual switch bound to a physical network adapter, all virtual machines connected to the virtual switch have the potential of reading all packets sent across that network, from other virtual machines as well as from any physical machines and other network devices. When promiscuous mode is enabled on a virtual switch not bound to a physical network adapter (one that is instead bound to a vmnet device), all virtual machines connected to the virtual switch have the potential of reading all packets sent across that network, that is, only from other virtual machines connected to the same virtual switch. There is no method of permanently enabling promiscuous mode for a virtual switch. To enable promiscuous mode for a virtual switch, a value is poked into a special virtual file in the /proc file system. This means that the value takes effect in memory only and is not persisted. Upon the next reboot of the ESX Server, the value will revert to its default, which is to not enable promiscuous mode. Because the necessary virtual file in the /proc file system only exists when a virtual switch is connected to either a physical network adapter or a virtual network adapter, promiscuous mode can only be enabled on a virtual switch not bound to a physical network adapter while a powered-on virtual machine has a virtual network adapter connected to that virtual switch. If a virtual switch not bound to a physical network adapter has no live connections from virtual machines, the necessary /proc file does not exist and therefore the value cannot be modified. Virtual switches that do have a physical network adapter bound to them can have promiscuous mode enabled or disabled at any time. Therefore, one method of persisting promiscuous mode for a virtual switch is to add the command that enables promiscuous mode to the /etc/rc.local boot script in the Service Console.
To determine whether promiscuous mode is enabled or disabled, enter the following command in the Service Console using an account with root-level access:

# cat /proc/vmware/net/<device>/config | grep PromiscuousAllowed

The <device> token represents the name of the network device being queried, either a vmnic, a vmnet, or a bond. For example, to query vmnic0 for its current promiscuous mode state:

# cat /proc/vmware/net/vmnic0/config | grep PromiscuousAllowed
PromiscuousAllowed No

The output from this example shows that promiscuous mode is not enabled for vmnic0.
To change the promiscuous mode state for a network device, use the following command:

# echo "PromiscuousAllowed <value>" > /proc/vmware/net/<device>/config

The <value> token must be no to disable promiscuous mode or yes to enable promiscuous mode for the specified <device>. The <device> token represents the name of the network device being modified, either a vmnic, a vmnet, or a bond.
In the following example, vmnic0 is queried to determine if promiscuous mode is enabled. Next, a command is issued to enable promiscuous mode for vmnic0. Finally, the original query is executed again to determine whether the promiscuous mode state has been changed for vmnic0.

# cat /proc/vmware/net/vmnic0/config | grep PromiscuousAllowed
PromiscuousAllowed No
# echo "PromiscuousAllowed yes" > /proc/vmware/net/vmnic0/config
# cat /proc/vmware/net/vmnic0/config | grep PromiscuousAllowed
PromiscuousAllowed Yes
If a vmnic or bond should have promiscuous mode enabled at all times, the command to enable promiscuous mode for the particular device can be added at the end of the /etc/rc.local boot script. This file can easily be edited using emacs, vi, or nano.
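The addition can also be scripted. The sketch below appends the enabling command idempotently, so running it more than once does not stack duplicate lines; it operates on a stand-in file here because /etc/rc.local and /proc/vmware only exist on a real Service Console.

```shell
# Stand-in for /etc/rc.local (on a real ESX Server, point this at the
# actual boot script and run as root).
RC_LOCAL=./rc.local
touch "$RC_LOCAL"

# The command to be executed on every boot.
CMD='echo "PromiscuousAllowed yes" > /proc/vmware/net/vmnic0/config'

# Append only if an identical line is not already present.
grep -qxF "$CMD" "$RC_LOCAL" || printf '%s\n' "$CMD" >> "$RC_LOCAL"

# Running the same guard again is a no-op, so reboots and re-runs are safe.
grep -qxF "$CMD" "$RC_LOCAL" || printf '%s\n' "$CMD" >> "$RC_LOCAL"
```

The grep -qxF guard matches the whole line as a fixed string, which is what makes the append safe to repeat.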
VLAN Tagging (Port Groups)
Virtual switches in ESX Server support the use of VLANs (Virtual Local Area Networks). This feature is also referred to as Port Groups in ESX Server. In the networking community, VLANs are very common because they provide a method of abstracting and isolating network segments from each other. VLAN technology is usually implemented in managed network switches, so it is no surprise that in ESX Server, VLANs are implemented as a feature of virtual switches. The term Port Groups is used synonymously with VLAN Tagging. In this context, the term port refers to a virtual Ethernet port in a virtual switch and is not to be confused with the term port as it is used in TCP/IP. VLAN Tagging allows groups of ports in a switch to be bound together to form a virtual local area network, or VLAN. The groups of switch ports defined with the same VLAN ID act as if they were on a dedicated switch and do not see traffic from other VLANs. The VLAN Tagging feature in ESX Server allows connections to virtual switches to belong to a VLAN, which can participate in VLANs external to the ESX Server environment in the physical network. By default, the VLAN Tagging feature is enabled in the VMKernel, but it is not used until one or more Port Groups have been configured.
Resource Management
ESX Server provides very rich facilities for resource management of virtual machines, including the Service Console. There are several techniques used to control and shape the amount of resources allocated to virtual machines. These techniques include:

• Shares
• CPU affinity
• Min/Max percentages
• Min/Max amounts
• Network traffic shaping

The resource management techniques listed above may be used independently or in combination to achieve the desired amount of performance from virtual machines. Most virtual machines should not require resource tweaking. The resource management features of ESX Server are designed to be applied to specific virtual machines that have a high sensitivity to performance.
The primary method used to control how much of a particular resource is given to a virtual machine at a particular point in time is the use of shares. The shares system applies to CPU, memory, and disk resources. By default, all virtual machines are created with an equal number of shares for CPU, memory, and disk. The default number of shares allocated per resource per virtual machine is 1000. Using this default setting, all virtual machines receive the same amount of resources. The default value of 1000 is considered to be the normal amount of shares. The shares system of resource allocation is proportional; therefore, if the normal amount of shares is 1000, assigning 2000 shares of a particular resource to a virtual machine allocates double the amount of that resource for that virtual machine relative to the other virtual machines that have the normal amount of shares (1000). For example, consider an ESX Server with three virtual machines: Vm1, Vm2, and Vm3. Vm1 has 2000 CPU shares, Vm2 has 1000 CPU shares, and Vm3 has 500 CPU shares. Vm1 will receive twice as many CPU cycles as Vm2 and four times as many CPU cycles as Vm3. Vm3 will receive half as many CPU cycles as Vm2. The same amount of CPU cycles would be allocated to the virtual machines if the shares were set to the following values: Vm1 has 200 CPU shares, Vm2 has 100 CPU shares, and Vm3 has 50 CPU shares. This is due to the proportional, or relative, nature of the shares resource allocation system.
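The proportional arithmetic behind the example can be made explicit: each virtual machine's entitlement is its shares divided by the total outstanding shares. A small awk sketch, using the example's VM names and share counts:

```shell
# Compute each VM's CPU entitlement as shares / total shares.
awk 'BEGIN {
    shares["Vm1"] = 2000
    shares["Vm2"] = 1000
    shares["Vm3"] = 500
    for (vm in shares) total += shares[vm]          # total = 3500
    printf "Vm1 %.1f%%\n", 100 * shares["Vm1"] / total
    printf "Vm2 %.1f%%\n", 100 * shares["Vm2"] / total
    printf "Vm3 %.1f%%\n", 100 * shares["Vm3"] / total
}'
```

This prints 57.1%, 28.6%, and 14.3% for Vm1, Vm2, and Vm3 respectively. Scaling every share value by the same factor (e.g., 200/100/50 as in the text) leaves these percentages unchanged, which is exactly what makes the system purely relative.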
Most ESX Servers run on multiprocessor hardware systems such as dual-processor or quad-processor servers. It is possible to assign virtual machines to run on specific processors. This feature is called CPU affinity. By default, virtual machines' instructions are load balanced across all processors in the server, and ESX Server's scheduler determines which processors will execute particular instructions for virtual machines. Using the CPU affinity feature, it is possible to configure a virtual machine to run only on specific processors in the system. Using CPU affinity greatly reduces the scheduler's flexibility to provide optimum performance for all virtual machines.
Another technique used to control resource allocation to virtual machines is Min/Max percentages. ESX Server uses Min/Max percentages with CPU resources. In this scheme, virtual machines can be configured to receive a minimum and a maximum amount of CPU cycles, expressed as a percentage of the overall CPU cycles available. This is often used to guarantee that a virtual machine receives a minimum number of CPU cycles in order to avoid CPU starvation issues. Additionally, a virtual machine can be configured with a maximum of less than 100 percent to limit the amount of CPU cycles allocated to the virtual machine. This is often configured on very low priority virtual machines to avoid having those virtual machines consume too many CPU cycles. The Min/Max percentages can be used independently or together on specific virtual machines as needed. By default, virtual machines are created with a minimum CPU percentage of 0 percent and a maximum CPU percentage of 100 percent.
ESX Server uses the Min/Max amounts technique in addition to shares to control memory allocation for virtual machines. Virtual machines are always configured with an amount of memory. This value is the virtual machine's maximum amount of memory. Memory is allocated to virtual machines as they require it, based upon their shares and the state of the virtual machine. Memory can be reclaimed and reallocated dynamically by ESX Server when a virtual machine is idle or frees up a block of previously allocated memory. Some virtual machines may require a minimum amount of memory to always be present, and virtual machines can have a minimum amount of memory allocated to them. Upon powering on the virtual machine, ESX Server will allocate the minimum amount of memory to the virtual machine. By default, virtual machines have a minimum memory value of zero. The more memory that is allocated as minimum memory to virtual machines, the less effective ESX Server's memory management features become.
ESX Server uses only the proportional shares technique to manage disk resources. Disk resources are measured in terms of disk bandwidth for each physical disk or LUN, each represented by a vmhba. The disk bandwidth is calculated in consumption units, in which each SCSI command equals one consumption unit and the size of the data to be transferred is converted into a proportional number of additional consumption units. Additionally, each virtual machine may, by default, issue up to eight SCSI commands before being preempted by another virtual machine requesting disk access.
Network bandwidth resources are managed in a much different manner than other resources. Instead of using the proportional shares or Min/Max methods, network bandwidth is controlled by a pluggable network packet filter module. ESX Server ships with and supports only one filter module at this time, named nfshaper. This module implements a transmit filter that performs network traffic shaping on outgoing traffic. The nfshaper module can be attached and configured for each virtual machine. The traffic shaping feature implemented by the nfshaper module can be used to limit the average bandwidth, peak bandwidth, and maximum burst size, measured in bits per second (bps).
Performance Optimization
Here are some best practices that can be used to gain optimum performance for
virtual machines hosted on an ESX Server:
• Ensure that the proper guest operating system type is configured for each virtual machine.
• Ensure that VMware Tools is properly installed and up to date in each virtual machine.
• Before placing a virtual machine running Windows into production, defragment all virtual hard drives attached to the virtual machine.
• Configure virtual machines that run time-dependent services to have a minimum amount of CPU allocation to prevent CPU starvation.
• Disable or remove virtual hardware devices that are not needed or used by the guest operating system in each virtual machine.
• Stop and disable any unneeded services or daemons.
• Disable any software and operating system features that are not needed in the guest operating systems in each virtual machine. In Windows guest operating systems, disable screen savers, desktop backgrounds, and whiz-bang effects such as fading or sliding menus. In Linux guest operating systems, disable the X server if possible.

• Allocate an exact amount of memory to each virtual machine. One technique used to improve performance of virtual machines in ESX Server is to configure the minimum amount of memory for each virtual machine to equal its maximum amount. This will effectively counteract the benefits of memory overcommitment in ESX Server, and care must be taken not to allocate more memory than is physically available in the system (the total amount of system RAM minus 1GB is a good rule of thumb). This technique causes ESX Server to allocate an exact amount of physical RAM for each virtual machine upon powering on the virtual machine. The allocation process occurs slowly until the maximum amount of RAM has been allocated. This improves overall system performance because the VMKernel does not have to dynamically resize the amount of memory for virtual machines.
• Ensure that the Service Console is configured with enough memory. It is important to ensure that the Service Console has enough memory allocated relative to the number of virtual machines running concurrently and the number and types of system management and backup agents. VMware recommends 192MB for systems hosting up to 8 virtual machines, 272MB for up to 16 virtual machines, 384MB for up to 32 virtual machines, and 512MB for more than 32 virtual machines. This recommendation does not consider the amount and type of system management and backup agents that may be installed and running in the Service Console.
As a best practice, configure an amount of memory for the Service Console that is at least one step higher than the recommended amount for the number of virtual machines that will be hosted. If the system will have more than 50 virtual machines registered, it is recommended to configure the maximum amount of memory for the Service Console, 800MB.
• Close all unused VMRC application windows as soon as possible. Each VMRC application window consumes CPU resources in the Service Console while it is connected. It is highly recommended to open VMRC windows only as needed and to close them as soon as possible. Do not leave idle VMRC windows open for long periods of time.
• Do not run CPU-intensive applications within the Service Console. Although the Service Console is designed to run system management and backup agents, it is not designed for heavy processing loads. The Service Console is a virtual machine itself, which runs on CPU0. Although its CPU resources can be modified to enhance the Service Console's performance, it is recommended to keep programs with heavy processing loads out of the Service Console.
• Reduce the density of virtual machines running on each ESX Server. If there are many virtual machines running on one ESX Server that are consuming and severely competing for CPU, memory, or disk resources, consider reducing the density of virtual machines on the ESX Server by moving some of the virtual machines to another ESX Server. Although ESX Server is highly optimized and can run many virtual machines concurrently, it is still possible to stress one or more resources by running heavy-processing virtual machines.
Summary
VMware ESX Server contains an amazing number of features that can be used to create advanced virtualized systems in the data center. This chapter covered the most important advanced features, allowing administrators and system engineers to quickly become familiar with ESX Server so that effective solutions can be quickly developed. This chapter is by no means a definitive study of every advanced feature and capability of ESX Server, because that amount of knowledge could easily fill many volumes. More detailed technical information on VMware ESX Server is available from VMware's ESX Server resources page (esx_resources.html), which contains links to documentation, white papers, and technical briefs regarding the current release of ESX Server. VMware ESX Server is continuously being advanced, and many new advanced features are likely to be supported in the next major product release from VMware (ESX Server 3.0).
Part V
Implementing VMware GSX Server
373
Chapter 18
The VMware GSX Server
Platform
VMware GSX Server is a widely distributed server virtualization platform,
used mostly in smaller, workgroup-sized server implementations. The product
is available for both the Linux and Microsoft Windows platforms. This chapter
introduces the platform by detailing the history and background of the product
as well as discussing the hardware and software requirements for both editions
of the product.
Product Background
In 1999, VMware launched the release of their first product, now known as
VMware Workstation. This was considered by many to be the first commercially
available virtualization platform on the x86-based architecture. In the years
following its release, VMware has continued to mature their product line based
around their patented virtual machine technology. Near the end of 2000, VMware
significantly added to their product line by announcing VMware GSX
Server 1.0. Through the years, VMware has upgraded and updated the GSX
Server product to create a powerful, stable, and scalable server virtualization
platform. As an added bonus, GSX Server also provides a direct upgrade path to
VMware ESX Server, VMware's most powerful and scalable server virtualization
product, and is itself an upgrade path for VMware Workstation users. In April
of 2004, VMware announced their 64-bit roadmap for virtualization. With the
release of GSX Server 3.1, VMware completed the first milestone for their
support of 64-bit computing. It was the first x86 server virtualization product to
be released that added support for 64-bit host operating systems, which means
there are 64-bit drivers present that allow installation of the product on x86
64-bit platforms. Unfortunately, VMware officially only supports 32-bit guest
operating systems within a virtual machine running on a 64-bit host server. This
does, however, make it possible to upgrade to 64-bit host operating systems and
continue to run existing 32-bit operating systems in virtual machines. With the
introduction of support for 64-bit guest operating systems in the VMware
Workstation 5.5 release, it is only a matter of time before GSX Server adds
official support as well.
VMware GSX Server is enterprise-class virtual infrastructure software designed
to run on the x86-based server architecture. GSX Server transforms physical
servers into a pool of as many as 64 virtual machines. The product runs as an
application on a host operating system to provide a secure, uniform platform to
easily deploy, manage, and remotely control multiple servers running as virtual
machines. Guest operating systems and applications are isolated within multiple
virtual machines residing on a single host server. This means that completely
independent installations of Microsoft Windows, Linux, or Novell server operating
systems and their applications can run side by side on a single x86 server,
and at the same time, save on hardware and management costs. Since VMware
GSX Server gives virtual machines direct access to the host server's resources (such as
processor, memory, and disk), virtual machines deliver near-native performance.
System resources are then allocated to each virtual machine based on need to
deliver maximum capacity utilization.
GSX Server is installed on a physical server that is running either Microsoft
Windows or a Linux server operating system. Then, virtual machines are
configured within the software much like a physical server would be. An operating
system, known as a guest operating system, is then installed on the virtual
machine. These servers can be loaded with various guest operating systems
including standard Microsoft Windows and Linux operating systems. VMware
GSX Server handles the task of abstracting the real hardware and providing a
virtual system platform for each virtual machine. Therefore, each virtual machine
has its own virtual hardware, including a single-processor system with an
Intel 440BX motherboard complete with a Phoenix BIOS (version 4.0, release
6.0). Depending on the host server's capacity, up to 3.6GB of memory can be
allocated to a virtual machine. Each virtual machine can also receive an SVGA
graphics card, up to four IDE devices, up to 21 SCSI devices across three virtual
SCSI controllers, up to two 1.44MB floppy drives, up to four serial ports, two
parallel ports, two USB 1.1 ports, as many as four virtual Ethernet cards,
and a virtual keyboard and mouse. Almost any physical device supported by
the host operating system can be made available to a virtual machine as long
as GSX Server supports it; GSX Server has the broadest device support of
any virtual machine software. Another advantage of using GSX Server is
the ability to either bridge virtual LAN interfaces directly to the physical network
adapter or to create up to nine virtual Ethernet switches. Creating virtual
Ethernet switches allows for better network isolation and faster communication
between virtual machines.
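The per-VM virtual hardware limits just listed can be captured in a small validation sketch. The dictionary keys and the helper itself are hypothetical; the numeric limits come from the text (3.6GB of memory is approximated here as 3686MB).

```python
# Illustrative per-VM limits for GSX Server, taken from the text above.
GSX_VM_LIMITS = {
    "memory_mb": 3686,     # roughly 3.6 GB
    "ide_devices": 4,
    "scsi_devices": 21,    # across three virtual SCSI controllers
    "floppy_drives": 2,
    "serial_ports": 4,
    "parallel_ports": 2,
    "usb_ports": 2,
    "ethernet_cards": 4,
}

def validate_vm(spec):
    """Return (key, requested, limit) for every resource in spec that
    exceeds the per-VM limit; an empty list means the spec fits."""
    return [(k, v, GSX_VM_LIMITS[k])
            for k, v in spec.items()
            if k in GSX_VM_LIMITS and v > GSX_VM_LIMITS[k]]
```

For example, a VM asking for 4096MB of memory would be flagged against the memory limit, while one with four IDE devices passes cleanly.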
An included suite of management tools makes configuring and managing virtual
machines an easy task. For local management and configuration, VMware
GSX Server provides the VMware GSX Server Management Console that runs
on top of the host server. The local console allows creating, monitoring, stopping,
starting, rebooting, and suspending virtual machines. It also allows the virtual
machine to be viewed in full screen mode, which makes the virtual machine's
display faster because it has exclusive access to the screen. One of the strengths of GSX
Server is that it also allows for remote management. VMware provides either a
Web-based management interface (connecting at http://<hostname>:8222) or
the VMware Remote Console interface that can be installed on a user's desktop,
giving the user the ability to view the virtual machine's display at another
computer and control it across the network. Both the Web interface and the
remote console support secure connections via SSL. In addition to the remote
management interfaces, VMware also provides a VmCOM scripting API and a
VmPerl scripting API to automate GSX Server management tasks.
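As a trivial illustration of the remote-access endpoints mentioned above, a helper might build the management URL for a given host. Port 8222 for the plain Web interface comes from the text; the SSL port shown here (8333) is an assumption and may differ in a given installation.

```python
# Sketch: build the URL for the GSX Server Web-based management
# interface. 8222 (HTTP) is from the text; 8333 (HTTPS) is an assumed
# default and should be verified against the actual installation.

def management_url(hostname, secure=False):
    """Return the management-interface URL for a GSX Server host."""
    if secure:
        return f"https://{hostname}:8333"
    return f"http://{hostname}:8222"
```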
Over the years, virtualization has continued to mature and therefore gain
acceptance within the IT community. With the improvements that VMware has
made to the GSX Server product, VMware has been able to earn a place in the
software testing and development space within many organizations because of
the speed and ease with which an environment can be created, discarded, and
recreated. By utilizing these same techniques, VMware was also able to expand
into software training and software demonstrations. Additionally, the improve-
ments made to the product have dispelled any fears of using it to implement
departmental server consolidation for both new and legacy applications.
Product Versions
VMware GSX Server is currently offered in two versions, based on the host
operating system:

• VMware GSX Server for Windows—The host operating system this version
installs on must be one of the supported Microsoft Windows operating
systems discussed below.
• VMware GSX Server for Linux—The host operating system this version
installs on must be one of the supported flavors of the Linux operating
system discussed below.
Each product is independent of the other. If both host operating systems are
needed, then both versions of VMware GSX Server must be purchased. While
both products can be found on the same installation CD-ROM, each product
has its own serial number and one cannot be used to install the other.
In addition to versioning the product by host operating system, VMware also
breaks the product down by the number of processors found within the host
server. VMware sells these products, the Windows version and the Linux version,
with the following CPU restrictions:

• GSX Server 2-CPU license: for smaller servers running either a single-processor
or a dual-processor configuration
• GSX Server Unlimited CPU license: supports larger servers with up to 32
CPUs
If the host server supports Hyper-Threading or contains a dual-core
(or multi-core) processor, it will not affect the CPU licensing.
Therefore, while a host server containing two physical
processors that support Hyper-Threading may appear to
the host operating system as a quad-processor server, VMware is only
concerned with the number of physical processor sockets when
determining licensing packages.
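The socket-counting rule in the note above can be expressed as a small helper that picks a license tier from the number of physical processor sockets. The function name, return strings, and error message are illustrative; only the 2-CPU and Unlimited (up to 32 CPUs) tiers come from the text.

```python
# Sketch: choose a GSX Server license tier from the count of physical
# processor sockets. Hyper-Threading and multi-core packages do not
# change this count.

def gsx_license_for(physical_sockets):
    """Pick the GSX Server license tier for a host server."""
    if physical_sockets <= 2:
        return "2-CPU license"
    if physical_sockets <= 32:
        return "Unlimited CPU license"
    raise ValueError("GSX Server supports hosts with up to 32 processors")
```

A dual-socket Hyper-Threading host presents four logical CPUs to the operating system, but the helper is still called with `physical_sockets=2`.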
VMware GSX Server has the broadest hardware compatibility
and support for the largest array of guest operating systems
of any x86 server virtualization platform on the market (see
Figure 18.1).
Hardware Requirements
Processor
VMware GSX Server supports as many as 64 virtual machines running concurrently
on a single host server with as many as 32 processors. VMware recommends
that no more than four virtual machines be run concurrently
per physical processor. Ultimately, that number should be determined by the
resource needs of the guest operating systems and their applications. If the guest
operating system has a small resource footprint, such
as a small Linux machine, then more virtual machines can be executed against
the processor. If, on the other hand, the virtual machine contains a CPU-intensive
application, such as a Microsoft SQL Server database, then fewer virtual
machines can be executed against the processor. Chapter 7 gives additional details
on how to properly size the deployment on a host server. However, based on the
minimum recommendations of VMware, GSX Server does not require a lot of
processing power. The processor must be a minimum of an Intel Pentium II
processor running at a speed of 733MHz or faster. While this may be the minimum
recommendation, it is certainly nowhere near optimal, and as is true with most
applications, the faster the processor the better.
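As a rough sketch of the sizing rule above (at most four concurrent virtual machines per physical processor, and never more than 64 per host), a capacity estimate might look like the following. The helper and its defaults are an illustration of the rule of thumb, not a VMware formula.

```python
# Sketch: estimate how many virtual machines a GSX Server host can run
# concurrently, applying the four-VMs-per-processor rule of thumb and
# the 64-VM-per-host product limit from the text.

def max_concurrent_vms(physical_cpus, vms_per_cpu=4, hard_cap=64):
    """GSX Server supports at most hard_cap concurrent VMs per host."""
    return min(physical_cpus * vms_per_cpu, hard_cap)
```

A four-processor host would thus be planned for roughly 16 concurrent VMs, while a 32-processor host is still capped at 64; actual density should follow the guests' real resource needs.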
VMware GSX Supported Guest Operating Systems
Guest Operating System  CPU Architecture
Microsoft Windows Code Named Longhorn (Experimental Support Only)  32-bit
Windows Server 2003 Enterprise Edition (RTM and SP1)  32-bit
Windows Server 2003 Small Business Server (RTM and SP1)  32-bit
Windows Server 2003 Standard Edition (RTM and SP1)  32-bit
Windows Server 2003 Web Edition (RTM and SP1)  32-bit
Windows XP Professional (RTM, SP1, and SP2)  32-bit
Windows XP Home Edition (RTM, SP1, and SP2)  32-bit
Windows 2000 Professional (RTM, SP1, SP2, SP3, SP4, and SP4 Checked)  32-bit
Windows 2000 Server (RTM, SP1, SP2, SP3, SP4, and SP4 Checked)  32-bit
Windows 2000 Advanced Server (RTM, SP1, SP2, SP3, SP4, and SP4 Checked)  32-bit
Windows NT 4.0 Server with Service Pack 6a  32-bit
Windows NT Workstation 4.0 with Service Pack 6a  32-bit
Windows NT 4.0 Server Terminal Server Edition with Service Pack 6a  32-bit
Windows ME (Millennium Edition)  32-bit
Windows 98 SE  32-bit
Windows 98 (Including Latest Customer Service Packs)  32-bit
Windows 95 (Including Service Pack 1 and All OSR Releases)  32-bit
Windows for Workgroups 3.11  16-bit
Windows 3.1  16-bit
MS-DOS 6.22  16-bit
GSX Server is compatible with standard 32-bit IA-32 processors and also
processors that implement IA-32 64-bit extensions such as AMD’s Opteron and
Athlon 64 processors and the Intel Xeon EM64T processor when used with
supported 32-bit host operating systems. VMware GSX Server 3.2 does not cur-
rently support the Intel Itanium processor.
Memory
When considering memory requirements for the host server, it is important to
keep in mind that enough memory is needed to run the Microsoft Windows
or Linux host operating system, along with enough memory for each virtual
machine's guest operating system and the applications running on both the host
server and the virtual machines. This concept is important to understand, because
the lack of adequate memory will limit the number of virtual machines
that can run concurrently, or for that matter, whether they can be run at all. Also keep in mind,
Figure 18.1a GSX Server Supported Guest Operating Systems.
a guest operating system on a virtual machine will require the same amount of
memory that it does on a physical server. Therefore, if a Windows Server 2003
operating system normally takes a minimum of 512MB of memory to run
effectively, then the virtual machine will require the same amount of memory.
VMware's recommended minimum amount of memory is 512MB. However,
in reality, this is probably just enough memory for a Windows host server
with the GSX Server software installed and almost no memory left over for a
virtual machine to use. An insufficient amount of memory available to a virtual
machine will starve its performance, just as it would on a physical server. The more
memory installed in the host server, the better. Keep in mind, however, the
maximum supported amount of memory for a host server is 64GB for Windows
and Linux hosts that support large memory or PAE mode, 4GB for non-PAE
mode Windows hosts, and 2GB for Linux hosts with kernels in the 2.2.x series.
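The memory budgeting described above can be sketched as a simple check: the host's physical memory must cover the host operating system (and GSX Server itself) plus every concurrently running guest. The per-VM overhead figure below is a hypothetical placeholder for virtualization overhead, not a VMware number.

```python
# Sketch: check whether a set of guests fits in the host's memory,
# following the sizing reasoning in the text. per_vm_overhead_mb is an
# illustrative guess at virtualization overhead per running VM.

def fits_in_host(host_memory_mb, host_os_mb, guest_memory_mb,
                 per_vm_overhead_mb=32):
    """guest_memory_mb: list of configured memory sizes, one per VM."""
    needed = host_os_mb + sum(m + per_vm_overhead_mb
                              for m in guest_memory_mb)
    return needed <= host_memory_mb
```

For example, a 4GB host running a 512MB host OS footprint comfortably holds a 512MB and a 1GB guest, while a 1GB host cannot hold even one 1GB guest alongside the host OS.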
Disk
The disk space needed for a normal installation of the GSX Server product
varies between the Windows version and the Linux version. The Windows version
VMware GSX Supported Guest Operating Systems
Guest Operating System  CPU Architecture
Mandrake Linux 8.0, 8.1, 8.2, 9.0, 9.1, 9.2, 10.0, 10.1  32-bit
Red Hat Enterprise Linux (AS, ES, WS) 2.1, 2.1 Update 6, 3.0, 3.0 Update 4, 4.0  32-bit
Red Hat Enterprise Linux (AS, ES, WS) 3.0  32-bit
Red Hat Enterprise Linux (AS, ES, WS) 3.0 Update 2  32-bit
Red Hat Linux 6.2, 7.0, 7.1, 7.2, 7.3, 8.0, 9.0  32-bit
SuSE Linux Enterprise Server 7 (Including Patch 2)  32-bit
SuSE Linux Enterprise Server 8 (Including Patch 3)  32-bit
SUSE LINUX Enterprise Server 9 Service Pack 1  32-bit
SuSE Linux 7.3, 8.0, 8.1, 8.2, 9.0, 9.1, 9.2, 9.3 (Experimental Support Only)  32-bit
Turbolinux Server 7.0, 8.0  32-bit
Turbolinux Workstation 8.0  32-bit
Novell NetWare 4.2 Support Pack 9  32-bit
Novell NetWare 5.1 Support Pack 6  32-bit
Novell NetWare 6.0 Support Pack 3  32-bit
Novell NetWare 6.5 Support Pack 1  32-bit
Sun Solaris x86 Platform Edition 9 (Experimental Support Only)  32-bit
Sun Solaris x86 Platform Edition 10 Beta (Experimental Support Only)  32-bit
FreeBSD 4.0-4.6.2, 4.8, 4.9, 5.0, 5.2  32-bit
Figure 18.1b GSX Server Supported Guest Operating Systems.
requires 130MB of free disk space to install the server, the management interface,
the virtual machine console, and both scripting packages, VmPerl
and VmCOM. The Linux version only requires 20MB of free disk space, but
does not install the VmCOM scripting package because it only works with Windows.
VMware also recommends that the Linux version have free disk space
in the /tmp folder equivalent to 1.5 times the amount of memory found on the
host server. Finally, VMware recommends at least 1GB of disk space allocated
for each virtual machine created. For a Linux virtual machine, this may be
appropriate; however, a Microsoft Windows server installation will greatly surpass
this amount. Chapter 7 provides further details on the proper way to size and
evaluate hard disk subsystems.
One thing to keep in mind: the suspend VM function will
require additional free disk space. If this feature is used, it will
take up approximately the same amount of free disk space as
the amount of memory configured on the virtual machine. Therefore,
a virtual machine with 1GB of memory will consume approximately
1GB of disk space when the suspend feature is activated.
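The disk-sizing notes above (space for each VM's virtual disk, plus suspend files roughly the size of each VM's memory) can be folded into a simple estimate. The helper and its input format are illustrative.

```python
# Sketch: estimate host disk space needed for a set of VMs, optionally
# reserving room to suspend all of them at once (a suspend file is
# roughly the size of the VM's configured memory, per the text).

def disk_needed_gb(vms, suspend_all=False):
    """vms: list of (disk_gb, memory_gb) tuples, one per virtual machine."""
    total = sum(disk for disk, _ in vms)
    if suspend_all:
        # Each suspended VM consumes about its memory size on disk.
        total += sum(mem for _, mem in vms)
    return total
```

Two VMs with 10GB and 20GB virtual disks need 30GB of space, or 33GB if both (with 1GB and 2GB of memory) might be suspended simultaneously.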
Network
GSX Server will support any Ethernet controller card that the host operating
system supports.
While host operating systems do not require permanent network connectivity,
from a practicality standpoint, one or more network cards should be present
to have true server-class functionality. Specific details and options for
recommended configurations are provided in chapter 7, and GSX Server networking
interfaces are discussed in detail in chapter 22.
Display
Obviously, a graphics adapter for the host server will be needed, and VMware
recommends a 16-bit color or better display adapter. In Windows, the color
palette should be set to 65536 colors or true color to allow for the best
performance. However, it is possible to get by with anything greater than a 256-color
(8-bit) display adapter. Unfortunately, while this may work, it probably will
not function up to expectations. One final additional requirement for Linux
host servers is an X server that meets the X11R6 specification, such as XFree86,
and a video adapter supported by the host server, to run virtual machines in full
screen mode. If an X server is not installed, then one must be installed. VMware
recommends XFree86 version 3.3.4 or higher, with XFree86 version 4.0 being
the preferred choice.
Software Requirements
Host Operating System
GSX Server provides a wide range of choices for host operating system requirements
(see Figure 18.2). When installing the VMware GSX Server for Windows
product, there are two sets of choices, 64-bit hosts and 32-bit hosts. To take
advantage of a 64-bit host server, VMware offers support for the Microsoft Windows
Server 2003 x64 Edition operating system. Additionally, 32-bit servers
may choose between Microsoft Windows Server 2003 (Web, Standard, and
Enterprise, including SP1) and Microsoft Windows 2000 Server or Advanced
Server with either Service Pack 3 or 4 installed. When installing the VMware
GSX Server for Linux product, there are many more choices available. Most of
the major Linux distributions that have been released recently are supported.
Specifically, for 64-bit host servers, SUSE LINUX Enterprise Server 8 or one of
the three Red Hat Enterprise Linux 3.0 versions: AS, ES, or WS. GSX Server 3.2
adds experimental support for Red Hat Enterprise Linux 4, Red Hat Enterprise
Linux 3 Update 4, SUSE LINUX Enterprise Server 9 Service Pack 1, and SUSE
LINUX 9.2 and 9.3. For 32-bit servers, it is important to check the most recent
list on the VMware support site.
Linux versions not listed may work. However, VMware will
not support them, and the trouble of trying to tweak and troubleshoot
them may not be worth the effort. It is therefore best to
stick with a supported operating system.
VMware Management Interface
The Windows version of the product has two specific requirements for the
management interface to function correctly. First, Microsoft Internet Information
Server (IIS) version 5.0 or 6.0 must be installed. Second, one of the following
browsers must be used to view and interact with the management interface:
Microsoft Internet Explorer 5.5 or 6.0, Netscape Navigator 7.0, Firefox 1.x,
or Mozilla 1.x. Similar requirements must be met for the Linux version. The
inetd process must be configured and active to allow connections, and either
Netscape Navigator 7.0, Firefox 1.x, or Mozilla 1.x must be used as the browser.
Other Web browser software may work, and VMware is constantly
updating the product's requirements.
GSX Server Scripting
One of the key features of GSX Server is the ability to automate and script
custom management and control functionality. In order to use the VmPerl API,
both the Windows and Linux versions require the installation of Perl 5.005x or
higher.
VMware GSX Supported Host Operating Systems
Host Operating System  CPU Architecture
Microsoft Windows Server 2003 Enterprise Edition  32-bit
Microsoft Windows Server 2003 Standard Edition  32-bit
Microsoft Windows Server 2003 Web Edition  32-bit
Microsoft Windows Server 2003 Service Pack 1  32-bit
Microsoft Windows 2000 Advanced Server, Service Pack 3 and Service Pack 4  32-bit
Microsoft Windows 2000 Server, Service Pack 3 and Service Pack 4  32-bit
Microsoft Windows Server 2003 x64 Editions  64-bit
Mandrake Linux 9.2, stock 2.4.22-10mdk kernel  32-bit
Mandrake Linux 9.0, stock 2.4.19-16mdk, update 2.4.19-32mdk kernels  32-bit
Mandrake Linux 8.2, stock 2.4.18-6mdk kernel  32-bit
Red Hat Enterprise Linux 3.0 AS, stock 2.4.21-4, update 2.4.21-9, 2.4.21-9.0.1, 2.4.21-15 kernels  32-bit
Red Hat Enterprise Linux 3.0 ES, stock 2.4.21-4, update 2.4.21-9, 2.4.21-9.0.1, 2.4.21-15 kernels  32-bit
Red Hat Enterprise Linux 3.0 WS, stock 2.4.21-4, update 2.4.21-9, 2.4.21-9.0.1, 2.4.21-15 kernels  32-bit
Red Hat Enterprise Linux AS 2.1, stock 2.4.9-3, 2.4.9-e.24summit, update 2.4.9-e.38, 2.4.9-e.40 kernels  32-bit
Red Hat Enterprise Linux ES 2.1, update 2.4.9-16, 2.4.9-e.24summit, 2.4.9-e.38, 2.4.9-e.40 kernels  32-bit
Red Hat Enterprise Linux WS 2.1, update 2.4.9-16, 2.4.9-e.38, 2.4.9-e.40 kernels  32-bit
Red Hat Linux 9.0, update 2.4.20-8, 2.4.20.9, 2.4.20-13, 2.4.20-18, 2.4.20-28, 2.4.20-30.9, 2.4.20-31.9 kernels  32-bit
Red Hat Linux 8.0, stock 2.4.18-14, update 2.4.18-17, 2.4.18-18, 2.4.18-19, 2.4.18-27, 2.4.20-13, 2.4.20-18 kernels  32-bit
Red Hat Linux 7.3, stock 2.4.18-3, update 2.4.9-6, 2.4.9-34, 2.4.18-5, 2.4.18-10, 2.4.18-17, 2.4.18-18, 2.4.18-19, 2.4.18-27, 2.4.20-13, 2.4.20-18 kernels  32-bit
Red Hat Linux 7.2, stock 2.4.7-10, update 2.4.9-6, 2.4.9-7, 2.4.9-13, 2.4.9-21, 2.4.9-31, 2.4.9-34, 2.4.18-17, 2.4.18-18, 2.4.18-19, 2.4.18-27, 2.4.20-13, 2.4.20-18 kernels  32-bit
Red Hat Linux 7.1, stock 2.4.2-2, update 2.4.3-12, 2.4.9-6, 2.4.9-34, 2.4.18-17, 2.4.18-18, 2.4.18-19, 2.4.18-27, 2.4.20-13, 2.4.20-18 kernels  32-bit
Figure 18.2a GSX Server Supported Host Operating Systems.
VMware GSX Supported Host Operating Systems
Host Operating System  CPU Architecture
SuSE Linux Enterprise Server 8, stock 2.4.19, update 2.4.21-138, 2.4.21-143, 2.4.21-215 and patch 3 kernels  32-bit
SuSE Linux Enterprise Server 7, stock 2.4.7 and patch 2, update 2.4.18 kernels  32-bit
SUSE LINUX 9.1, stock 2.6.4-52 kernel  32-bit
SUSE LINUX 9.0, stock 2.4.21-99, update 2.4.21-166 kernels  32-bit
SuSE Linux 8.2, stock 2.4.20 kernel  32-bit
SuSE Linux 8.1, update 2.4.19, update 2.4.19-175 kernels  32-bit
SuSE Linux 8.0, stock 2.4.18 kernel  32-bit
SuSE Linux 7.3, stock 2.4.10, update 2.4.18 kernels  32-bit
Turbolinux Server 8.0, stock 2.4.18-1, update 2.4.18-17 kernels  32-bit
Turbolinux Workstation 8.0, stock 2.4.18-1, update 2.4.18-17 kernels  32-bit
Turbolinux Server 7.0, stock 2.4.5-3, update 2.4.18-17 kernels  32-bit
Mandrake Linux 10.0 and 10.1  32-bit
Red Hat Enterprise Linux 4  32-bit
Red Hat Enterprise Linux 3 Update 4  32-bit
Red Hat Enterprise Linux 2.1 Update 6  32-bit
SUSE LINUX Enterprise Server 9 Service Pack 1  32-bit
SUSE LINUX 9.3 (Experimental Support Only)  32-bit
SUSE LINUX 9.2  32-bit
Red Hat Enterprise Linux 3.0 AS — update 2.4.21-15 kernel  64-bit
Red Hat Enterprise Linux 3.0 ES — update 2.4.21-15 kernel  64-bit
Red Hat Enterprise Linux 3.0 WS — update 2.4.21-15 kernel  64-bit
SuSE Linux Enterprise Server 8 — stock 2.4.19, update 2.4.21-138 and patch 3 kernels  64-bit
Figure 18.2b GSX Server Supported Host Operating Systems.
Additional Software Components
Other Linux host operating system requirements include:

• Linux kernel 2.2.14-5.0 is specifically not supported.
• A standard Linux server installation is required with glibc version 2.1 or
higher and libXpm.so.
• The inetd process must be configured and active for VMware Virtual Machine
Console and VMware Management Interface connections.