Data Center
2
When devices speak different languages
3
Diagram: Building Control Systems, Security Devices, HVAC, Power and Lighting, Manufacturing
4
Ethernet Packet
• Preamble – 8 bytes
• Destination – 6 bytes
• Source – 6 bytes
• Type – 2 bytes
• Data – 46-1500 bytes
• CRC/FCS – 4 bytes
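To make the field boundaries concrete, here is a minimal Python sketch (not from the original slides) that splits a captured frame into the fields above. It assumes the capture omits the preamble and still carries the 4-byte FCS; the function name and return format are illustrative.

```python
import struct

def parse_ethernet_frame(frame: bytes) -> dict:
    """Split a raw Ethernet frame (preamble already stripped) into its fields.

    Layout per the table above: 6-byte destination, 6-byte source,
    2-byte Type/EtherType, 46-1500 bytes of data, 4-byte CRC/FCS.
    """
    if len(frame) < 64:
        raise ValueError("a minimum-size Ethernet frame is 64 bytes on the wire")

    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    data = frame[14:-4]        # 46-1500 bytes of payload
    fcs = frame[-4:]           # 4-byte frame check sequence

    def mac(raw: bytes) -> str:
        return ":".join(f"{octet:02x}" for octet in raw)

    return {
        "destination": mac(dst),
        "source": mac(src),
        "type": hex(ethertype),   # e.g. 0x0800 for IPv4
        "data_length": len(data),
        "fcs": fcs.hex(),
    }
```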
Common Ground – Packets are just like letters
5
Diagram: separate Voice circuits and Data networks converging onto a common IP Platform
Carrier meets Enterprise
6
When devices speak a common language
7
Diagram: Building Control Systems, Security Devices, HVAC, Power and Lighting, Manufacturing
8
Today’s Emerging Network – More
Networked Devices than Users
• Early History of Networking – Multiple Users shared Computing Resources: User-to-Device Ratio of 5:1 or more
• Recent History – All Users had a computer: User-to-Device Ratio of 1:1
• Emerging Network – All Users have a computer and an IP Telephone; add to this IP Security Cameras, WiFi Access Points and Intelligent Building Networks, and the Device-to-User Ratio begins to exceed 3:1
– In excess of three networked devices for each user and growing!
• What are the implications of this ratio on our Networks? (a rough sizing sketch follows below)
10:1
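One hypothetical back-of-the-envelope sketch of those implications (the user count, 48-port switch size and 20% headroom are illustrative assumptions, not figures from the deck): as the device-to-user ratio climbs, access-layer port and switch counts climb with it.

```python
import math

def access_ports_needed(users: int, devices_per_user: float,
                        ports_per_switch: int = 48,
                        growth_headroom: float = 0.2) -> dict:
    """Rough estimate of edge ports and access switches for a user population.

    Every parameter here is an illustrative assumption for this sketch.
    """
    devices = math.ceil(users * devices_per_user)
    ports = math.ceil(devices * (1 + growth_headroom))   # leave spare capacity
    switches = math.ceil(ports / ports_per_switch)
    return {"devices": devices, "ports": ports, "switches": switches}

# 1000 users at the historical 1:1 ratio vs. an emerging 3:1 devices per user
print(access_ports_needed(1000, 1.0))  # {'devices': 1000, 'ports': 1200, 'switches': 25}
print(access_ports_needed(1000, 3.0))  # {'devices': 3000, 'ports': 3600, 'switches': 75}
```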
9
Chart: traffic layers – Baseline IP, Bursty, VoIP, WAP, Security, Lighting, HVAC
Total IP Convergence Drives Data Center Growth
More Networked Devices drive additional Equipment in the Data Center
Data Centers
11
Telephone Exchange
• Originally circuit switched
• High Availability
• High Density
• Cooling wasn’t of great
concern
• Mechanical, Environmental
and Electrical performance
of the highest importance
12
Enterprise Computer Room
• Traditionally housed stand-alone servers in the Enterprise
• Large amount of space
needed
• Not designed for Uptime
• Density was low and heat
was not a great concern
• Mainly used for Email and
data storage
13
Carrier Meets Enterprise
• Telephone Exchanges/Central
offices are evolving into Data
Centers
14
Data Centers emerge
• Server numbers increased
• More Space required
• Availability and Power grow in importance
• Density increased and heat started to become a concern
• Centralized applications and
storage start to become a
critical part of doing
business
15
Data Centers of Today
• Blade Servers and Switches
• Super High Density
• Uptime is a must
• Speed, cooling and power
become the major concerns
• Centralized applications,
web hosting, SAN,
Redundancy and Reliability
all become table stakes for
doing business
16
Centralized to Distributed and Back?
• As the number of devices in the network increases, there is a growing need to control what is done locally
• Regulations are driving high levels of data storage
• The largest risk to the network comes from the devices at the periphery
17
Diagram: Desktop Blade, SAN
Thin Clients (dumb terminals)
• Blade Servers and Switches are commonplace
• The same can be done with
client computers
• Remote access to full
processing power
• Low bandwidth required at
the client end
• Centralized, controlled data
storage
18
Density, Security and Cost
• Better Client Uptime
• Super High Density
• Common Platform
• Fast replacement time
• Easy to upgrade
• Client can be accessed from
multiple locations
19
Cooling Requirements
• Higher density means more
heat
• Equipment selection and airflow become critical as space comes at a premium (a rough airflow estimate follows below)
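As a rough illustration, the sketch below applies the common sea-level rule of thumb CFM ≈ 3.16 × watts ÷ ΔT(°F) to a few hypothetical rack heat loads; the loads and the 20 °F air temperature rise are assumptions for this example, not figures from the deck.

```python
def required_airflow_cfm(heat_load_watts: float, delta_t_f: float = 20.0) -> float:
    """Cubic feet per minute of air needed to carry away a heat load with a
    delta_t_f temperature rise across the equipment (sea-level rule of thumb:
    CFM = 3.16 * watts / delta-T in degrees F)."""
    return 3.16 * heat_load_watts / delta_t_f

# Hypothetical rack densities, from a legacy rack up to a loaded blade rack
for kw in (2, 5, 10, 20):
    cfm = required_airflow_cfm(kw * 1000)
    print(f"{kw:>2} kW rack -> ~{cfm:,.0f} CFM of airflow")
# 2 kW needs ~316 CFM while 20 kW needs ~3,160 CFM through the same footprint,
# which is why airflow paths and cable management become critical.
```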
20
Design Stage
• Thermal site mapping and
airflow simulators are being
used to design high density
space
• Consultants in the Data
Center arena are becoming
more specialized as a result
• Again, designs must be
followed to the letter
21
New Products to suit the needs
• Equipment racks with dedicated cooling solutions are starting to emerge
• Density is being addressed
• Airflow access becomes
critical
22
Proper Cooling Techniques
Proper Data Center Design: deployment of Hot Aisle/Cold Aisle Cooling
– Good Airflow/Proper Cooling = Optimal Performance of Servers and Switches
23
Good Product, Bad Practices
But what happens if poor cable management blocks airflow?
25
The Data Center Design Dilemma:
Density vs. Manageability and Reliability
Fundamentally, you cannot use the same structured cabling products originally designed for low-density LANs and expect them to perform to the level required in a Carrier Data Center.