Electronics and Telecommunications, Lecture 7b: ATM Networks


Chapter 7: Packet-Switching Networks

Network Services and Internal Network Operation
Packet Network Topology
Datagrams and Virtual Circuits
Routing in Packet Networks
Shortest Path Routing
ATM Networks
Traffic Management


Chapter 7: Packet-Switching Networks
ATM Networks


Asynchronous Transfer Mode (ATM)

• Packet multiplexing and switching
  • Fixed-length packets: “cells”
  • Connection-oriented
  • Rich Quality of Service support
• Conceived as end-to-end
  • Supporting a wide range of services
    • Real-time voice and video
    • Circuit emulation for digital transport
    • Data traffic with bandwidth guarantees
• Detailed discussion in Chapter 9


ATM Networking

[Figure: voice, video, and packet traffic enter the ATM Adaptation Layer at each end system; the adaptation layers communicate across the ATM network.]

• End-to-end information transport using cells
• 53-byte cell provides low delay and fine multiplexing granularity
• Support for many services through the ATM Adaptation Layer
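The 53-byte cell (5-byte header plus 48-byte payload) trades some overhead for fine granularity. The sketch below is a rough illustration, not taken from the slides: it segments a message into 48-byte payloads and reports cell-level efficiency, ignoring any AAL trailer or padding, which would add further overhead.

```python
import math

CELL_SIZE = 53               # bytes per ATM cell
HEADER = 5                   # bytes of cell header
PAYLOAD = CELL_SIZE - HEADER # 48 bytes of payload per cell

def segment(message: bytes) -> list[bytes]:
    """Split a message into 48-byte cell payloads (last one zero-padded)."""
    cells = []
    for i in range(0, len(message), PAYLOAD):
        chunk = message[i:i + PAYLOAD]
        cells.append(chunk.ljust(PAYLOAD, b"\x00"))
    return cells

def efficiency(msg_len: int) -> float:
    """Fraction of transmitted bytes that are user data (header overhead only)."""
    n_cells = math.ceil(msg_len / PAYLOAD)
    return msg_len / (n_cells * CELL_SIZE)

print(len(segment(b"x" * 1000)))   # 21 cells
print(round(efficiency(1000), 3))  # ≈ 0.898
```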


TDM vs. Packet Multiplexing

                     TDM                        Packet
Variable bit rate    Multirate only             Easily handled
Delay                Low, fixed                 Variable
Burst traffic        Inefficient                Efficient
Processing           Minimal, very high speed   Header & packet processing required*

* In mid-1980s, packet processing mainly in software and hence slow; by late 1990s, very high speed packet processing possible.


ATM: Attributes of TDM & Packet Switching

[Figure: voice, data packets, and images from inputs 1–4 enter a multiplexer. In TDM, an idle input leaves wasted bandwidth in its assigned slot; in ATM, each slot carries a cell with a packet header, so any input can use any slot.]

• Packet structure gives flexibility & efficiency
• Synchronous slot transmission gives high speed & density


ATM Switching

• Switch carries out table translation and routing

[Figure: cells arrive on input ports 1..N carrying connection identifiers such as voice 32, video 25, video 61, and data 32; the switch looks each cell up in its table and forwards it to the proper output port with a new identifier such as 75, 67, 39, or 32.]

• ATM switches can be implemented using shared memory, shared backplanes, or self-routing multi-stage fabrics
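A minimal sketch of the table translation step, assuming a made-up table keyed on (input port, incoming VCI). The entries loosely echo the figure but are illustrative, not an exact reproduction of a real switch's state.

```python
# Minimal sketch of per-cell table translation in an ATM switch.
TRANSLATION = {
    # (input port, incoming VCI) -> (output port, outgoing VCI)
    (5, 25): (1, 75),   # a video connection
    (6, 32): (3, 39),   # a data connection
    (1, 61): (2, 67),   # another video connection
}

def switch_cell(in_port: int, vci: int, payload: bytes):
    """Look up the cell, rewrite its VCI, and return (out_port, new_vci, payload)."""
    out_port, new_vci = TRANSLATION[(in_port, vci)]
    return out_port, new_vci, payload

print(switch_cell(5, 25, b"..."))   # (1, 75, b'...')
```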


ATM Virtual Connections

• Virtual connections set up across the network
• Connections identified by locally-defined tags
• ATM header contains virtual connection information:
  • 8-bit Virtual Path Identifier (VPI)
  • 16-bit Virtual Channel Identifier (VCI)
• Powerful traffic grooming capabilities
  • Multiple VCs can be bundled within a VP
  • Similar to tributaries with SONET, except variable bit rates possible

[Figure: a physical link carries several virtual paths, each of which bundles several virtual channels.]
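For concreteness, here is a small sketch that pulls the 8-bit VPI and 16-bit VCI out of the 5-byte cell header, assuming the UNI header layout (GFC, VPI, VCI, PT, CLP, HEC); the example header bytes are made up.

```python
def parse_uni_header(header: bytes) -> dict:
    """Extract VPI/VCI (and PT, CLP) from a 5-byte ATM UNI cell header."""
    b0, b1, b2, b3, hec = header
    return {
        "gfc": b0 >> 4,
        "vpi": ((b0 & 0x0F) << 4) | (b1 >> 4),               # 8 bits
        "vci": ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4),  # 16 bits
        "pt":  (b3 >> 1) & 0x07,
        "clp": b3 & 0x01,
        "hec": hec,
    }

# Made-up header bytes encoding VPI = 3, VCI = 75
print(parse_uni_header(bytes([0x00, 0x30, 0x04, 0xB0, 0x00])))
```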


VPI/VCI switching & multiplexing

[Figure: connections a, b, c enter ATM switch 1 and are bundled into a virtual path (VPI 3); an ATM crossconnect forwards the path, relabeled VPI 5, to ATM switch 2, where it is unbundled. Other connections (d, e) follow separate virtual paths (VPI 2, VPI 1) through the crossconnect toward ATM switches 3 and 4. (Sw = switch)]

• Connections a, b, c bundled into a VP at switch 1
• Crossconnect switches the VP without looking at VCIs
• VP unbundled at switch 2; VC switching thereafter
• VPI/VCI structure allows creation of virtual networks
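The distinction the figure makes, a crossconnect forwarding on VPI alone while a switch may rewrite both VPI and VCI, can be sketched as two lookup functions; the tables below are invented for illustration.

```python
# Sketch: VP crossconnect vs. VC switch forwarding. Tables are illustrative.
VP_TABLE = {(1, 3): (2, 5)}            # (in port, VPI) -> (out port, VPI)
VC_TABLE = {(1, 3, 10): (4, 7, 92)}    # (in port, VPI, VCI) -> (out port, VPI, VCI)

def vp_crossconnect(port, vpi, vci):
    """Forward on VPI only; the VCI passes through untouched."""
    out_port, out_vpi = VP_TABLE[(port, vpi)]
    return out_port, out_vpi, vci

def vc_switch(port, vpi, vci):
    """Forward on the full VPI/VCI pair, rewriting both."""
    return VC_TABLE[(port, vpi, vci)]

print(vp_crossconnect(1, 3, 10))   # (2, 5, 10) - VCI unchanged
print(vc_switch(1, 3, 10))         # (4, 7, 92)
```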


MPLS & ATM

• ATM initially touted as more scalable than packet switching
• ATM envisioned speeds of 150–600 Mbps
• Advances in optical transmission proved ATM to be the less scalable: at 10 Gbps
  • Segmentation & reassembly of messages & streams into 48-byte cell payloads difficult & inefficient
  • Header must be processed every 53 bytes vs. 500 bytes on average for packets
  • Delay due to 1250-byte packet at 10 Gbps = 1 μsec; delay due to 53-byte cell at 150 Mbps ≈ 3 μsec
• MPLS (Chapter 10) uses tags to transfer packets across virtual circuits in the Internet
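The delay comparison is just serialization time, packet size in bits divided by line rate; a quick check of the two numbers quoted above:

```python
def serialization_delay_us(size_bytes: int, rate_bps: float) -> float:
    """Time to clock a packet or cell onto the line, in microseconds."""
    return size_bytes * 8 / rate_bps * 1e6

print(serialization_delay_us(1250, 10e9))   # 1.0  (1250-byte packet at 10 Gbps)
print(serialization_delay_us(53, 150e6))    # ≈ 2.83 (53-byte cell at 150 Mbps)
```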


Chapter 7: Packet-Switching Networks
Traffic Management

Packet Level
Flow Level
Flow-Aggregate Level


Traffic Management

Vehicular traffic management:
• Traffic lights & signals control the flow of traffic in a city street system
• Objective is to maximize flow with tolerable delays
• Priority services
  • Police sirens
  • Cavalcade for dignitaries
  • Bus & high-usage lanes
  • Trucks allowed only at night

Packet traffic management:
• Multiplexing & access mechanisms control the flow of packet traffic
• Objective is to make efficient use of network resources & deliver QoS
• Priority
  • Fault-recovery packets
  • Real-time traffic
  • Enterprise (high-revenue) traffic
  • High-bandwidth traffic


Time Scales & Granularities

• Packet Level
  • Queueing & scheduling at multiplexing points
  • Determines relative performance offered to packets over a short time scale (microseconds)
• Flow Level
  • Management of traffic flows & resource allocation to ensure delivery of QoS (milliseconds to seconds)
  • Matching traffic flows to the resources available; congestion control
• Flow-Aggregate Level
  • Routing of aggregate traffic flows across the network for efficient utilization of resources and meeting of service levels
  • “Traffic engineering”, at the scale of minutes to days


End-to-End QoS

[Figure: a packet traverses packet buffers at multiplexing points 1, 2, ..., N–1, N along its path.]

• A packet traversing the network encounters delay and possible loss at various multiplexing points
• End-to-end performance is the accumulation of per-hop performances
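One way to read "accumulation": end-to-end delay is the sum of per-hop delays, and end-to-end loss compounds the per-hop loss probabilities. A tiny sketch with made-up per-hop numbers:

```python
import math

hop_delay_ms = [0.4, 1.2, 0.7, 2.1]      # made-up per-hop delays
hop_loss     = [1e-4, 5e-4, 1e-4, 2e-4]  # made-up per-hop loss probabilities

end_to_end_delay = sum(hop_delay_ms)
end_to_end_loss = 1 - math.prod(1 - p for p in hop_loss)

print(f"delay = {end_to_end_delay:.1f} ms, loss ≈ {end_to_end_loss:.2e}")
```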


Scheduling & QoS

• End-to-End QoS & Resource Control
  • Buffer & bandwidth control → performance
  • Admission control to regulate traffic level
• Scheduling Concepts
  • Fairness/isolation
  • Priority, aggregation
• Fair Queueing & Variations
  • WFQ, PGPS
• Guaranteed Service
  • WFQ, rate-control
• Packet Dropping
  • Aggregation, drop priorities


FIFO Queueing

[Figure: arriving packets from all flows enter a single packet buffer feeding the transmission link; packets are discarded when the buffer is full.]

• All packet flows share the same buffer
• Transmission discipline: First-In, First-Out
• Buffering discipline: discard arriving packets if the buffer is full (alternatives: random discard; push out the head-of-line, i.e. oldest, packet)
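A minimal FIFO queue with tail drop, roughly matching the figure; the buffer size and the choice of tail drop (rather than the alternatives above) are illustrative.

```python
from collections import deque

class FifoQueue:
    """Single shared buffer, first-in first-out service, tail drop when full."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = deque()

    def enqueue(self, packet) -> bool:
        if len(self.buffer) >= self.capacity:
            return False            # discard arriving packet (tail drop)
        self.buffer.append(packet)
        return True

    def dequeue(self):
        """Transmit the oldest packet, if any."""
        return self.buffer.popleft() if self.buffer else None

q = FifoQueue(capacity=3)
for p in ["a", "b", "c", "d"]:      # "d" is dropped: buffer full
    q.enqueue(p)
print(q.dequeue(), q.dequeue())     # a b
```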


FIFO Queueing

• Cannot provide differential QoS to different packet flows
  • Different packet flows interact strongly
• Statistical delay guarantees via load control
  • Restrict the number of flows allowed (connection admission control)
  • Difficult to determine the performance delivered
• Finite buffer determines a maximum possible delay
• Buffer size determines loss probability
  • But depends on arrival & packet-length statistics
• Variation: packet enqueueing based on queue thresholds
  • Some packet flows encounter blocking before others
  • Higher loss, lower delay



FIFO Queueing with Discard Priority

[Figure (a): arriving packets share one buffer feeding the transmission link; packets are discarded only when the buffer is full.]
[Figure (b): same shared buffer, but Class 1 packets are discarded only when the buffer is full, while Class 2 packets are discarded as soon as the queue exceeds a threshold.]
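A sketch of scheme (b): one shared FIFO, but low-priority (Class 2) arrivals are dropped once the queue passes a threshold, while Class 1 can use the full buffer. The capacity and threshold values are illustrative.

```python
from collections import deque

class ThresholdFifo:
    """Shared FIFO buffer with class-based discard thresholds."""
    def __init__(self, capacity=10, class2_threshold=6):
        self.capacity = capacity
        self.class2_threshold = class2_threshold
        self.buffer = deque()

    def enqueue(self, packet, cls: int) -> bool:
        limit = self.capacity if cls == 1 else self.class2_threshold
        if len(self.buffer) >= limit:
            return False   # discard: buffer full (class 1) or over threshold (class 2)
        self.buffer.append((cls, packet))
        return True

    def dequeue(self):
        return self.buffer.popleft() if self.buffer else None

q = ThresholdFifo(capacity=4, class2_threshold=2)
print([q.enqueue("p", cls) for cls in (2, 2, 2, 1, 1, 1)])
# [True, True, False, True, True, False]
```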



HOL Priority Queueing

[Figure: separate buffers for high-priority and low-priority packets feed one transmission link; each buffer discards packets when full, and the low-priority queue is served only when the high-priority queue is empty.]

• High-priority queue serviced until empty
• High-priority queue has lower waiting time
• Buffers can be dimensioned for different loss probabilities
• Surge in the high-priority queue can cause the low-priority queue to saturate
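A head-of-line priority scheduler in miniature: the server always picks the highest non-empty priority queue. The queue count and packet names are illustrative.

```python
from collections import deque

class HolPriorityScheduler:
    """Head-of-line priority: always serve the highest-priority non-empty queue."""
    def __init__(self, num_classes=2):
        self.queues = [deque() for _ in range(num_classes)]  # index 0 = highest priority

    def enqueue(self, packet, cls: int):
        self.queues[cls].append(packet)

    def dequeue(self):
        for q in self.queues:        # scan from highest priority downward
            if q:
                return q.popleft()
        return None                  # all queues empty

s = HolPriorityScheduler()
s.enqueue("low-1", 1); s.enqueue("high-1", 0); s.enqueue("low-2", 1)
print([s.dequeue() for _ in range(3)])   # ['high-1', 'low-1', 'low-2']
```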


HOL Priority Features

[Figure: delay vs. per-class loads for the priority classes.]

• Provides differential QoS
• Pre-emptive priority: lower classes invisible
• Non-preemptive priority: lower classes impact higher classes through residual service times
• High-priority classes can hog all of the bandwidth & starve lower-priority classes
• Need to provide some isolation between classes
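The "residual service times" remark can be made precise with the standard M/G/1 non-preemptive priority (HOL) result; this formula is not on the slide, but it quantifies how lower classes delay higher ones through the packet found in service.

```latex
% Mean waiting time of class k (class 1 = highest priority), M/G/1, non-preemptive HOL.
% W_0 is the mean residual service time of the packet in service (summed over all classes,
% which is exactly how lower classes affect higher ones); \rho_i = \lambda_i E[S_i].
W_k = \frac{W_0}{\bigl(1-\sum_{i=1}^{k-1}\rho_i\bigr)\bigl(1-\sum_{i=1}^{k}\rho_i\bigr)},
\qquad
W_0 = \sum_i \frac{\lambda_i\, E[S_i^2]}{2}.
```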


Earliest Due Date Scheduling

[Figure: arriving packets pass through a tagging unit into a sorted packet buffer feeding the transmission link; packets are discarded when the buffer is full.]

• Queue in order of “due date”
  • Packets requiring low delay get earlier due dates
  • Packets without delay requirements get indefinite or very long due dates
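A sketch of the sorted buffer, using a heap ordered by due date (arrival time plus a per-class delay target); the class names and delay targets are made up.

```python
import heapq
import itertools

# Illustrative per-class delay targets (ms); None means no delay requirement.
DELAY_TARGET_MS = {"voice": 10, "video": 50, "data": None}
FAR_FUTURE = float("inf")

class EddQueue:
    """Earliest-due-date buffer: packets are dequeued in order of due date."""
    def __init__(self):
        self.heap = []
        self.seq = itertools.count()   # tie-breaker for equal due dates

    def enqueue(self, packet, cls: str, arrival_ms: float):
        target = DELAY_TARGET_MS[cls]
        due = arrival_ms + target if target is not None else FAR_FUTURE
        heapq.heappush(self.heap, (due, next(self.seq), packet))

    def dequeue(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

q = EddQueue()
q.enqueue("d1", "data", 0.0)
q.enqueue("v1", "voice", 2.0)    # due at 12 ms, so served before the data packet
print(q.dequeue(), q.dequeue())  # v1 d1
```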



Fair Queueing / Generalized Processor Sharing

[Figure: packet flows 1 through n each have their own logical queue; an approximated bit-level round-robin server feeds the transmission link of capacity C bits/second.]

• Each flow has its own logical queue: prevents hogging; allows differential loss probabilities
• C bits/sec allocated equally among non-empty queues
  • Transmission rate = C / n(t), where n(t) = number of non-empty queues
• Idealized system assumes fluid flow from queues
• Implementation requires approximation: simulate the fluid system; sort packets according to their completion time in the ideal system
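The approximation rule, serve packets in increasing order of their completion in the ideal fluid system, can be sketched with per-flow finish tags when every packet is already queued at t=0 (as in the examples that follow); handling arrivals during service needs the full virtual-time machinery of WFQ/PGPS. The flow names and weights are illustrative, and the tags are normalized service amounts whose ordering, not value, matters.

```python
def finish_tags(flows: dict, weights: dict):
    """Finish tags for packets all queued at t=0: F = previous F + length / weight.
    Serving packets in increasing tag order approximates the fluid system."""
    tags = []
    for flow, lengths in flows.items():
        f = 0.0
        for k, length in enumerate(lengths):
            f += length / weights[flow]
            tags.append((f, flow, k))
    return sorted(tags)   # transmission order of the packet-by-packet system

# Roughly the second example below: flow 1 has a 1-unit packet, flow 2 a 2-unit packet.
print(finish_tags({"flow1": [1.0], "flow2": [2.0]}, {"flow1": 1, "flow2": 1}))
# [(1.0, 'flow1', 0), (2.0, 'flow2', 0)]  -> flow1's packet is sent first
```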


[Figure, fluid flow vs. packet-by-packet fair queueing (example 1): buffers 1 and 2 each hold a one-unit packet at t=0. Fluid-flow system: both packets are served at rate 1/2 and both complete service at t=2. Packet-by-packet system: buffer 1's packet is served first at rate 1 while buffer 2's packet waits; buffer 1's packet completes at t=1, then buffer 2's packet is served at rate 1, completing at t=2.]

[Figure, example 2: buffer 1 holds a one-unit packet and buffer 2 a two-unit packet at t=0. Fluid-flow system: both packets are served at rate 1/2; buffer 1's packet completes at t=2, after which buffer 2's packet is served at rate 1 and completes at t=3. Packet-by-packet fair queueing: buffer 1's packet, which finishes first in the fluid system, is transmitted first at rate 1 while buffer 2's packet waits; buffer 2's packet is then transmitted at rate 1, completing at t=3.]



[Figure, example 3 (weighted fair queueing): buffers 1 and 2 each hold a packet at t=0, with buffer 2 given the larger weight. Fluid-flow system: buffer 1's packet is served at rate 1/4 and buffer 2's packet at rate 3/4; once buffer 2's packet completes, buffer 1's packet is served at rate 1, finishing at t=2. Packet-by-packet weighted fair queueing: buffer 2's packet is served first at rate 1 while buffer 1's packet waits; buffer 1's packet is then served at rate 1, finishing at t=2.]
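As a check on these examples, here is a small sketch that simulates the fluid (GPS) system directly for packets all present at t=0; the flow names, sizes, and weights are taken loosely from the figures above.

```python
def gps_completion_times(backlog: dict, weight: dict, C: float = 1.0):
    """Fluid (GPS) completion time of each flow's backlog, all present at t=0.
    Capacity C is shared among busy flows in proportion to their weights."""
    remaining = dict(backlog)
    done, t = {}, 0.0
    while remaining:
        total_w = sum(weight[f] for f in remaining)
        rate = {f: C * weight[f] / total_w for f in remaining}
        nxt = min(remaining, key=lambda f: remaining[f] / rate[f])  # next flow to empty
        dt = remaining[nxt] / rate[nxt]
        t += dt
        for f in remaining:
            remaining[f] -= rate[f] * dt
        done[nxt] = t
        del remaining[nxt]
    return done

# Example 2 above: equal weights, packet sizes 1 and 2.
print(gps_completion_times({"buf1": 1.0, "buf2": 2.0}, {"buf1": 1, "buf2": 1}))
# {'buf1': 2.0, 'buf2': 3.0}
# Example 3 with equal sizes and buffer 2 weighted 3x (illustrative weights).
print(gps_completion_times({"buf1": 1.0, "buf2": 1.0}, {"buf1": 1, "buf2": 3}))
# {'buf2': 1.333..., 'buf1': 2.0}
```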

