William Stallings
Data and Computer Communications

Chapter 12: Congestion in Data Networks

What Is Congestion?

Congestion occurs when the number of packets being transmitted through the network approaches the packet-handling capacity of the network

Congestion control aims to keep the number of packets below the level at which performance falls off dramatically

A data network is a network of queues

Generally, 80% utilization is the critical threshold

Finite queues mean data may be lost
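
As a rough illustration of why roughly 80% utilization is critical, the sketch below uses an assumed M/M/1 queueing model (not part of the slides): mean delay T = Ts / (1 - rho) grows sharply as utilization rho approaches 1.

```python
# Illustrative sketch only: a single M/M/1 queue (an assumed model, not
# from the slides) showing delay blowing up near full utilization.

def mm1_mean_delay(service_time_s: float, utilization: float) -> float:
    """Mean time in system for an M/M/1 queue: T = Ts / (1 - rho)."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_s / (1.0 - utilization)

SERVICE_TIME = 0.001  # hypothetical: 1 ms to transmit one packet
for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    delay_ms = mm1_mean_delay(SERVICE_TIME, rho) * 1000
    print(f"utilization {rho:.2f} -> mean delay {delay_ms:.1f} ms")
```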

Queues at a Node

Effects of Congestion

Arriving packets are stored in input buffers


Routing decision made

Packet moves to output buffer

Packets queued for output are transmitted as fast as possible

Statistical time division multiplexing

If packets arrive too fast to be routed, or too fast to be output, the buffers will fill

Can discard packets

Can use flow control

Can propagate congestion through network
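
The node behaviour described in the bullets above (finite input and output buffers, a routing step, discard when full) can be sketched roughly as follows; the buffer size and the discard-only policy are assumptions for the sketch.

```python
from collections import deque

class StoreAndForwardNode:
    """Rough sketch of a node with finite queues (sizes are illustrative).

    Arriving packets wait in an input buffer, a routing decision moves
    them to an output buffer, and packets that find a full buffer are
    simply discarded here (a real node may instead apply flow control
    or backpressure).
    """

    def __init__(self, buffer_size: int = 8):
        self.buffer_size = buffer_size
        self.input_buffer: deque = deque()
        self.output_buffer: deque = deque()
        self.discarded = 0

    def arrive(self, packet) -> None:
        if len(self.input_buffer) < self.buffer_size:
            self.input_buffer.append(packet)
        else:
            self.discarded += 1          # finite queues: data may be lost

    def route_one(self) -> None:
        # Routing decision made: packet moves to the output buffer.
        if not self.input_buffer:
            return
        packet = self.input_buffer.popleft()
        if len(self.output_buffer) < self.buffer_size:
            self.output_buffer.append(packet)
        else:
            self.discarded += 1

    def transmit_one(self):
        # Packets queued for output are sent as fast as the line allows.
        return self.output_buffer.popleft() if self.output_buffer else None
```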

Interaction of Queues

Ideal Performance

Practical Performance

Ideal assumes infinite buffers and no overhead

Buffers are finite

Overheads occur in exchanging congestion-control messages
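
In the ideal case (infinite buffers, no overhead), normalized throughput simply tracks the offered load until network capacity is reached; a trivial sketch of that ideal curve:

```python
def ideal_throughput(offered_load: float) -> float:
    """Ideal performance (infinite buffers, no overhead): throughput equals
    offered load up to capacity, then saturates.
    Both quantities are normalized to network capacity (= 1.0)."""
    return min(offered_load, 1.0)

for load in (0.25, 0.5, 1.0, 1.5, 2.0):
    print(f"offered load {load:.2f} -> ideal throughput {ideal_throughput(load):.2f}")
```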

Effects of Congestion - No Control

Mechanisms for Congestion Control

Backpressure

If a node becomes congested, it can slow down or halt the flow of packets from other nodes

May mean that other nodes have to apply
control on incoming packet rates

Propagates back to source

Can be restricted to the logical connections generating the most traffic

Used in connection-oriented networks that allow hop-by-hop congestion control (e.g. X.25)

Not used in ATM or frame relay

Only recently developed for IP
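
A minimal sketch of the backpressure idea, assuming a simple chain of nodes: a node stops forwarding while its downstream neighbour is congested, so its own queue grows and the pressure propagates back toward the source. The class name and threshold are illustrative.

```python
class HopNode:
    """Illustrative hop-by-hop backpressure (X.25-style idea, simplified)."""

    def __init__(self, name: str, threshold: int = 4):
        self.name = name
        self.threshold = threshold
        self.queue: list = []
        self.downstream: "HopNode | None" = None

    def congested(self) -> bool:
        return len(self.queue) >= self.threshold

    def accept(self, packet) -> None:
        self.queue.append(packet)

    def forward_one(self) -> None:
        # Halt forwarding while the next hop is congested (backpressure);
        # this node's queue then fills, pushing the pressure upstream.
        if self.queue and self.downstream and not self.downstream.congested():
            self.downstream.accept(self.queue.pop(0))

# Example chain: source-side node A feeds B, which feeds C.
a, b, c = HopNode("A"), HopNode("B"), HopNode("C")
a.downstream, b.downstream = b, c
```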

Choke Packet


Control packet

Generated at congested node

Sent to source node

e.g. ICMP source quench

From router or destination

Source cuts back until no more source quench messages arrive

Sent for every discarded packet, or in anticipation of congestion

Rather crude mechanism
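
A hedged sketch of how a source might react to choke packets such as ICMP Source Quench; the slides only say the source "cuts back", so the halving and slow ramp-up policy here is an assumption.

```python
class ChokeAwareSource:
    """Sketch of a source that cuts back while choke packets keep arriving
    and cautiously recovers once they stop. Rates and factors are illustrative."""

    def __init__(self, rate_pps: float = 1000.0, floor_pps: float = 10.0):
        self.rate_pps = rate_pps
        self.floor_pps = floor_pps

    def on_choke_packet(self) -> None:
        # e.g. an ICMP Source Quench arrived: reduce the sending rate.
        self.rate_pps = max(self.floor_pps, self.rate_pps / 2.0)

    def on_quiet_interval(self) -> None:
        # No choke packets for a while: ramp the rate back up slowly.
        self.rate_pps *= 1.1
```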

Implicit Congestion Signaling

Transmission delay may increase with
congestion

Packets may be discarded

Source can detect these as implicit indications of
congestion

Useful on connectionless (datagram) networks

e.g. IP based


(TCP includes congestion and flow control - see chapter 17)

Used in frame relay LAPF
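
A sketch of implicit congestion signalling at the source: congestion is inferred from rising round-trip delay or from discarded (timed-out) packets, with no explicit message from the network. The smoothing constant and threshold are assumptions, loosely in the spirit of TCP's RTT estimation (Chapter 17).

```python
class ImplicitCongestionDetector:
    """Infer congestion from delay growth or packet loss (illustrative thresholds)."""

    def __init__(self, baseline_rtt_s: float):
        self.baseline_rtt = baseline_rtt_s
        self.smoothed_rtt = baseline_rtt_s

    def on_ack(self, measured_rtt_s: float) -> bool:
        # Exponentially weighted RTT estimate; rising delay hints at congestion.
        self.smoothed_rtt = 0.875 * self.smoothed_rtt + 0.125 * measured_rtt_s
        return self.smoothed_rtt > 1.5 * self.baseline_rtt

    def on_timeout(self) -> bool:
        # A discarded packet (detected via timeout) is also an implicit signal.
        return True
```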

Explicit Congestion Signaling

Network alerts end systems to increasing congestion

End systems take steps to reduce offered load

Backwards

Congestion avoidance required for traffic in the opposite direction to the received notification

Forwards

Congestion avoidance required for traffic in the same direction as the received notification

Categories of Explicit Signaling

Binary

A bit set in a packet indicates congestion

Credit based


Indicates how many packets the source may send

Common for end to end flow control

Rate based

Supplies an explicit data rate limit

e.g. ATM
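
The binary category above can be sketched as a single congestion bit, in the spirit of frame relay FECN/BECN or the ATM EFCI bit: a switch sets the bit when its queue is deep, and the end system trims its offered load when it sees the bit. The dict-based "cell", the threshold, and the rate factors are illustrative assumptions.

```python
QUEUE_THRESHOLD = 32  # illustrative queue depth at which a switch marks cells

def mark_if_congested(cell: dict, queue_depth: int) -> dict:
    """Switch side: set a single congestion-indication bit when congested."""
    if queue_depth > QUEUE_THRESHOLD:
        cell["congestion_bit"] = 1
    return cell

def adjust_rate(cell: dict, current_rate_bps: float) -> float:
    """End-system side: reduce offered load if the bit is set, else probe upward."""
    if cell.get("congestion_bit"):
        return current_rate_bps * 0.5
    return current_rate_bps * 1.05
```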

Traffic Management

Fairness

Quality of service

May want different treatment for different
connections

Reservations

e.g. ATM

Traffic contract between user and network

Congestion Control in Packet
Switched Networks

Send a control packet to some or all source nodes


Requires additional traffic during congestion

Rely on routing information

May react too quickly

End to end probe packets

Adds to overhead

Add congestion info to packets as they cross
nodes

Either backwards or forwards

ATM Traffic Management

High speed, small cell size, limited overhead bits

Still evolving

Requirements

Majority of traffic not amenable to flow control

Feedback is slow because transmission time is small compared with propagation delay

Wide range of application demands


Different traffic patterns

Different network services

High-speed switching and transmission increase volatility

Latency/Speed Effects

ATM at 150 Mbps

≈2.8 × 10⁻⁶ seconds to insert a single cell

Time to traverse the network depends on propagation delay and switching delay

Assume propagation at two-thirds speed of light

If source and destination are on opposite sides of the USA, propagation time ≈ 48 × 10⁻³ seconds

Given implicit congestion control, by the time the dropped-cell notification has reached the source, 7.2 × 10⁶ bits have been transmitted


So, this is not a good strategy for ATM
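
Checking the arithmetic above with the figures given in the slides (150 Mbps line rate, 53-octet cells, 48 ms for the feedback to reach the source):

```python
# Worked arithmetic for the latency/speed example (figures from the slides).
CELL_BITS = 53 * 8          # one ATM cell is 53 octets
LINE_RATE_BPS = 150e6       # 150 Mbps
FEEDBACK_DELAY_S = 48e-3    # time for the loss notification to reach the source

insertion_time_s = CELL_BITS / LINE_RATE_BPS            # ~2.8e-6 s per cell
bits_sent_meanwhile = LINE_RATE_BPS * FEEDBACK_DELAY_S  # ~7.2e6 bits in flight

print(f"cell insertion time: {insertion_time_s:.2e} s")
print(f"bits sent before the source can react: {bits_sent_meanwhile:.1e}")
```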

Cell Delay Variation

For ATM voice/video, data is a stream of cells

Delay across network must be short

Rate of delivery must be constant

There will always be some variation in transit delay

Delay cell delivery so that a constant bit rate can be maintained to the application

Time Re-assembly of CBR Cells
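
A rough sketch of the time re-assembly idea: each cell is delivered to the application at its generation time plus a fixed playout offset, absorbing the variation in transit delay so the application sees a constant rate. The fixed offset is an assumption; real receivers typically adapt it.

```python
import heapq
import itertools

class PlayoutBuffer:
    """Delay cell delivery so a constant bit rate is maintained to the application."""

    def __init__(self, playout_offset_s: float = 0.020):  # illustrative 20 ms offset
        self.offset = playout_offset_s
        self._heap: list = []
        self._seq = itertools.count()  # tie-breaker so cells never compare directly

    def on_cell_arrival(self, cell, generation_time_s: float) -> None:
        deliver_at = generation_time_s + self.offset
        heapq.heappush(self._heap, (deliver_at, next(self._seq), cell))

    def cells_due(self, now_s: float) -> list:
        # Release, in order, every cell whose scheduled delivery time has passed.
        due = []
        while self._heap and self._heap[0][0] <= now_s:
            due.append(heapq.heappop(self._heap)[2])
        return due
```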

Network Contribution to Cell Delay Variation

Packet switched networks

Queuing delays

Routing decision time

Frame relay


As above, but to a lesser extent

ATM

Less than frame relay

ATM protocol designed to minimize processing
overheads at switches

ATM switches have very high throughput

Only noticeable delay is from congestion

Must not accept load that causes congestion

Cell Delay Variation at the UNI

Application produces data at fixed rate

Processing at three layers of ATM causes delay

Interleaving cells from different connections

Operation and maintenance (OAM) cell interleaving

If synchronous digital hierarchy (SDH) frames are used, these are inserted at the physical layer

Cannot predict these delays


Origins of Cell Delay Variation

Traffic and Congestion Control Framework

ATM layer traffic and congestion control should
support QoS classes for all foreseeable network
services

Should not rely on AAL protocols that are network specific, nor on higher-level application-specific protocols

Should minimize network and end to end system
complexity

Timings Considered

Cell insertion time

Round trip propagation time

Connection duration

Long term

Determine whether a given new connection can
be accommodated


Agree performance parameters with the subscriber
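
A hedged sketch of the long-term decision above, connection admission control: accept a new connection only if the aggregate of declared rates stays within a target fraction of link capacity. Real ATM admission uses the full traffic contract (peak and sustainable cell rates, burst size), so the peak-rate-only check and the 80% target here are simplifying assumptions.

```python
def admit_connection(new_peak_rate_bps: float,
                     existing_peak_rates_bps: list,
                     link_capacity_bps: float,
                     utilization_target: float = 0.8) -> bool:
    """Accept the new connection only if total declared peak rate fits the target."""
    total = new_peak_rate_bps + sum(existing_peak_rates_bps)
    return total <= utilization_target * link_capacity_bps

# Example: a 20 Mbps request on a 150 Mbps link already carrying 90 Mbps of peak rate.
print(admit_connection(20e6, [50e6, 40e6], 150e6))  # True: 110 Mbps <= 120 Mbps
```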
