CHAPTER 13
Transport over Wireless Networks
HUNG-YUN HSIEH and RAGHUPATHY SIVAKUMAR
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta
13.1 INTRODUCTION
The Internet has undergone a spectacular change over the last 10 years in terms of its
size and composition. At the heart of this transformation has been the evolution of in-
creasingly better wireless networking technologies, which in turn has fostered growth in
the number of mobile Internet users (and vice versa). Industry market studies forecast an
installed base of about 100 million portable computers by the year 2004, in addition to
around 30 million hand-held devices and a further 100 million “smart phones.” With
such increasing numbers of mobile and wireless devices acting as primary citizens of
the Internet, researchers have been studying the impact of the wireless networking tech-
nologies on the different layers of the network protocol stack, including the physical,
data-link, medium-access, network, transport, and application layers [13, 5, 15, 18, 4, 17,
16].
Any such study is made nontrivial by the diversity of wireless networking technolo-
gies in terms of their characteristics. Specifically, wireless networks can be broadly clas-
sified based on their coverage areas as picocell networks (high bandwidths of up to 20
Mbps, short latencies, low error rates, and small ranges of up to a few meters), micro-
cell networks (high bandwidths of up to 10 Mbps, short latencies, low error rates, and
small ranges of up to a few hundred meters), macrocell networks (low bandwidths of
around 50 kbps, relatively high and varying latencies, high error rates of up to 10%
packet error rates, and large coverage areas of up to a few miles), and global cell net-
works (varying and asymmetric bandwidths, large latencies, high error rates, and large
coverage areas of hundreds of miles). The problem is compounded when network mod-
els other than the conventional cellular network model are also taken into consideration
[11].
The statistics listed above are for current-generation wireless networks, and can be
expected to improve with future generations. However, given their projected bandwidths,
latencies, error rates, etc., the key problems and solutions identified and summarized in


this chapter will hold equally well for future generations of wireless networks [9].
Handbook of Wireless Networks and Mobile Computing, Edited by Ivan Stojmenović
Copyright © 2002 John Wiley & Sons, Inc.
ISBNs: 0-471-41902-8 (Paper); 0-471-22456-1 (Electronic)

Although the impact of wireless networks can be studied along the different dimensions of protocol layers, classes of wireless networks, and network models, the focus of this
chapter is the transport layer in micro- and macrocell wireless networks. Specifically, we
will focus on the issue of supporting reliable and adaptive transport over such wireless
networks.
The transmission control protocol (TCP) is the most widely used transport protocol in the current Internet, carrying an estimated 95% of its traffic; hence, it is critical to address this category of transport protocols. This traffic is due in large part to web traffic (HTTP, the protocol used between web clients and servers, uses TCP as the underlying transport protocol). It is therefore reasonable to assume that a significant portion of the data transfer performed by mobile devices will also require semantics similar, if not identical, to those supported by TCP. It is for this reason that most related studies, and newer transport approaches, use TCP as the starting point to build upon and the reference to compare against. In keeping with this line of thought, in this chapter we will first summarize the ill effects that wireless network characteristics have on TCP's performance. Later, we elaborate on some of the TCP extensions and other transport protocols proposed to overcome these ill effects.
We provide a detailed overview of TCP in the next section. We identify the mechanisms
for achieving two critical tasks—reliability and congestion control—and their drawbacks
when operating over a wireless network. We then discuss three different approaches for
improving transport layer performance over wireless networks:
1. Link layer approaches that enhance TCP’s performance without requiring any
change at the transport layer and maintain the end-to-end semantics of TCP by us-
ing link layer changes
2. Indirect approaches that break the end-to-end semantics of TCP and improve transport layer performance by masking the characteristics of the wireless portion of the connection from the static host (the host in the wireline network)
3. End-to-end approaches that change TCP to improve transport layer performance
and maintain the end-to-end semantics
We identify one protocol each for the above categories, summarize the approach fol-
lowed by the protocol, and discuss its advantages and drawbacks in different environ-
ments. Finally, we compare the three protocols and provide some insights into their behav-
ior vis-à-vis each other.
The contributions of this chapter are thus twofold: (i) we first identify the typical
characteristics of wireless networks, and discuss the impact of each of the characteristics
on the performance of TCP, and (ii) we discuss three different approaches to either ex-
tend TCP or adopt a new transport protocol to address the unique characteristics of wire-
less networks. The rest of the chapter is organized as follows: In Section 13.2, we pro-
vide a background overview of the mechanisms in TCP. In Section 13.3, we identify
typical wireless network characteristics and their impact on the performance of TCP. In
Section 13.4, we discuss three transport layer approaches that address the problems due
to the unique characteristics of wireless networks. In Section 13.5, we conclude the
chapter.
13.2 OVERVIEW OF TCP
13.2.1 Overview
TCP is a connection-oriented, reliable byte stream transport protocol with end-to-end con-
gestion control. Its role can be broken down into four different tasks: connection manage-
ment, flow control, congestion control, and reliability. Because of the greater significance
of the congestion control and reliability schemes in the context of wireless networks, we
provide an overview of only those schemes in the rest of this section.
13.2.2 Reliability
TCP uses positive acknowledgment (ACK) to acknowledge successful reception of a segment. Instead of acknowledging only the segment received, TCP employs cumulative acknowledgment, in which an ACK with acknowledgment number N acknowledges all data bytes with sequence numbers up to N – 1. That is, the acknowledgment number in an ACK identifies the sequence number of the next byte expected. With cumulative acknowledgment, a
TCP receiver does not have to acknowledge every segment received, but only the segment
with the highest sequence number. Additionally, even if an ACK is lost during transmission,
reception of an ACK with a higher acknowledgment number automatically solves the prob-
lem. However, if a segment is received out of order, its ACK will carry the sequence num-
ber of the missing segment instead of the received segment. In such a case, a TCP sender
may not be able to know immediately if that segment has been received successfully.
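For illustration, the cumulative scheme described above can be sketched as follows. This is a simplified, segment-granular model (real TCP acknowledges byte ranges), and the function name `cumulative_ack` is ours:

```python
# Toy model of cumulative acknowledgment: the receiver always ACKs the
# number of the next in-order segment it expects, regardless of any
# out-of-order segments it may already be holding.

def cumulative_ack(received):
    """received: set of segment numbers delivered so far."""
    n = 0
    while n in received:
        n += 1
    return n  # next expected segment number
```

If segments 0, 1, 2, and 4 arrive but segment 3 is lost, every subsequent ACK carries the number 3, which is how the sender eventually notices the gap.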
At the sender end, a transmitted segment is considered lost if no acknowledgment for
that segment is received, which happens either because the segment does not reach the
destination, or the acknowledgment is lost on its way back. TCP will not, however, wait
indefinitely to decide whether a segment is lost. Instead, TCP keeps a retransmission time-
out (RTO) timer that is started every time a segment is transmitted. If no ACK is received
by the time the RTO expires, the segment is considered lost, and retransmission of the seg-
ment is performed. (The actual mechanisms used in TCP are different because of opti-
mizations. However, our goal here is to merely highlight the conceptual details behind the
mechanisms.)
Proper setting of the RTO value is thus important for the performance of TCP. If the
RTO value is too small, TCP will timeout unnecessarily for an acknowledgment that is
still on its way back, thus wasting network resources to retransmit a segment that has al-
ready been delivered successfully. On the other hand, if the RTO value is too large, TCP
will wait too long before retransmitting the lost segment, thus leaving the network resources underutilized. In practice, the TCP sender keeps a running average of the segment round-trip times (RTT_avg) and the deviation (RTT_dev) for all acknowledged segments. The RTO is set to RTT_avg + 4 · RTT_dev.
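As a sketch, the running estimates can be maintained with exponentially weighted moving averages; the gains below (1/8 and 1/4) are those commonly used in TCP implementations, and the class name is ours:

```python
# Sketch of TCP-style RTO estimation: smoothed RTT average and deviation,
# combined as RTO = RTT_avg + 4 * RTT_dev (conceptual; real stacks add
# details such as timer granularity and a minimum RTO).

class RtoEstimator:
    def __init__(self, alpha=0.125, beta=0.25):
        self.rtt_avg = None   # smoothed round-trip time (seconds)
        self.rtt_dev = None   # smoothed mean deviation (seconds)
        self.alpha = alpha
        self.beta = beta

    def on_ack(self, sample):
        """Fold a new RTT sample into the running average and deviation."""
        if self.rtt_avg is None:
            self.rtt_avg = sample
            self.rtt_dev = sample / 2
        else:
            err = sample - self.rtt_avg
            self.rtt_avg += self.alpha * err
            self.rtt_dev += self.beta * (abs(err) - self.rtt_dev)

    @property
    def rto(self):
        # RTO = RTT_avg + 4 * RTT_dev, as described above
        return self.rtt_avg + 4 * self.rtt_dev
```

Note how steady samples shrink the deviation term, and hence the RTO, whereas varying samples inflate it; this observation becomes important in Section 13.3.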
The problem of segment loss is critical to TCP not only in how TCP detects it, but also in how TCP interprets it. Because TCP was conceived for a wireline network with a very low transmission error rate, TCP assumes all losses to be due to congestion. Hence, upon the detection of a segment loss, TCP will invoke congestion control to alleviate the problem, as discussed in the next subsection.
13.2.3 Congestion Control
TCP employs a window-based scheme for congestion control, in which a TCP sender is
allowed to have a window size worth of bytes outstanding (unacknowledged) at any given
instant. In order to track the capacities of the receiver and the network, and not to overload either, two separate windows are maintained: a receiver window and a congestion window. The receiver window is feedback from the receiver about its buffering capacity, and
the congestion window is an approximation of the available network capacity. We now de-
scribe the three phases of the congestion control scheme in TCP.
Slow Start
When a TCP connection is established, the TCP sender learns of the capacity of the re-
ceiver through the receiver window size. The network capacity, however, is still unknown
to the TCP sender. Therefore, TCP uses a slow start mechanism to probe the capacity of
the network and determine the size of the congestion window. Initially, the congestion
window size is set to the size of one segment, so TCP sends only one segment to the re-
ceiver and then waits for its acknowledgment. If the acknowledgment does come back,
it is reasonable to assume the network is capable of transporting at least one segment.
Therefore, the sender increases its congestion window by one segment’s worth of bytes
and sends a burst of two segments to the receiver. The return of two ACKs from the re-
ceiver encourages TCP to send more segments in the next transmission. By increasing

the congestion window again by two segments’ worth of bytes (one for each ACK), TCP
sends a burst of four segments to the receiver. As a consequence, for every ACK re-
ceived, the congestion window increases by one segment; effectively, the congestion
window doubles for each full window worth of segments successfully acknowledged.
Since TCP paces the transmission of segments to the return of ACKs, TCP is said to be
self-clocking, and we refer to this mechanism as ACK-clocking in the rest of the chap-
ter. The growth in congestion window size continues until it is greater than the receiver
window or some of the segments and/or their ACKs start to get lost. Because TCP at-
tributes segment loss to network congestion, it immediately enters the congestion avoid-
ance phase.
Congestion Avoidance
As soon as the network starts to drop segments, it is inappropriate to increase the conges-
tion window size multiplicatively as in the slow start phase. Instead, a scheme with addi-
tive increase in congestion window size is used to probe the network capacity. In the con-
gestion avoidance phase, the congestion window grows by one segment for each full
window of segments that have been acknowledged. Effectively, if the congestion window
equals N segments, it increases by 1/N segments for every ACK received.
To dynamically switch between slow start and congestion avoidance, a slow start
threshold (ssthresh) is used. If the congestion window is smaller than ssthresh, the TCP
sender operates in the slow start phase and increases its congestion window exponentially;
otherwise, it operates in congestion avoidance phase and increases its congestion window
linearly. When a connection is established, ssthresh is set to 64 K bytes. Whenever a seg-
ment gets lost, ssthresh is set to half of the current congestion window. If the segment loss
is detected through duplicate ACKs (explained later), TCP reduces its congestion window
by half. If the segment loss is detected through a time-out, the congestion window is reset
to one segment’s worth of bytes. In this case, TCP will operate in slow start phase and in-
crease the congestion window exponentially until it reaches ssthresh, after which TCP will
operate in congestion avoidance phase and increase the congestion window linearly.
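The interplay of the two phases can be illustrated with a toy trace of the congestion window (in segments) per round-trip time, assuming no losses. The function name and per-RTT bookkeeping are our simplifications of TCP's per-ACK updates:

```python
# Toy congestion-window trace across RTTs with no losses:
# exponential growth below ssthresh (slow start), then linear
# growth above it (congestion avoidance).

def window_trace(ssthresh, rtts):
    cwnd = 1
    trace = [cwnd]
    for _ in range(rtts):
        if cwnd < ssthresh:
            cwnd *= 2   # slow start: window doubles per RTT
        else:
            cwnd += 1   # congestion avoidance: +1 segment per RTT
        trace.append(cwnd)
    return trace
```

With ssthresh set to 8 segments, `window_trace(8, 5)` yields [1, 2, 4, 8, 9, 10]: the window doubles until it reaches the threshold, then probes linearly.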

Fast Retransmit
Because TCP employs a cumulative acknowledgment scheme, when the segments are re-
ceived out of order, all their ACKs will carry the same acknowledgment number indicat-
ing the next expected segment in sequence. This phenomenon introduces duplicate ACKs
at the TCP sender. An out-of-order delivery can result from either delayed or lost seg-
ments. If the segment is lost, eventually the sender times out and a retransmission is initi-
ated. If the segment is simply delayed and finally received, the acknowledgment number
in ensuing ACKs will reflect the receipt of all the segments received in sequence thus far.
Since the connection tends to be underutilized waiting for the timer to expire, TCP em-
ploys a fast retransmit scheme to improve the performance. Heuristically, if TCP receives
three or more duplicate ACKs, it assumes that the segment is lost and retransmits before
the timer expires. Also, when inferring a loss through the receipt of duplicate ACKs, TCP
cuts down its congestion window size by half. Hence, TCP’s congestion control scheme is
based on the linear increase multiplicative decrease paradigm (LIMD) [8]. On the other
hand, if the segment loss is inferred through a time-out, the congestion window is reset all
the way to one, as discussed before.
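A minimal sketch of the duplicate-ACK heuristic follows (function name ours; real TCP additionally performs fast recovery, which we omit here):

```python
# Fast retransmit heuristic: on seeing three duplicate ACKs for the same
# acknowledgment number, infer a loss, retransmit early, and halve the
# congestion window (multiplicative decrease).

def react_to_acks(acks, cwnd):
    """acks: acknowledgment numbers in arrival order.
    Returns (action, new_cwnd)."""
    dup = 0
    last = None
    for a in acks:
        if a == last:
            dup += 1
            if dup >= 3:                      # three duplicate ACKs
                return ("retransmit", max(cwnd // 2, 1))
        else:
            last, dup = a, 0
    return ("wait", cwnd)
```

For example, the ACK stream 5, 6, 7, 7, 7, 7 (segment 7 missing) triggers a retransmission and halves a 16-segment window to 8, whereas a timeout would have reset it to one.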
In the next section, we will study the impact of wireless network characteristics on each
of the above mechanisms.
13.3 TCP OVER WIRELESS NETWORKS
In the previous section, we described the basic mechanisms used by TCP to support relia-
bility and congestion control. In this section, we identify the unique characteristics of a
wireless network, and for each of the characteristics discuss how it impacts TCP’s perfor-
mance.
13.3.1 Overview
The network model that we assume for the discussions on the impact of wireless network
characteristics on TCP’s performance is that of a conventional cellular network. The mo-
bile hosts are assumed to be directly connected to an access point or base station, which in
turn is connected to the backbone wireline Internet through a distribution network. Note
that the nature of the network model used is independent of the specific type of wireless
network it is used in. In other words, the wireless network can be a picocell, microcell, or macrocell network and, irrespective of its type, can use a particular network model. However, the specific type of network might have an impact on certain aspects like the available bandwidth, channel access scheme, degree of path asymmetry, etc. Finally, the
connections considered in the discussions are assumed to be between a mobile host in the
wireless network and a static host in the backbone Internet. Such an assumption is reason-
able, given that most of the Internet applications use the client–server model (e.g., http,
ftp, telnet, e-mail, etc.) for their information transfer. Hence, mobile hosts will be expect-
ed to predominantly communicate with backbone servers, rather than with other mobile
hosts within the same wireless network or in other wireless networks. However, with the evolution of applications in which peer entities communicate with each other more often, such an assumption might not hold true.
13.3.2 Random Losses
A fundamental difference between wireline and wireless networks is the presence of ran-
dom wireless losses in the latter. Specifically, the effective bit error rates in wireless networks are significantly higher than those in a wireline network because of higher cochannel
interference, host mobility, multipath fading, disconnections due to coverage limitations,
etc. Packet error rates ranging from 1% in microcell wireless networks up to 10% in
macrocell networks have been reported in experimental studies [4, 17]. Although the high-
er packet error rates in wireless networks inherently degrade the performance experienced
by connections traversing such networks, they cause an even more severe degradation in
the throughput of connections using TCP as the transport protocol.
As described in the previous section, TCP multiplicatively decreases its congestion
window upon experiencing losses. The decrease is performed because TCP assumes that
all losses in the network are due to congestion, and such a multiplicative decrease is es-
sential to avoid congestion collapse in the event of congestion [8]. However, TCP does not
have any mechanisms to differentiate between congestion-induced losses and other ran-
dom losses. As a result, when TCP observes random wireless losses, it wrongly interprets
such losses as congestion losses, and cuts down its window, thus reducing the throughput of the connection. This effect is more pronounced in low-bandwidth wireless networks, as
window sizes are typically small and, hence, packet losses typically result in a retransmis-
sion timeout (resulting in the window size being cut down to one) due to the lack of
enough duplicate acknowledgments for TCP to go into the fast retransmit phase. Even in
high-bandwidth wireless networks, if bursty random losses (due to cochannel interference
or fading) are more frequent, this phenomenon of TCP experiencing a timeout is more
likely, because of the multiple losses within a window resulting in the lack of sufficient
number of acknowledgments to trigger a fast retransmit.
If the loss probability is p, it can be shown that TCP's throughput is proportional to 1/√p [14]. Hence, as the loss rate increases, TCP's throughput degrades proportionally to √p. The degradation of TCP's throughput has been extensively studied in several related works [14, 3, 17].
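The 1/√p relation makes this sensitivity concrete. As a rough illustration (the proportionality constant cancels when comparing two loss rates, so only the ratio matters; the function name is ours):

```python
import math

# Relative TCP throughput at loss rate p, normalized against a reference
# loss rate p_ref, using the throughput ~ 1/sqrt(p) relation from [14].
def relative_throughput(p, p_ref=0.01):
    return math.sqrt(p_ref / p)
```

Going from a 1% to a 4% packet loss rate halves the achievable throughput, since √(0.01/0.04) = 0.5.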
13.3.3 Large and Varying Delay
The delay along the end-to-end path for a connection traversing a wireless network is typ-
ically large and varying. The reasons include:
• Low Bandwidths. When the bandwidth of the wireless link is very low, transmission delays are large, contributing to a large end-to-end delay. For example, with a packet size of 1 KB and a channel bandwidth of 20 Kbps [representative of the bandwidth available over a wide-area wireless network like CDPD (cellular digital packet data)], the transmission delay for a packet will be 400 ms. Hence, the typical round-trip times for connections over such networks can be on the order of a few seconds.
• Latency in the Switching Network. The base station of the wireless network is typically connected to the backbone network through a switching network belonging to the wireless network provider. Several tasks, including switching, bookkeeping, etc., are taken care of by the switching network, albeit at the cost of increased latency. Experimental studies have shown this latency to be nontrivial when compared to the typical round-trip times identified earlier [17].
• Channel Allocation. Most wide-area wireless networks are overlaid on infrastructures built for voice traffic. Consequently, data traffic typically shares the available channel with voice traffic. Due to the real-time nature of the voice traffic, data traffic is typically given lower precedence in the channel access scheme. For example, in CDPD, which is overlaid on the AMPS voice network infrastructure, data traffic is only allocated channels that are not in use by the voice traffic. A transient phase in which there are no free channels available can cause a significant increase in the end-to-end delay. Furthermore, since the delay depends on the amount of voice traffic in the network, it can also vary widely over time.
• Asymmetry in Channel Access. If the base station and the mobile hosts use the same channel in a wireless network, the channel access scheme is typically biased toward the base station [1]. As a result, the forward traffic of a connection experiences less delay than the reverse traffic. However, since TCP uses ACK-clocking, as described in the previous section, any delay in the receipt of ACKs will slow down the progression of the congestion window size at the sender end, causing degradation in the throughput enjoyed by the connection.
• Unfairness in Channel Access. Most medium access protocols in wireless networks use a binary exponential scheme for backing off after collisions. However, such a scheme has been well studied and characterized to exhibit the "capture syndrome," wherein mobile hosts that get access to the channel tend to retain access until they are no longer backlogged. This unfairness in channel access can lead to random and prolonged delays in mobile hosts getting access to the underlying channel, further increasing and varying the round-trip times experienced by TCP.
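The transmission-delay figure in the first bullet above follows directly from packet size and link bandwidth. As a quick check (taking 1 KB as 1000 bytes, which is what the 400 ms figure implies):

```python
# Transmission delay, in milliseconds, for a packet of the given size
# over a link of the given raw bandwidth.
def tx_delay_ms(packet_bytes, bandwidth_kbps):
    return packet_bytes * 8 / (bandwidth_kbps * 1000) * 1000.0
```

A 1000-byte packet over a 20 Kbps CDPD-like link takes 8000 bits / 20,000 bps = 400 ms, before any queueing, channel-allocation, or switching-network delay is added.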

Because of the above reasons, connections over wireless networks typically experience
large and varying delays. At the same time, TCP relies heavily on its estimation of the
round-trip time for both its window size progression (ACK-clocking) and its retransmission timeout (RTO) computation (RTT_avg + 4 · RTT_dev). When the delay is large and varying, the window progression is slow. More critically, the retransmission timeout is artificially inflated because of the large deviation due to varying delays. Furthermore, the RTT
estimation is skewed for reasons that we state under the next subsection. Experimental
studies over wide-area wireless networks have shown the retransmission timeout values to
be as high as 32 seconds for a connection with an average round trip time of just 1 second
[17]. This adversely affects the performance of TCP because, on packet loss in the absence
of duplicate ACKs to trigger fast retransmit, TCP will wait for an RTO amount of time be-
fore inferring a loss, thereby slowing down the progression of the connection.
13.3.4 Low Bandwidth
Wireless networks are characterized by significantly lower bandwidths than their wireline
counterparts. Pico- and microcell wireless networks offer bandwidths in the range of 2–10
Mbps. However, macrocell networks that include wide-area wireless networks typically
offer bandwidths of only a few tens of kilobits per second. CDPD offers 19.2 Kbps, and
the bandwidth can potentially be shared by up to 30 users. RAM (Mobitex) offers a bandwidth of around 8 Kbps, and ARDIS offers either 4.8 Kbps or 19.2 Kbps of bandwidth.
The above figures represent the raw bandwidths offered by the respective networks; the
effective bandwidths can be expected to be even lower. Such low bandwidths adversely af-
fect TCP’s performance because of TCP’s bursty nature.
In TCP's congestion control mechanism, when the congestion window size is increased, packets are burst out in a bunch as long as there is room under the window size. During the slow start phase, this phenomenon of bursting out packets is more pronounced
since the window size increases exponentially. When the low channel bandwidth is cou-
pled with TCP’s bursty nature, packets within the same burst experience increasing round-
trip times because of the transmission delays experienced by the packets ahead of them in
the mobile host's buffer. For example, when the TCP sender at the mobile host bursts out a bunch of 8 packets, packet i among the 8 experiences a round-trip time that includes the transmission times for the i – 1 packets ahead of it in the buffer. When the packets experience different round-trip times, the average round-trip time maintained by TCP is artificially increased and, more importantly, the average deviation increases. This phenomenon, coupled with the other phenomena described in the previous subsection, results in the retransmission timeout being inflated to a large value. Consequently, TCP reacts to losses in a delayed fashion, reducing its throughput.
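The per-packet inflation within a burst can be written down directly. With the CDPD-like numbers from Section 13.3.3 (roughly 400 ms of transmission delay per packet), a burst of 8 spreads the sampled round-trip times over nearly 3 seconds (function name ours; queueing at intermediate hops is ignored):

```python
# RTT seen by each packet in a burst: packet i (0-based) must wait for
# the i packets ahead of it in the mobile host's buffer to be transmitted
# before its own transmission begins.
def burst_rtts(base_rtt_s, tx_delay_s, burst_size):
    return [base_rtt_s + i * tx_delay_s for i in range(burst_size)]
```

For a 1-second base RTT and 0.4-second transmission delay, the 8 packets of a burst sample RTTs from 1.0 s up to 3.8 s; it is this spread, not the path itself, that inflates RTT_dev and hence the RTO.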
13.3.5 Path Asymmetry
Although a transport protocol’s performance should ideally be influenced only by the for-
ward path characteristics [17], TCP, by virtue of its ACK-clocking-based window control,
depends on both the forward path and reverse path characteristics for its performance. At
an extreme, a TCP connection will freeze if acknowledgments do not get through from the
receiver to the sender, even if there is available bandwidth on the forward path. Given this
nature of TCP, there are two characteristics that negatively affect its performance:
1. Low Priority for the Path from Mobile Host. In most wireless networks that use the
same channel for upstream and downstream traffic, the base station gains precedence
in access to the channel. For example, the CDPD network's DSMA/CD channel access exhibits this behavior [17]. When such a situation arises, assuming the forward path is toward the mobile host, the acknowledgments get lower priority than the data