
Network Traffic Analysis
Using tcpdump
Judy Novak
Judy Novak

Step by Step Analysis
All material Copyright Novak, 2000, 2001. All rights reserved.
Step by Step Analysis

Introduction to tcpdump

Writing tcpdump Filters

Examination of Datagram Fields

Beginning Analysis

Real World Examples

Objectives

Allow you to participate in the analysis process


A tcpdump event of interest will be displayed

We’ll walk through the analysis process, complete
with missteps, to see how it is actually done
In this section, we’ll examine the practical day-to-day aspects of analyzing traffic.
Event of Interest 1

The following records appear in the hourly wrap-up
00:36:00.510000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 21931:1480@0+)
00:36:00.620000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 28843:1480@0+)
00:36:00.710000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 33707:1480@0+)
00:36:00.830000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3329 (frag 40875:1480@0+)
00:36:00.910000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 46251:1480@0+)
00:36:01.020000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 52907:1480@0+)
On the hourly wrap-up we see fragmented UDP packets arriving from source host 1.2.3.4. One of the first questions should be what filter extracted these records, followed by why they are being brought to your attention.
Initial Assessment

Records appeared on hourly wrap-up because of fragmentation

IP filter looks for IP datagrams with more
fragments flag set and zero fragment offset

So we see only first fragment

Need to verify if this is normal or malicious
These records appeared on the hourly wrap-up because ip.filter examines the IP header for any
datagram that has the more fragments flag set and a zero fragment offset. This will display only the
first fragment. The reason that this was done was to alert you of the fragmentation, yet not
overwhelm you with all the records.
As mentioned before, fragmentation can be a normal by-product of a datagram travelling from a
larger to a smaller network. We’ve also seen where fragmentation can be used for malicious
purposes such as denial of service. We need to examine this fragmentation to see if it appears to be
normal - no overlaps or gaps in fragments and a complete fragment train.
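The "normal" criteria described above can be checked mechanically. Here is a minimal sketch (Python; the fragment tuples are hypothetical, parsed by hand from tcpdump's `frag id:len@offset+` notation, not produced by any tool in this course) that tests a fragment train for gaps, overlaps, and a proper final fragment:

```python
def train_is_normal(frags):
    """frags: list of (offset, length, more_fragments) tuples for one IP ID,
    e.g. hand-parsed from tcpdump's 'frag id:len@offset+' notation."""
    frags = sorted(frags)
    expected = 0
    for i, (off, length, more) in enumerate(frags):
        if off != expected:          # gap (off > expected) or overlap (off < expected)
            return False
        last = (i == len(frags) - 1)
        if more == last:             # only the final fragment may clear 'more fragments'
            return False
        expected = off + length
    return True

# The train with fragment ID 28843 from this event of interest:
print(train_is_normal([(0, 1480, True), (1480, 1480, True), (2960, 898, False)]))  # True
print(train_is_normal([(0, 1480, True), (2960, 898, False)]))  # False -- missing middle fragment
```

A train that fails this test, because of overlapping or repeated offsets, is the kind of traffic seen in fragmentation-based denial-of-service attacks.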
Dump Fragmented Records

We attempt to see the other fragments using the
following command:
tcpdump -r tcpdumpfile 'host 1.2.3.4 and port 2444'
In an effort to dump the records associated with this fragmentation, we examine the hour in question
with a filter of the host IP and the destination port number. We use the port number to isolate the
traffic we saw in case there are more connections to other ports that we don’t care to see.
Results
00:36:00.510000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 21931:1480@0+)
00:36:00.620000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 28843:1480@0+)
00:36:00.710000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 33707:1480@0+)
00:36:00.830000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3329 (frag 40875:1480@0+)
00:36:00.910000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 46251:1480@0+)
00:36:01.020000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 52907:1480@0+)
What appears in the output is exactly what we saw on the hourly wrap-up. Although we asked to see all fragments, we still see only the first fragment. At this point, we have to wonder why we see the same records as before. Do we see these results because these records truly are all there is in the hourly files? Or have we, for some reason, failed to extract the others?
Re-examination

tcpdump output on the previous slide appears to
indicate only first fragment was sent

Is this some kind of malicious attack or denial of
service that sends only an initial fragment?

Examine filter used to select records, we used
‘host 1.2.3.4 and port 2444’

Remember subsequent fragments don’t carry
UDP header with port number

Because we see only the first fragment of the fragment train, the initial guess is that this may be some kind of malicious fragment attack, perhaps a denial of service. We should see all fragments in the fragment train, not just the first.
However, if we re-examine the filter we used, we qualified the traffic by port number. The port
number is located in the UDP header. Recall from the discussion on fragmentation that only the first
fragment in the fragment train inherits the protocol header, or UDP in this case. All subsequent
fragments only carry the remaining UDP data. Therefore, when we search by port number, only the
first fragment appears.
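The effect described above can be sketched with a toy model. Only the offset-0 fragment carries the UDP header, so a port test has nothing to match against in the later fragments. (This Python sketch is an illustration of the filtering logic, not tcpdump's internals; the packet dictionaries are hypothetical.)

```python
# Toy packets modeling the fragment train: only the offset-0 fragment
# carries the UDP header, so only it has a destination port to inspect.
packets = [
    {"src": "1.2.3.4", "frag_offset": 0,    "dport": 2444},  # first fragment
    {"src": "1.2.3.4", "frag_offset": 1480, "dport": None},  # no UDP header
    {"src": "1.2.3.4", "frag_offset": 2960, "dport": None},  # no UDP header
]

def match_host_and_port(p):
    # Mimics 'host 1.2.3.4 and port 2444': the port test fails on
    # non-first fragments because there is no UDP header to inspect.
    return p["src"] == "1.2.3.4" and p["dport"] == 2444

def match_host_only(p):
    # Mimics 'host 1.2.3.4': the IP header is present in every fragment.
    return p["src"] == "1.2.3.4"

print(sum(map(match_host_and_port, packets)))  # 1 -- first fragment only
print(sum(map(match_host_only, packets)))      # 3 -- all fragments
```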
Another Attempt

We make another attempt to dump the
fragmented records, but we search by IP number
only
tcpdump -r tcpdumpfile 'host 1.2.3.4'
Attempting to dump the traffic again, we filter by source IP only. The source IP is located in the IP
header and will be found in all the fragments.
Results
00:36:00.510000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 21931:1480@0+)
00:36:00.510000 1.2.3.4 > 192.168.30.77: (frag 21931:1480@1480+)
00:36:00.510000 1.2.3.4 > 192.168.30.77: (frag 21931:898@2960)
00:36:00.620000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 28843:1480@0+)
00:36:00.620000 1.2.3.4 > 192.168.30.77: (frag 28843:1480@1480+)
00:36:00.620000 1.2.3.4 > 192.168.30.77: (frag 28843:898@2960)
00:36:00.710000 1.2.3.4.2413 > 192.168.30.77.2444: udp 3850 (frag 33707:1480@0+)
00:36:00.710000 1.2.3.4 > 192.168.30.77: (frag 33707:1480@1480+)
00:36:00.710000 1.2.3.4 > 192.168.30.77: (frag 33707:898@2960)
The results yield all fragments. If you look at the individual fragments and examine the fragment lengths and offsets, this appears to be normal fragmentation: there are no overlaps, gaps, or repeated offsets.
Look at the second set of fragments with fragment ID 28843. The first fragment is 1480 bytes and
the offset is byte 0. The second fragment starts at byte 1480 and has a length of 1480; this is normal.
Finally, the third fragment begins at the expected offset of 2960 for a length of 898 bytes.
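The lengths also add up: the three fragments carry 1480 + 1480 + 898 = 3858 bytes of IP payload, which is the 3850-byte UDP payload that tcpdump reported as "udp 3850" plus the 8-byte UDP header carried only in the first fragment. A quick arithmetic check:

```python
UDP_HEADER_LEN = 8
fragment_lengths = [1480, 1480, 898]   # from frag 28843:1480@0+, 1480@1480+, 898@2960
udp_payload = 3850                     # the 'udp 3850' in the tcpdump output

# Every byte is accounted for: fragments = UDP header + UDP payload.
assert sum(fragment_lengths) == udp_payload + UDP_HEADER_LEN

# Each fragment also begins exactly where the previous one ended.
offsets = [0, 1480, 2960]
assert all(offsets[i] + fragment_lengths[i] == offsets[i + 1] for i in range(2))
print("fragment train accounts for every byte")
```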
Assessment

If you examine the fragmentation, the fragments
and offsets appear to be normal

This is a false positive

When examining fragmented packets, use a filter
that looks at the IP address and perhaps protocol
type

Using a filter that involves the protocol header
may yield incomplete or misleading results
So, the assessment of this traffic is that it is a false positive; the fragmentation does not appear to be corrupted in any way. The lesson to be learned is that when you are dealing with fragmentation, you should filter only on fields found in the IP header. If you try to examine protocol fields, in this case the UDP port number, you will receive incomplete data and may come to false conclusions.
Event of Interest 2

The following records were dumped when 1.2.3.4
appeared in the Shadow’s scan output
19:45:38.650000 1.2.3.4.113 > 192.168.173.120.39426: S 11235938:11235938(0) ack 614006785 win 32736 <mss 512>
19:47:21.490000 1.2.3.4.113 > 192.168.162.39.39426: S 6825290:6825290(0) ack 614006785 win 32736 <mss 512>
19:55:40.780000 1.2.3.4.113 > 192.168.213.61.39426: S 9268522:9268522(0) ack 614006785 win 32736 <mss 512>
19:57:17.210000 1.2.3.4.113 > 192.168.171.84.39426: S 4259938:4259938(0) ack 614006785 win 32736 <mss 512>
19:57:32.000000 1.2.3.4.113 > 192.168.21.26.39426: S 7249258:7249258(0) ack 614006785 win 32736 <mss 512>
19:57:49.800000 1.2.3.4.113 > 192.168.208.80.39426: S 4354234:4354234(0) ack 614006785 win 32736 <mss 512>
The hourly scan count from Shadow listed host 1.2.3.4 as exceeding 7 connections to internal hosts. The records were dumped for this hour to investigate what kind of activity host 1.2.3.4 was involved in. We see a scan from host 1.2.3.4 source port 113 directed at the internal 192.168 hosts, going to port 39426 (an unknown port) with the SYN and ACK flags set.
It should be noted that none of the destination IPs in the 192.168 subnet are assigned.
Initial Guess


What is your initial assessment of Event of
Interest 2?

Did you guess a SYN/ACK scan from hostile host
1.2.3.4 to destination port 39426?

An unsolicited SYN/ACK can trigger a RESET by
the receiving host

Possible way to scan for live hosts?
A good initial guess is that this is some kind of scan originating from hostile host 1.2.3.4. An
unsolicited SYN/ACK combination to a destination host regardless of whether the scanned port is
active or not may elicit a RESET. This would be a method of finding live hosts in the network and
possibly eluding any filtering router that has an access control statement which requires inbound
traffic to be established inside the network. In other words, only inbound traffic with the ACK flag
set will be allowed in.
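In BPF terms, this kind of unsolicited SYN/ACK traffic can be isolated by testing the TCP flags byte, which sits at offset 13 of the TCP header: a filter such as `tcp[13] & 0x12 == 0x12` matches packets with both SYN and ACK set. The same bit test, sketched in Python (the flag values come from the TCP specification; the sample inputs are illustrative):

```python
SYN, ACK, RST = 0x02, 0x10, 0x04   # bit values within the TCP flags byte (tcp[13])

def is_syn_ack(flags):
    # Equivalent to the BPF expression 'tcp[13] & 0x12 == 0x12'
    return flags & (SYN | ACK) == (SYN | ACK)

print(is_syn_ack(SYN | ACK))   # True  -- the packets in this event of interest
print(is_syn_ack(SYN))         # False -- a normal connection request
print(is_syn_ack(RST))         # False -- the RESET a scanned host might return
```

Note that a filter like this also matches the second packet of every legitimate three-way handshake, so it is only useful in combination with the analysis of who initiated the connection.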
Another Guess
19:47:21.490000 1.2.3.4.113 > 192.168.162.39.39426: S 6825290:6825290(0) ack 614006785 win 32736 <mss 512>
19:57:17.210000 1.2.3.4.113 > 192.168.171.84.39426: S 4259938:4259938(0) ack 614006785 win 32736 <mss 512>

SYN/ACK is a response to a SYN request to a
listening port

Is it possible someone is spoofing our 192.168
IP’s and sending them to port 113 of 1.2.3.4?


Host 1.2.3.4 responds with a SYN/ACK directed
back to actual 192.168 network
Now, if you take a slightly different approach to viewing this traffic, it may appear that 1.2.3.4 was simply an intermediate or victim host. What may be happening is that some malicious party, for whatever reason, is spoofing the internal 192.168 IP addresses and sending packets with those source addresses to host 1.2.3.4 port 113. If port 113 is listening, it will reply with a SYN/ACK to the alleged source IP.
Port 113 is known as auth or ident. It is used to authenticate a user requesting a service. For
instance, when connections to some servers are made, for example sendmail, the server may try to
connect to the client host on port 113. This is a rather primitive way of trying to authenticate the
user. Not all servers query for port 113 and not all clients will offer port 113.
We see no RESETs from the 192.168 hosts because they are not live hosts.
More Clues

In order to get a SYN/ACK response back from
1.2.3.4 on port 113, port 113 must be active
telnet 1.2.3.4 113
Trying 1.2.3.4...
Connected to 1.2.3.4.
Escape character is '^]'

This confirms that the host 1.2.3.4 offers the auth
service (port 113)
To see if the spoofing theory is a viable one, host 1.2.3.4 must be listening on port 113. We try a telnet to host 1.2.3.4 port 113 and see that it responds. This isn't positive confirmation, but it does lend more credence to the spoofing theory.
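The telnet test above can also be scripted. Here is a minimal sketch using Python's standard socket module (the function name is ours, and 1.2.3.4 is the sanitized address from the slides; only probe hosts you are authorized to test):

```python
import socket

def port_is_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port completes,
    i.e. the remote host answers our SYN with a SYN/ACK."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # refused, timed out, unreachable, etc.
        return False

# As in the slides: confirm the remote host offers auth/ident on port 113.
# print(port_is_open("1.2.3.4", 113))
```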
Assessment


This may be spoofing or network scanning

Try to communicate with the contact of 1.2.3.4
network

Ask if 192.168 hosts have been scanning their
network

Otherwise ask why host 1.2.3.4 is scanning the
192.168 network

There will be many events of interest that have no
conclusive identification
We have not positively identified the nature of the scan. You will often find that there is no definitive answer. This was an example to help you view what may appear to be an obvious scan from 1.2.3.4 in another way. If you can identify the network contact for 1.2.3.4, you can try to see if
they have been seeing traffic from the 192.168 network directed at them. If they can confirm this, it
means that someone is spoofing the internal IP’s. It is also possible someone is spoofing 1.2.3.4 as a
source IP so they may not even be able to confirm outbound traffic to the 192.168 network.
Event of Interest 3

The following record appears in Shadow’s hourly
wrap-up
03:44:34.950000 1.2.3.4.3953 > myhost.com.135: S 409085715:409085715(0) win 8192 <mss 1460> (DF)

Dump the records for the foreign IP

tcpdump -r tcpdumpfile 'host 1.2.3.4'
We see an initial SYN from a foreign host and would like to examine all the traffic from the
unknown source host. So, we dump the hourly records for the foreign IP 1.2.3.4.
Partial Results

Here is one TCP session extracted by looking at
the data exchanged between the hosts:
03:44:34.950000 1.2.3.4.3953 > myhost.com.135: S 409085715:409085715(0) win 8192 <mss 1460> (DF)
03:44:35.300000 1.2.3.4.3953 > myhost.com.135: P 409085716:409085788(72) ack 174349 win 8760 (DF)
03:44:35.360000 1.2.3.4.3953 > myhost.com.135: P 72:228(156) ack 61 win 8700 (DF)
03:44:35.430000 1.2.3.4.3953 > myhost.com.135: F 228:228(0) ack 213 win 8548 (DF)
In the TCP exchange above, we do not see the entire session. It appears we are only seeing traffic
from 1.2.3.4 directed to myhost.com, but none from myhost.com to 1.2.3.4. We know this because
we see a SYN from 1.2.3.4 to myhost.com, but no SYN/ACK in return. We see data pushed from
1.2.3.4 to myhost.com, but no ostensible acknowledgement. Finally, we see 1.2.3.4 end the session,
but no acknowledgement or session termination from myhost.com.
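The one-sidedness of this session can be made concrete by counting packets per direction in the dumped records. A sketch that parses the `src > dst:` portion of tcpdump output lines (the sample lines are abbreviated from this slide; the parsing is a rough heuristic, not a full tcpdump parser):

```python
from collections import Counter

lines = [
    "03:44:34.950000 1.2.3.4.3953 > myhost.com.135: S 409085715:409085715(0) win 8192",
    "03:44:35.300000 1.2.3.4.3953 > myhost.com.135: P 409085716:409085788(72) ack 174349",
    "03:44:35.360000 1.2.3.4.3953 > myhost.com.135: P 72:228(156) ack 61",
    "03:44:35.430000 1.2.3.4.3953 > myhost.com.135: F 228:228(0) ack 213",
]

def direction(line):
    # tcpdump prints 'timestamp src > dst: ...'; strip the ports for a host-level view.
    fields = line.split()
    src, dst = fields[1], fields[3].rstrip(":")
    host = lambda addr: addr.rsplit(".", 1)[0]   # drop the trailing port component
    return f"{host(src)} -> {host(dst)}"

print(Counter(direction(l) for l in lines))
# Every packet here flows 1.2.3.4 -> myhost.com; a healthy TCP session
# would show packets in both directions.
```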
Initial Assessment

Is it possible for this to be a crafted session from
1.2.3.4?

If this were so, you should see RESET’s from

myhost.com

Is it possible that the sensor is seeing only
inbound traffic and it takes a different route
outbound?
A first assessment might be that we are seeing crafted packets from hostile host 1.2.3.4. In other words, the hostile host is sending what appears to be a partial session to myhost.com. If this were so, myhost.com should reply with some kind of RESET since it is not expecting this activity. We see no RESETs, so that rules out this guess.
For this particular session, we are seeing only inbound traffic. Is it possible that the sensor is
capturing this traffic inbound and fails to capture outbound traffic because it may have a different
egress point? This is a better possibility than the first.
More Partial Results
03:45:15.120000 myhost.com.1710 > 1.2.3.4.135: S 174378:174378(0) win 8192 <mss 1460> (DF) [tos 0x10]
03:45:15.180000 1.2.3.4.135 > myhost.com.1710: S 411784890:411784890(0) ack 174379 win 8760 <mss 1460> (DF)
03:45:15.180000 myhost.com.1710 > 1.2.3.4.135: P 1:73(72) ack 1 win 8760 (DF) [tos 0x10]
03:45:15.240000 1.2.3.4.135 > myhost.com.1710: P 1:61(60) ack 73 win 8688 (DF)
03:45:15.250000 myhost.com.1710 > 1.2.3.4.135: P 73:229(156) ack 61 win 8700 (DF) [tos 0x10]
03:45:15.310000 1.2.3.4.135 > myhost.com.1710: P 61:213(152) ack 229 win 8532 (DF)
03:45:15.310000 myhost.com.1710 > 1.2.3.4.135: F 229:229(0) ack 213 win 8548 (DF) [tos 0x10]
In a second exchange between the two hosts, we see traffic flowing both ways. This pretty much refutes the theory that we were capturing inbound traffic only. In fact, this appears to be a fairly complete session except for the absence of the final ACK in the three-way handshake and some of the expected termination protocol.
Another Assessment

See evidence in previous slide that we are
capturing some traffic outbound from
myhost.com

Appears we have some kind of intermittent
sensor capture

Dropping packets on the sensor?

Multiple ingress/egress points and we see only
traffic that passes by the sensor on a given
transfer
Since we've seen traffic going both ways, the sensor is obviously situated correctly to see inbound and outbound traffic. But it appears we are not capturing all of the traffic. Is it possible that the sensor is overloaded and dropping packets? That could be happening, and there is no easy way to tell whether packets are being dropped.
The other possibility is that there are multiple entry and exit points into the network and the route
taken each time doesn’t necessarily take the traffic by the sensor. The site configuration and sensor
placement should be re-examined.
Event of Interest 4


The following output appears in the hourly wrap-up:
05:52:23.040000 wiley.ns.demon.net.15770 > mydns.com.80: udp 4294967289

What kind of mutant behavior is this record exhibiting?

What tools/scripts can we use to discover what is
going on?
We examine a record that appears in the hourly wrap-up because of some filter that we have refined
to look at UDP traffic. Look at the record closely and try to determine what the anomalous behavior
may be that caused it to become an event of interest.
Begin to think about what tools can be used to investigate this.
Mutant Behavior
wiley.ns.demon.net.15770 > mydns.com.80: udp 4294967289

UDP datagram length reported as 4294967289

What is wrong with this length?

Maximum length value for UDP is a 16-bit field

2^16 - 1 = 65,535

How can a datagram of this alleged length be
sent without fragmentation?

What can we do to investigate this?

We detect some output from Shadow that appears to be abnormal. There is a value of 4294967289
in the UDP length field. There are a couple of problems with this. The first is that the UDP length field is a 16-bit value in the UDP header, so the maximum possible length is 65,535. How could tcpdump possibly report such a large value?
Also, how is it possible that fragmentation did not occur with a UDP datagram length so large?
What tools have we covered in this course to help us assess what is happening?
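One plausible explanation for the impossible length, offered here as our own assumption rather than anything stated in the slides, is unsigned underflow: 4294967289 is exactly 2^32 - 7, the value you get when a length of 1 has the 8-byte UDP header subtracted from it in 32-bit unsigned arithmetic. A crafted UDP length field smaller than 8 could therefore produce such a report:

```python
# 4294967289 == 0xFFFFFFF9, i.e. -7 interpreted as a 32-bit unsigned integer.
MOD = 2 ** 32
UDP_HEADER_LEN = 8
crafted_udp_length = 1                 # a crafted length field, smaller than the header

# Header-minus-payload arithmetic done modulo 2^32 underflows to a huge value.
payload = (crafted_udp_length - UDP_HEADER_LEN) % MOD
print(payload)                         # 4294967289 -- the value in the wrap-up
assert payload == 2 ** 32 - 7
assert payload > 2 ** 16 - 1           # far beyond UDP's 16-bit limit of 65,535
```

This would also explain why no fragmentation was seen: the datagram on the wire was tiny; only the reported length was enormous.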
Investigation

First, check traffic from the hostile source IP for
the hour in question

Could there be some kind of catalyst for receiving
this datagram?
tcpdump -r tcpdumpfile 'host hostile IP'
The first thing we do is examine the traffic for the hour in question. This will just help us put the
traffic in some kind of context. It may not explain the mutant UDP datagram length, but we might
be able to see why this record appeared.
Note that we search using the hostile IP number for wiley.ns.demon.net. Searching by IP is more efficient and perhaps more accurate, since no name resolution has to be done. If resolution cannot be done for wiley.ns.demon.net, no records will appear even though there are records in the tcpdump files for the IP number associated with wiley.ns.demon.net.
Results From tcpdump
05:52:21.560000 mydns.com.domain > wiley.ns.demon.net.domain: 25303 (38) (DF)
05:52:21.650000 wiley.ns.demon.net.domain > mydns.com.domain: 25303*- 1/3/3 (176)
05:52:22.530000 mydns.com.domain > wiley.ns.demon.net.domain: 25329 (38) (DF)
05:52:22.630000 wiley.ns.demon.net.domain > mydns.com.domain: 25329*- 1/3/3 (176)
05:52:22.940000 mydns.com.domain > wiley.ns.demon.net.domain: 25333 (34) (DF)
05:52:23.040000 wiley.ns.demon.net.15770 > mydns.com.80: udp 4294967289
What we see is that the mutant record appears amid a stream of seemingly normal DNS traffic between mydns.com and wiley.ns.demon.net. The pattern appears to be that mydns.com queries wiley.ns.demon.net for some kind of DNS resolution. To the first two queries, wiley.ns.demon.net responds appropriately.
On the fifth line of the output, mydns.com queries again with DNS message number 25333. Soon thereafter, wiley.ns.demon.net responds with an inappropriate packet: there is no response to query 25333, but there is an apparent probe of mydns.com on port 80 with a very large length.