
CompTIA Network+ Certification Study Guide


massive changes in the latter part of the twentieth century. By looking at
these changes, you will see the development of OSes, hardware, and innova-
tions that are still used today.
Early Telecommunications and Computers
Telecommunications got its start in the 1870s in Brantford, Ontario, when
Alexander Graham Bell developed the idea of a telephone. After the first
successful words were sent over the device on March 10, 1876, a revolu-
tion of communication began. Within decades of its conception, millions of
telephones were sold, with operators connecting people using manual circuit
switching. This method of calling the operator to have them connect you to
another party was routine until the mid-twentieth century, when mechanical
and electronic circuit switching became commonplace. These events would
have a massive impact on the innovation of computers, even though they
wouldn’t be invented until 60 years after Bell’s first successful phone call.
Although arguments could be made as to whether ancient devices (such
as the abacus) could be considered a type of computer, the first computer that
could be programmed was developed by a German engineer named Konrad
Zuse. In 1936, Zuse created the Z1, a mechanical calculator that was the
NOTES FROM THE FIELD…
Knowledgeable Network Users
The Internet is a vast network of interconnected com-
puters that your computer becomes a part of when-
ever it goes online. Because more people than ever
before use the Internet, this means that many people
are familiar with the basic concepts and features of
networking without even being aware of it. This is a
particular benefit in training the users of a network,
as many will be familiar with using e-mail, having user
accounts and passwords, and other technologies or
procedures. Unfortunately, a little knowledge can also
be a dangerous thing.
When dealing with knowledgeable users, it is
important to realize that they may have developed bad
habits. After all, a user with years of experience will
have found a few shortcuts and may have gotten lazy in
terms of security. For example, the user may use passwords that are easy to
remember but also easy to guess. In such
a case, the solution would be to implement policies on
using strong passwords (passwords with at least eight
characters consisting of numbers, upper and lowercase
letters, and non-alphanumeric characters), changing
passwords regularly, and not sharing passwords with
others. If your company has an intranet, you can pro-
vide information on such policies to employees.
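
As a rough illustration of what such a password policy looks like when enforced in software, a check might be written as follows. This is a minimal sketch only; the function name and the exact character classes are our own choices, not part of any particular product.

import re

def meets_password_policy(password):
    # Policy described above: at least eight characters, with numbers,
    # uppercase and lowercase letters, and non-alphanumeric characters.
    if len(password) < 8:
        return False
    has_digit = re.search(r"\d", password) is not None
    has_upper = re.search(r"[A-Z]", password) is not None
    has_lower = re.search(r"[a-z]", password) is not None
    has_symbol = re.search(r"[^A-Za-z0-9]", password) is not None
    return has_digit and has_upper and has_lower and has_symbol

print(meets_password_policy("Pa$sw0rd1"))   # True: meets all four rules
print(meets_password_policy("password"))    # False: too simple
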
Another problem is that users may attempt to per-
form actions that aren’t permitted in an organization,
such as installing unapproved software or accessing
restricted data. It is also important to set up security
on a network so users can only access what they need
to perform their job. This minimizes the chance that
someone might modify or delete a file, view sensitive
materials that are meant to be confidential, or install
software that contains malicious programming or isn’t
work related (such as games). Even in an environment
with a trusting atmosphere, accidents happen and
problems arise. Setting up proper security can prevent
avoidable incidents from occurring.
first binary computer. Zuse continued making innovations to his design,
and five years later had reached the point where the Z3 was able to accept
programming. Although the next version of his computer would use punch
cards to store programs, Zuse used movie film to store programming and
data on the Z3 due to a supply shortage of paper during World War II. Just as
his computers evolved, so did his programming skills. Zuse’s achievements
also extended to creating the first algorithmic programming language called
Plankalkül, which later was used to create the first computer chess game.
During this same time, John Atanasoff and Clifford Berry developed
what is acknowledged to be the first electronic binary computer. Created at
Iowa State College, the initial prototype earned the team a grant that
allowed them to build their 700-pound final product, containing more than
300 vacuum tubes and approximately one mile of wire. Because the war
prevented them from completing a patent on their computer, the computer
was dismantled when the physics department needed storage space that was
being used by the machine. The distinction of being first initially went to
John Mauchly and J. Presper Eckert for their Electronic Numerical Integrator
And Computer (ENIAC I), until a 1973 patent infringement case
determined Atanasoff and Berry were the first.
The ENIAC I was developed with funding from the U.S. government and
based on the work of John Atanasoff. Starting work in 1943, the project took
two and a half years to design and build ENIAC I, at a cost of half a million
dollars. The ENIAC I was faster than previous computers, and was used to
perform calculations for designing a hydrogen bomb, wind-tunnel designs, and
a variety of scientific studies. It was used until 1955 when the 30-ton, 1,800
square foot computer was ultimately retired.
Another computer that was developed during this time was the MARK I
computer, developed by Howard Aiken and Grace Murray Hopper in 1944
in a project cosponsored by Harvard University and International Business
Machines (IBM). Dwarfing the ENIAC at a length of 55 feet and five tons
in weight, the MARK I was the first computer to perform long calculations.
Although it was retired in 1959, it made a lasting mark on the English
language. When the MARK I experienced a computer failure, Grace Murray
Hopper checked inside the machine and found a moth. She taped it to her
log book and wrote “first actual bug found”, giving us the terms “bug” for a
computer problem, and “debug” for fixing it.
In 1949, Hopper went on from the MARK I project to join a company
created by John Mauchly and J. Presper Eckert, which was developing a
1,500 square foot, 40-ton computer named UNIVersal Automatic Computer
(UNIVAC). UNIVAC was the first computer to use magnetic tape instead
of punch cards to store programming code and data, and was much faster than
the previous computers we’ve discussed. Although the MARK I took a few
seconds to complete a multiplication operation and ENIAC I could perform
hundreds of operations per second, UNIVAC could perform multiplication
in microseconds. What made UNIVAC popular in the public eye, how-
ever, was a 1952 publicity stunt where the computer accurately predicted
the outcome of the presidential election. Although Dwight Eisenhower and
Adlai Stevenson were believed evenly matched going into the November 4
election night, UNIVAC predicted that Eisenhower would get 438 electoral
votes, while Stevenson would only get 93. In actuality, Eisenhower got 442
electoral votes, while Stevenson got 89. Although political analysts had been
unable to predict the outcome, UNIVAC did so with a one percent margin
of error. Having earned its place in history, the original UNIVAC currently
resides in the Smithsonian Institution.
Although UNIVAC was the more successful computer of its day, 1953
saw IBM release the EDPM 701. Using punch cards for programs, 19 of
these were sold (as opposed to 46 UNIVACs sold to business and govern-
ment agencies). However, development of this computer led to the IBM
704, considered to be the first supercomputer. Because it used magnetic
core memory, it was faster than its predecessor. This series of computers
further evolved into the 7090 computer in 1960, which
was the first commercially available computer to use transistors, and the
fastest computer of its time. Such innovations firmly placed IBM as a leader
in computer technology.
The Space Age to the Information Age
Although IBM and the owners of UNIVAC were contending for clients to buy
their computers, or at least rent computer time, the former USSR launched
Sputnik in 1957. Sputnik was the first man-made satellite to be put into
orbit, which started a competition in space between the USSR and the United
States and set off a series of events that advanced computer technology.
Although author Arthur C. Clarke had published an article in 1945 describing
man-made satellites in geosynchronous orbit being used to relay transmis-
sions, communication satellites didn’t appear until after Sputnik’s historic
orbit. In 1960, Bell Telephone Laboratories (AT&T) filed with the FCC to
obtain permission to launch a communications satellite, and over the next
five years, several communication satellites were orbiting overhead.
Obviously, the most notable result of Sputnik was the space race between
the U.S. and USSR, with the ultimate goal of reaching the moon. The U.S.
started the National Aeronautics and Space Administration (NASA), began
launching space missions, and achieved the first manned landing on the
moon in 1969. Using computers that employed only a few thousand lines of
code (as opposed to the 45 million lines of code used in Windows XP), the
onboard computer systems provided necessary functions and communicated
with other computers on Earth. Communications between astronauts and
mission control also marked the greatest distance over which people had
communicated to date.
The Cold War and space race also resulted in another important milestone
in computer and communication systems. As we discussed earlier,
the U.S. government started the Advanced Research Projects Agency (ARPA),
which developed important technologies such as the following:

■ ARPANet, the predecessor of the modern Internet, which connected
multiple institutions and areas of government together for research
purposes.
■ Packet-switched networks, where messages sent over a network are
broken into packets. The packets are then sent over the network and
reassembled after reaching the destination computer (a rough sketch of
this idea follows the list).
■ TCP/IP, which specifies rules on how data is sent and received over
the network, and provides utilities for working over a network.
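
To make the packet-switching idea concrete, here is a minimal sketch in Python. It is an illustration only; ARPANet's actual packet formats and routing were far more involved, and the packet size and function names below are invented for the example. The sketch breaks a message into numbered pieces, delivers them out of order, and reassembles them by sequence number.

import random

def to_packets(message, size=8):
    # Split a message into (sequence_number, chunk) packets.
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    # Sort packets by sequence number and rejoin the chunks.
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("Messages are broken into packets and reassembled.")
random.shuffle(packets)        # packets may arrive in any order
print(reassemble(packets))     # the original message is reconstructed
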
Although only a few educational institutions and the government were
networked together through ARPANet, this led to the first e-mail program
being developed in 1972, and the first news server being developed in 1979.
The Internet was years away, but its foundation was set here.
Hardware and Operating Systems
Major advances pushed the development of computers and networking in
the 1970s. In 1971, Intel produced the first microprocessor, which had its
own arithmetic logic unit and provided a way of creating smaller, faster com-
puters. It was the first of many Intel processors produced over the years,
including the following:
■ 8008 processor, produced in 1972.
■ 8080 processor (an 8-bit processor), produced in 1974.
■ 8086 processor (a 16-bit processor), produced in 1978. Because other
technology needed to catch up to the speed of the processor, the 8088
processor (an 8-/16-bit processor) was released in 1979. It wasn’t until
1983 that IBM released the XT with the 8086 processor (and the option
to add an 8087 math co-processor).
■ 80286 (a 16-bit processor), produced in 1982.
■ 80386 (a 32-bit processor), produced in 1985.
■ 80486 (a 32-bit processor), produced in 1989.
■ Pentium (a 32-bit processor), produced in 1993, which ended
the x86 naming scheme for their processors. After this, Intel chips
bore the Pentium name, including the Pentium 75 (in 1994), the
Pentium 120, 133, and Pentium Pro 200 (in 1995), the Pentium MMX
and Pentium II (in 1997), and the Pentium III (in 1999). As you would
expect, each generation was faster than the last.
Just as processing changed in the 1970s, so did storage. In 1973, IBM
developed the first hard disk, and an 8” floppy drive, replacing the need to
store data and programs solely on magnetic tapes. This massive floppy was
quickly replaced by the 5.25” floppy in 1976, which was later succeeded by
the 3.5” floppy disk that was developed by Sony in 1980. These methods of
storing data remained commonplace until 1989, when the first CD-ROM was
developed, and changed again in 1997 with the introduction of DVDs.
With the advances in technology, it was only a matter of time before
someone developed a home computer. Prior to the mid-1970s, computers
were still too large and expensive to be used by anyone but large corporations
and governments. With the invention of the microprocessor, a company
called Micro Instrumentation and Telemetry Systems (MITS) developed the
Altair 8800 using the Intel 8080 processor. Although it included an 8” floppy
drive, it didn’t have a keyboard, monitor, or other peripherals that we’re
accustomed to today. Programs and data were entered using toggle
switches at the front of the machine. Although it couldn’t be compared to
personal computers of today, it did attain the distinction of being the first.
The Altair also provides a point of origin for Microsoft, as Bill Gates
and Paul Allen developed a version of the BASIC programming language for
the Altair that was based on a public-domain version created in 1964. Microsoft
went on to create PC-DOS, an OS that required users to type in commands,
for the first IBM PC (code-named the Acorn) in 1981, but maintained
ownership of the software. This allowed them to market their OS to
other computer manufacturers and build their software empire. Microsoft
went on to develop OSes such as MS-DOS, Windows 1.0–3.0, Windows
for Workgroups, Windows NT, Windows 95, Windows 98, Windows ME,
Windows 2000, Windows Server 2003, Windows XP, Windows Server
2008, and Windows Vista. Newly released as of this writing is Windows 7,
Microsoft’s newest desktop OS.
UNIX was another OS that originated in the 1970s, and it too led to
the creation of another OS. Ken Thompson and Dennis Ritchie of Bell
Labs developed UNIX in 1970, which came to be used on high-end servers
and (years later) Web servers for the Internet. Linus Torvalds used UNIX as
the basis for developing Linux 20 years later. Linux is open source, meaning
that its source code is freely available to use and modify, and it is the only
OS that acts as competition for Microsoft and Apple today.
Apple Computer also got its start in the 1970s, when Steve Jobs
and Steve Wozniak founded their company on April Fools’ Day of 1976
and introduced the Apple II the next year. It wasn’t until the 1980s,
however, that Apple really made its mark. In 1981, Xerox developed a
graphical user interface (GUI) that used the windows, icons, menus, and
mouse support that we’re familiar with in OSes today. However, Xerox never
released it to the public. Building its Apple Lisa and Macintosh on the work
done by Xerox, Apple introduced the Lisa as the first GUI personal computer
in 1983. The public found the Macintosh easy to use, and after its release it
made Apple the major competitor of IBM.
The Windows, Icons, Menus, Pointer (WIMP) interface that gave Apple
and Windows their looks wasn’t the only major contribution Xerox made
to computers and networking. While working at Xerox’s Palo Alto Research
Center, Bob Metcalfe was asked to develop a method of networking their
computers. What he created was called Ethernet. Ethernet was different from
other networks like the Internet, which connected remote computers together
using modems that dialed into one another, or dumb terminals that had no
processing power and were only used to access mainframe computers. It
connected computers together using cabling and network adapters, allowing
them to communicate with one another over these physical connections. If
Ethernet sounds like many networks in use today, you’d be correct; Ethernet
is an industry standard.
After Ethernet was developed, OSes that were specifically designed for
networking weren’t far behind. In 1979, Novell Data Systems was founded
with a focus on developing computer hardware and OSes. In 1983, however,
Novell changed focus and developed NetWare, becoming an industry leader
in network OSes. Unlike other OSes that resided on a computer and could
be used as either a standalone machine or a network workstation, NetWare
has two components. The NetWare OS is a full OS and resides on a server,
which processes requests from a network user’s client machine. The com-
puter that the network user is working on can run any number of different
OSes (such as Windows 9x, NT, etc), but has client software installed on it
that connects to the NetWare server. When a request is made to access a
file or print to a NetWare server, the client software redirects the request to
the server. NetWare became one of the most popular network OSes and was
widely used on corporate and government networks.
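
The general client/server request-and-response pattern described above can be sketched in a few lines of Python. This is a generic TCP example, not NetWare's actual protocol or client software; the request text, the use of a background thread, and the reply format are purely illustrative.

import socket
import threading

# A toy "server" socket that answers a single client request.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))       # port 0 lets the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def handle_one_request():
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(("server handled: " + request).encode())

threading.Thread(target=handle_one_request, daemon=True).start()

# The "client" side: send a request and print the server's reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", port))
    client.sendall(b"open file REPORT.TXT")
    print(client.recv(1024).decode())

srv.close()
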
The Information Age Appears
In the 1980s, computers became more commonplace in homes and
businesses. Prices had dropped to the point that it was affordable to have
a computer in the home, and machines like the 286 or 386 were powerful
enough to be worth owning. Although many people found computers useful, they
quickly outgrew the desire to have a standalone machine and wanted to be
networked to others.
The 1980s and 1990s saw growing popularity in Bulletin Board Sys-
tems (BBSs), where one computer could use a modem and telephone line to
directly dial another computer. Computers with BBS software provided the
ability for users to enjoy many of the features associated with the Internet,
including message boards, e-mail, chat programs (to send messages instantly
to other users), and the ability to download programs and other files or play
online games. Because connections were made over a modem, a BBS’s users
largely came from its local community, although message networks were used
to have discussions with people in other cities or countries.
Although BBSs were eventually replaced by the Internet, the 1980s
also saw changes that would affect the future of cyberspace. In 1983, the
University of Wisconsin developed the Domain Name System (DNS). DNS
provided a way of translating friendly domain names into the IP addresses
used to uniquely identify computers on TCP/IP networks. Using DNS, a name
like microsoft.com can be resolved to a number like 207.46.250.222. In 1984, DNS
became part of ARPANet, and would eventually play a major part resolving
domain names on the Internet.
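
To see the same kind of name resolution at work today, here is a brief sketch using Python's standard library (it requires network access). The hostname is only an example, and the addresses returned will vary as DNS records change; reverse lookups may also fail where no reverse record exists.

import socket

# Forward lookup: translate a friendly name into an IP address.
address = socket.gethostbyname("microsoft.com")
print("microsoft.com resolves to " + address)

# Reverse lookup: ask DNS for the name behind an address.
# (Not every address has a reverse record, so this may fail.)
try:
    name, _, _ = socket.gethostbyaddr(address)
    print(address + " maps back to " + name)
except socket.herror:
    print("no reverse DNS entry for " + address)
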
Also during the mid-1980s, the backbone of the Internet was developed.
The backbone was a central network that connected other networks on the
Internet. Between 1985 and 1988, T1 lines were deployed to accommodate
the growing volume of traffic on the Internet. The speed at which data could
be sent over the backbone increased to 1.544 Mbps.
Perhaps the greatest defining moment in the shape of the Internet
was Tim Berners-Lee’s creation of HyperText Markup Language (HTML).
In 1989, a textual version was created that supported hyperlinks, but this
evolved into the language that was released by CERN in 1991, and used to
create documents that can be viewed online (i.e., Web pages). In 1993, Mosaic
became the first widely used Web browser, allowing users to view such Web
pages, but others such as Netscape and Internet Explorer soon appeared.
When ARPANet was retired in 1990, the first company to provide dial-up
access to the Internet was formed. The World (www.theworld.com) provided
the ability (for a fee) to dial into its system using a modem and connect to
the Internet. In 1993, other Internet service providers (ISPs) appeared that
provided this service, steadily increasing the number of people using the
Internet. Although initially used as a repository of programs, research, and
other resources, this opened the Internet up to commercial use, which helped
it evolve into the entity we know today.
Modern Networking Technologies
Since the year 2000, when Y2K was the biggest rage and the e-mail virus
Melissa was first introduced, there have been countless developments in net-
working. Although there have been many new technologies developed, one
thing remains the same – the fundamentals of networking have not changed.
We still use models to explain, design, administer, and troubleshoot networks
today, and because of those standards, more and more proprietary (closed
source) software, systems, and services are starting to disappear.
Open source technologies are starting to take hold in the market as
support for them grows within the communities that develop them. Some
newer technologies that we will cover in this book are multiprotocol label
HEAD OF THE CLASS…
Virtualization
The term virtualization is very broad. In its simplest definition, it
describes how hardware resources and OSes are managed using virtualization
software such as VMware ESX Server, Microsoft Hyper-V, and Citrix Xen, to
name a few of the most commonly used systems.
There are also many types of virtualization – desktop virtualization,
storage virtualization, application virtualization, and server
virtualization … the list goes on. The underlying theory for all of them
is that the OS is abstracted from the hardware portion of the computer.
With a platform sitting between the hardware resources and the OSes that
use them (VMware, Microsoft Hyper-V, Citrix Xen, etc.), resources can be
allocated to each installed OS, and if configured correctly, this gives you
a reliable plan for getting systems back up and running quickly.
Likely, your virtualized environment will be configured to run on a
high-speed storage area network (SAN) that can be configured as a
high-speed backbone between your systems, disks, and data. With a RAID
array, disk failure is inevitable but solvable. As almost all disks come
with a Mean Time Between Failures (MTBF) rating that makes the disk likely
to fail after a certain amount of time, usage, or abuse, it is important
to have a solution in place to fix problems as they occur. When this
solution is combined with backed-up data, you have a robust solution that
works at very high speed and is completely redundant. You will, however,
pay a price in hardware and software costs as well as deployment time,
migration time, and so on.
With your OSes configured as simple images (or ISO files), you can also
recover from OS malfunctions quickly. Because your data is backed up, you
can quickly recreate your systems from “snapshots” taken by virtualization
software such as VMware. Restoring a system and its data, and adding more
storage, hardware resources, and services, becomes very easy to do. As you
can see, there are many benefits to learning about and working within
virtualized environments.

switching (MPLS), virtualization, and cloud computing, among many others.
They are paving the way for even newer breakthroughs to take place over the
next year. Some jokingly say that the LAN is dead – could it be true?
Mobile networking is also becoming increasingly important because of
this. As more and more games, videos, and songs are distributed over mobile
devices, the demand for faster, more robust ways to support them over
the network will only grow. As we approach 2010, it’s interesting to look
back on the last decade of networking … how it has evolved and where we
will wind up next.
LOGICAL NETWORKING TOPOLOGIES
Because networks vary from one another depending upon a range of factors,
it should come as no surprise that there are different network models that
can be chosen. The network model you choose will affect a network infra-
structure’s design, and how it is administered. Depending on the model or
models used, it can have an impact on the location of computers, how users
access resources, and the number of computers and types of OSes required.
Some of the models and topologies available to choose from are as follows:
■ Centralized
■ Decentralized (Distributed)
■ Peer-to-Peer
■ Client/Server
■ Virtual Private Network (VPN)
■ Virtual Local Area Network (VLAN)
Because it’s arguable that there is little need for a network without the
use or sharing of resources on it, all resources would either have to be
centralized or decentralized, or multiple networks would have to be configured
and accessible to facilitate both models simultaneously.
Note
We cover new technologies for completeness, although not every new (or bleeding-edge)
technology is covered – for example, we cover virtualization topics although you will not
be tested on any vendor’s offerings, or how they are configured, deployed, or utilized.
As a working Network+ technician, you are sure to see these technologies deployed on the
job, so you should be aware of them.
Network modeling is important for getting the first leg of network
design complete. Will you be centralizing your resources, or will they be
decentralized?
Centralized
When a centralized network model is used, a network’s resources are centrally
located and administered. This approach allows network administrators to
have better access to equipment and can provide better control over security
issues. However, because responsibility for managing these resources now
rests with the network administrator or Information Technology (IT) staff,
the administrative workload increases.
A centralized model will affect the physical location of servers and certain
other resources on your network by situating them within a specific area.
You’ll remember that servers are computers that accept requests from client
computers (which users of the network work on), and provide services and
resources that the client is authorized to use. As we’ll discuss later
in this chapter, dedicated servers generally have larger hard disks, more
memory, and faster processors than the workstations accessing them. When
a centralized model is used, these servers are generally located in a secure,
central location, such as a dedicated server room. This secured room can
also be used to house other resources, such as routers, switches, firewalls,
Web servers, plotters, and other devices.
Because they are stored in a central location, additional work may be
required to manage them. For example, let’s say you had a plotter that was
kept in a server room. Anytime anyone needed the plotter installed as a printer
on their computer, you would need to set up permissions for them to use it.
If the user sent a print job to this plotter, someone from the IT staff would
need to enter the secure room to get their printout. In addition, there would
also be the need to replace paper and toners used in the device. In a central-
ized model, administration of the resources is also centralized.
Despite the previous scenario, in some ways managing resources can be
easier with this model. By keeping these resources in one area, a network
administrator can easily change backup tapes, replace hard disks, or fix other
issues as required. Imagine the issues of having servers in offices throughout
a city or region, and having to visit each of them whenever a tape needed to
be replaced after running a tape backup. By keeping resources centralized,
less work is needed to administer them.
Depending on the requirements of an organization, the centralized net-
work model can also mean that fewer servers or other devices are needed.
Rather than each building having its own server on the premises, users
