Information Security Management Handbook (Sixth Edition, Volume 6): Part 2

Chapter 17

Building Application Security
Testing into the Software
Development Life Cycle
Sandy Bacik
Every enterprise should use an application development life cycle, and within that life cycle
there should be an application security architecture. An application security architecture provides
a strong foundation for the application, with controls that protect the confidentiality of information, the integrity of data, and the availability of data to authorized entities when it is required. It
also carefully considers feature sets, controls, and safe, reliable processes in light of the enterprise's
security posture. As security controls are developed for an application, they must be tested during
the user test and quality assurance testing processes. At a very high level, application security
testing should consider answering the following questions:
◾◾ Is the process surrounding this function, service, or feature as safe and strong as possible
without impacting operational requirements? In other words, is this a flawed process?
◾◾ If I were a bad entity, how could/would I abuse this function, service, or feature?
◾◾ If I were an inexperienced user, how could/would I use/abuse this function, service, or
feature?
◾◾ Is the function, service, or feature required to be on by default? If so, are there limits or
options that could help limit the risk from this function, service, or feature?
◾◾ Have success, failure, and abuse been considered when testing this function, service, or
feature?
Security functions, services, and features that are built into an application should be based
on existing application objectives, business requirements, use cases, and then test cases. When
developing security functions, services, and features within an application that are based on documented requirements, the development of test cases for security should be relatively easy. Many
times, this is not the case. The tester must then attempt to build security testing into the quality
assurance testing processes. If it is the responsibility of the tester to include security testing into
their process without the support of management and security being built into the life cycle, the
job of the tester becomes an uphill battle to ensure that security testing is included as part of the application
life cycle. Building in security requirements and test cases will produce a stronger and more secure
application and application development life cycle.
Over the last decade, many software issues have not improved. Some of the top software development flaws include the following (this is not an exhaustive list):

◾◾ Buffer overruns
◾◾ Format string problems
◾◾ Integer overflows
◾◾ SQL and command injection
◾◾ Failing to handle errors or revealing too much information
◾◾ Cross-site scripting
◾◾ Failing to protect network transactions
◾◾ Use of magic URLs and hidden form fields
◾◾ Improper use of SSL and TLS
◾◾ Use of weak authentication mechanisms, such as weak passwords
◾◾ Failing to store and protect data securely
◾◾ Information leakage
◾◾ Improper file access
◾◾ Race conditions
◾◾ Poor usability

How can we improve this? By extending the application development life cycle to include more
testing, specifically security testing. Without a good foundation for developing security testing,
the security of an application cannot be improved. Before developing application test cases and
testing requirements, standard definitions need to be accepted by the group. For example:
◾◾ A set of test requirements is a collection of technical or administrative actionable statements
that are not subject to interpretation, from which a tester can develop a test plan/procedure.
◾◾ A test case is a step-by-step scenario of the items to be tested, based upon a set of use cases
and requirements.
◾◾ A test plan/procedure is a detailed list of tasks, based on a requirement, for performing the
test. This is the "how." For example, a test plan/procedure will contain a requirement,
passed/failed status, and remarks about the test. A requirement would be something similar
to "the time stamp shall be read from the clock of a centralized time source."
◾◾ A test program is a set or collection of test plans/procedures.
◾◾ Defining a test requirement:
−− The term "shall" means the requirement is mandatory.
−− The term "should" means the requirement is optional.
−− The requirement shall be positively stated.
−− The requirement shall contain one and only one action.
−− The requirement shall be documented as technical or administrative.
−− The requirement shall be detailed enough to tell the tester what specifically needs to be
tested, and shall not contain implementation details.
−− The requirement shall include what needs to be verified.
−− The requirement shall use strong verbs. Action verbs are observable and better communicate the intent of what is to be attempted: plan, write, conduct, produce, apply,
recite, revise, contrast, install, select, assemble, compare, investigate, develop, demonstrate, find, use, perform, show, assess, identify, illustrate, classify, formulate, indicate,
represent, explain, etc.
−− The requirement shall avoid verbs that can be misinterpreted, such as understand,
know, think, determine, believe, be aware of, be familiar with, conceptualize, learn,
comprehend, and appreciate.
−− The requirement shall avoid generalities in objective statements; infinitives to avoid
include to know, to understand, to enjoy, and to believe. The words need to be not only
active but also measurable.
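As a concrete illustration, the definitions above can be modeled as simple records. This is a minimal sketch in Python; the class names, field names, and requirement identifier are illustrative assumptions, not anything prescribed by the chapter:

```python
# Hypothetical records modeling "test requirement" and "test plan/procedure".
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestRequirement:
    """One actionable, positively stated requirement with a single action."""
    identifier: str
    statement: str     # a "shall" or "should" statement
    kind: str          # "technical" or "administrative"
    mandatory: bool    # True for "shall", False for "should"

@dataclass
class TestProcedure:
    """The 'how': a requirement, the steps, and the recorded outcome."""
    requirement: TestRequirement
    steps: List[str] = field(default_factory=list)
    passed: Optional[bool] = None   # None until the test has been run
    remarks: str = ""

req = TestRequirement(
    identifier="REQ-001",   # illustrative identifier
    statement="The time stamp shall be read from the clock of a centralized time source.",
    kind="technical",
    mandatory=True,
)
proc = TestProcedure(
    requirement=req,
    steps=["Point the application at the central time source",
           "Compare the application time stamp with the source clock"],
)
proc.passed = True
proc.remarks = "Time stamp matched the centralized source."
```

Structuring requirements this way makes the "one and only one action" and "shall versus should" rules easy to check mechanically.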

Example of Integrating Security into the
Application Development Life Cycle
As an example of integrating security into the application development life cycle: while an
application is being developed, use cases (or business cases) are developed to ensure that the
application meets the needs of the stakeholders. These application use cases then form the basis
for developing test cases for quality assurance testers. An application use case can provide the
following baselines for developing a test case and test requirements:
◾◾ Name the system scope and boundaries.
◾◾ Who are the primary actors, or what are the endpoints sending and receiving information?
◾◾ What is the goal of the system or transaction?
◾◾ Who are the stakeholders?
◾◾ What are the requirements?
◾◾ What are the actor/endpoint interests, preconditions, and guarantees?
◾◾ What is the main success scenario?
◾◾ What are the steps to success?

From the above information described in an application use case, application security
requirements can be developed. For example, the application development requirements might
include something like the following (again, not an exhaustive list):

◾◾ Data entry fields shall have secure defaults.
◾◾ Access shall be based on the principle of least privilege.
◾◾ The application shall employ a defense-in-depth strategy.
◾◾ The application shall fail securely and not display sensitive information.
◾◾ The application shall verify and validate all services.
◾◾ The application shall employ segregation of duties based on roles.


From this list of requirements, we know that the following functions are the minimum that
are required for this application:
◾◾ Administration
◾◾ Integration
◾◾ Authentication
◾◾ Authorization
◾◾ Segregation of duties
◾◾ Access control
◾◾ Logging
◾◾ Record/log retention
◾◾ Reporting, alerting, and monitoring

As the scenarios are developed for test cases, the above functions need to be integrated into
the scenarios and steps within the application. A sample test case paragraph could be as follows:
The application user shall be authenticated using an application user account and
password prior to being placed in an application role and having one and only one user
session at one time. The application shall log all successful and failed authentication
attempts to access the application.
The steps developed within the application test case would then include the following:

1. The application shall display a user logon screen.
2. The user shall enter a user ID and password.
3. The application shall validate the entered user ID and password.
4. If the user ID or password is invalid, the application shall display an invalid logon message.
5. If the user ID or password is invalid, the application shall log an invalid logon message.
6. If the user ID and password are valid, the application shall validate that this is the only
signed-in location for the user account.
7. If the user ID and password are valid, the application shall log a valid logon message.
8. If the user ID and password are valid, the session shall be placed in an application role based
on the user account membership.
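The eight steps above can be sketched in code. This is a hedged illustration only: the in-memory user store, session set, and log format are assumptions made for the example (a real application would use a salted hash for password storage and a proper audit facility):

```python
# Minimal sketch of the logon steps above. The user store, session table,
# and log format are illustrative assumptions, not the chapter's design.
import secrets

users = {"alice": "correct horse"}   # toy store; real storage would be hashed
active_sessions = set()              # user IDs with a live session
audit_log = []                       # records both valid and invalid attempts

def logon(user_id, password):
    expected = users.get(user_id, "")
    if not secrets.compare_digest(password, expected) or user_id not in users:
        audit_log.append(("INVALID", user_id))   # step 5: log invalid logon
        return "Invalid logon"                   # step 4: one generic message
    if user_id in active_sessions:
        return "Already signed in elsewhere"     # step 6: single session only
    audit_log.append(("VALID", user_id))         # step 7: log valid logon
    active_sessions.add(user_id)
    return "Logged on"                           # step 8 would assign a role

assert logon("alice", "wrong") == "Invalid logon"
assert logon("alice", "correct horse") == "Logged on"
assert logon("alice", "correct horse") == "Already signed in elsewhere"
assert logon("mallory", "anything") == "Invalid logon"
```

Note that the failure message never reveals whether the user ID or the password was wrong, which matches the "fail securely" requirement above.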
From the above set of requirements, the application tester can now produce detailed steps to
perform security testing of the authentication process. These security testing steps need to include
testing as a good user, as an intentionally bad user, as an accidentally bad user, and as a user not
authorized to access and use the application.
Other things that could be considered when testing authentication and authorization could
include the following:
◾◾ Setting up multiple sessions with the same and different information to overload the system
◾◾ Valid/invalid/disabled accounts
◾◾ Password changes/lockouts/resets
◾◾ Elevating privileges (administrative versus nonadministrative)
◾◾ Accessing screens/fields/tables/functions
◾◾ Valid/invalid data in each field
◾◾ Logging out versus aborting the application


◾◾ Information disclosure on errors and aborting
◾◾ Information and access within log files and alerts
◾◾ Hidden fields—special areas to click to execute
◾◾ Can you get to a command line (listing or seeing directory content)?
◾◾ Can you put extra characters in a field and get the application to accept them?
◾◾ Use application security requirements to build security test cases.
◾◾ Use existing test cases and look at them from a security point of view to do additional testing.
◾◾ Look at what can accidentally or deliberately be done with the application.
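Several of the checklist items above (valid/invalid/disabled accounts, lockouts) lend themselves to table-driven tests. In this minimal sketch, `attempt_logon` is a hypothetical stand-in for whatever entry point the application under test exposes, and the toy policy inside it exists only to make the table runnable:

```python
# Table-driven security tests for account states.
def attempt_logon(account):
    # Toy stand-in policy: only an existing, enabled, unlocked account succeeds.
    if not account["exists"]:
        return "rejected"
    if account["disabled"] or account["locked"]:
        return "rejected"
    return "accepted"

cases = [
    ({"exists": True,  "disabled": False, "locked": False}, "accepted"),  # valid account
    ({"exists": False, "disabled": False, "locked": False}, "rejected"),  # invalid account
    ({"exists": True,  "disabled": True,  "locked": False}, "rejected"),  # disabled account
    ({"exists": True,  "disabled": False, "locked": True},  "rejected"),  # locked-out account
]
results = [attempt_logon(acct) == expected for acct, expected in cases]
assert all(results)
```

Keeping the cases in a table makes it easy to add the "abuse" rows (overlong input, concurrent sessions) alongside the normal ones.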

Using the flaws listed above, which are common to many applications, the following table
describes some of the tests that could be performed during quality assurance testing to build
security testing into the application life cycle.
Potential Software Flaws and the Security Testing to Be Included

Buffer overruns
◾◾ Carefully check your buffer accesses by using safe string and buffer handling functions.
◾◾ Use compiler-based defenses.
◾◾ Use operating system–level buffer overrun defenses.
◾◾ Understand what data the attacker controls, and manage that data safely in code.

Format string problems
◾◾ Use fixed format strings, or format strings from a trusted source.
◾◾ Check and limit locale requests to valid values.

Integer overflows
◾◾ Check all calculations used to determine memory allocations to check that the arithmetic
cannot overflow.
◾◾ Check all calculations used to determine array indexes to check that the arithmetic cannot
overflow.
◾◾ Use unsigned integers for array offsets and memory allocation sizes.

SQL and command injection
◾◾ Understand the database you use.
◾◾ Check the input for validity and trustworthiness.
◾◾ Use parameterized queries, prepared statements, placeholders, or parameter binding to
build SQL statements.
◾◾ Store the database connection information in a location outside of the application.
◾◾ Perform input validation on all inputs before passing them to a command processor.
◾◾ Handle the failure securely if an input validation check fails.
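The parameterized-query guidance can be demonstrated with Python's built-in sqlite3 module; the table and data here are invented for the example:

```python
# Parameterized query vs. string concatenation, using Python's built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "x' OR '1'='1"

# Unsafe: attacker-controlled input is concatenated into the statement,
# so the OR clause matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the driver binds the value as data; it can never become SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

assert unsafe == [("alice",)]   # injection succeeded
assert safe == []               # the literal string matched nothing
```

A security test case would feed the same malicious strings through every input that reaches the database and verify the "safe" behavior.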

Failing to handle errors
◾◾ Check the return value of every function.
◾◾ Attempt to gracefully recover from error conditions.

Cross-site scripting
◾◾ Check all Web-based inputs for validity and trustworthiness.
◾◾ HTML-encode all outputs originating from user input.
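The HTML-encoding guidance maps directly onto the standard library's `html.escape`; a small sketch (the page template is invented for the example):

```python
# HTML-encoding output that originated as user input.
import html

user_input = '<script>alert("xss")</script>'
encoded = html.escape(user_input)            # escapes < > & and quotes
page = "<p>Hello, {}</p>".format(encoded)

assert "<script>" not in page                # the markup cannot execute
assert "&lt;script&gt;" in page              # it renders as harmless text
```

A corresponding test case would submit strings like this through every Web input and confirm they come back encoded.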


Failing to protect network traffic
◾◾ Perform ongoing message authentication for all network traffic.
◾◾ Use a strong initial authentication mechanism.
◾◾ Encrypt all data for which privacy is a concern, and err on the side of privacy.
◾◾ Use SSL/TLS for all on-the-wire crypto needs.

Use of magic URLs and hidden form fields
◾◾ Test all Web input, including forms, with malicious input.

Improper use of SSL and TLS
◾◾ Use the latest version of SSL/TLS available.
◾◾ Understand the strengths and weaknesses of your approach if you are not using
cryptographic primitives to solve some of these issues.
◾◾ Use a certificate allow list, if applicable.
◾◾ Ensure that, before you send data, the peer certificate is traced back to a trusted CA and is
within its validity period.
◾◾ Check that the expected hostname appears in a proper field of the peer certificate.
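In Python, for example, the default `ssl` context already performs these checks (chain validation to a trusted CA, validity period, and hostname matching), so a test can verify the application has not weakened them:

```python
# ssl.create_default_context() enforces the checks described above.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # "use the latest version available"

assert ctx.verify_mode == ssl.CERT_REQUIRED    # peer cert must chain to a trusted CA
assert ctx.check_hostname is True              # hostname must match the certificate

# A client connection would then be wrapped like this (not executed here):
# with socket.create_connection((host, 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname=host) as tls:
#         tls.sendall(request)
```

A common flaw to test for is code that sets `verify_mode` to `CERT_NONE` or disables `check_hostname` to silence certificate errors.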

Use of weak password-based systems
◾◾ Ensure that passwords are not unnecessarily snoopable over the wire when authenticating.
◾◾ Give only a single, generic message for failed login attempts.
◾◾ Log failed password attempts.
◾◾ Use a strong, salted cryptographic one-way function based on a hash for password storage.
◾◾ Provide a secure mechanism for people who know their passwords to change them.
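The salted one-way hash recommendation can be sketched with the standard library's PBKDF2; the iteration count and salt length here are illustrative choices, not mandated values:

```python
# Salted, iterated one-way hash for password storage (stdlib only).
import hashlib, hmac, os

def hash_password(password: str):
    salt = os.urandom(16)                     # unique salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest                       # store both; never the password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("s3cret")
assert verify_password("s3cret", salt, digest)
assert not verify_password("guess", salt, digest)
```

A security test would confirm that the stored value changes even when two users pick the same password (because the salts differ) and that the plaintext never appears in storage or logs.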

Improper file access
◾◾ Be strict about what you will accept as a valid filename.

Race conditions
◾◾ Write code that does not depend on side effects.
◾◾ Be very careful when writing signal handlers.

Information leakage
◾◾ Define who should have access to what error and status information.
◾◾ Use operating system defenses such as ACLs and permissions.
◾◾ Use cryptographic means to protect sensitive data.

Failing to store and protect data securely
◾◾ Think about the access controls the application explicitly places on objects, and the access
controls objects inherit by default.
◾◾ Realize that some data is so sensitive it should never be stored on a general-purpose,
production server.
◾◾ Leverage the operating system capabilities to secure secret and sensitive data.
◾◾ Use appropriate permissions.
◾◾ Remove the secret from memory space once you have used it.
◾◾ Scrub the memory before you free it.


Poor usability
◾◾ Understand users' security needs, and provide the appropriate information to help them
get their jobs done.
◾◾ Default to a secure configuration whenever possible.
◾◾ Provide simple and easy-to-understand messages.
◾◾ Make security prompts actionable.

Conclusion
If the application life cycle includes security from the beginning, then application security
testing will logically follow when performing the quality assurance and user testing. If security is
not included throughout the application life cycle, it will be harder to accomplish good application security testing within the quality assurance and user test processes. Including application
security testing within the application life cycle will reduce the risk to information assets within
the enterprise.



Malicious Code



Chapter 18

Twenty-Five (or Forty) Years
of Malware History*

Robert M. Slade
As 1986 dawned, computer users around the world were unaware that life, as they knew it, would
never be the same. Within weeks, the Brain computer virus would be unleashed upon an unsuspecting planet, and the computing world would never be the same again!
Well, not quite.
Brain [or BRAIN or (c)BRAIN] was probably written and released some time before 1986. It
did become widespread, and well-known, and was likely the first virus written for the MS-DOS
platform, but it was not the first virus ever written. (We will get back to it.)
It is hard to say where to start with viruses. Viruses work best when they work silently, so the
most important ones did not cause a lot of fuss or fanfare. There are also a lot of people who claim
“the first virus” was a particular game or prank or utility, even when these programs had nothing
to do with reproduction, which is a central aspect of viruses (Figure 18.1).
I suppose we might begin with Howard Aiken. Not that computer viruses were his fault—far
from it! Aiken designed computers in the 1940s and 1950s, which were operated at Harvard
University, mostly in terms of work done for the U.S. Navy. (Grace Hopper was one of the crew
that maintained and programmed these computers.) Aiken’s central design structure made a very
strict separation between the programs that these computers used and the data that was operated
upon. This arrangement, which became known as the Harvard Architecture, would have made it
almost impossible for the viruses that we know today to operate. Unfortunately, the industry preferred the von Neumann architecture, which makes no distinction between programs and data,
and the malware situation that we see today was set to emerge.
Of course, it did not emerge right away. There were a few related items that came up over the
years, though. Like the game of Core Wars that programmers played, where some of the more
successful programs created copies of themselves. Or the prank played one time (ironically to get
attention for a security problem) where two programs would check for each other in a machine
and, if the other had been killed, start up a new copy.
*© Copyright Robert M. Slade. Used by permission.



Figure 18.1  Some viruses do present some kind of symptom or message. These illustrations
show the effect of the Cascade virus, which caused, over time, characters to fall from their
normal position on the screen toward the bottom, eventually forming “piles” of letters at
the base.



Core Wars
With programmers being who they are, the development of rogue programs
became a sport. This is now enshrined in the game of “Core Wars.” A program
that “simulates” a computer environment is run. A standard set of instructions,
known as "Redcode," is used to build programs that battle each other
within the simulated environment. The objective is survival. The use of such
tactics as attack, avoidance, and replication is of interest to virus research, as is
the trade-off between complexity of design and chance of destruction.

“Password trojans” were extremely popular in the university and college environments and
have lately been followed by more malicious identity and monetary theft systems known as phishing. The original programs were simple: a facsimile of the normal login screen will generally get
the user to enter his or her name and password. It is quite simple to have a program write this
information to a file or even mail it to a specific account.
A famous, if relatively harmless, prank in earlier computers was the “cookie” program, which
ran on PDP series computers. This program would halt the operation that the victim was working
on and present a message requesting a cookie. If the user typed "cookie," then processing would
continue. There was a later viral program that followed this pattern, a “Spanish Cookie” virus.
This copying of ideas—viruses using ideas that came from jokes and gags using symptoms of
viruses—is relatively common (Figure 18.2).
The earliest reproductive program was the one called Creeper, which was created, as an experiment in “mobile” computing, at one of the earliest companies involved in research into computer
networking, in 1971. Creeper copied itself from one machine to another over a network. It is,
therefore, closer to our definition of a worm than a virus.

Viruses and Worms Begin
John Shoch and Jon Hupp, two researchers at Xerox PARC (Palo Alto Research Center) were
interested in the concept of distributed processing—the ability of computers to work cooperatively
on single or related tasks. The specific experimental program they were testing was the one that
would examine other computers on the net, and, if a computer was idle after normal working
hours, e.g., submit a copy of itself to the idle machine. In this way, the original program would
spawn multiple copies of itself to idle machines to make use of the CPU time that would otherwise
go to waste. By breaking a problem down into small chunks, each capable of solution on one of the
machines on the network, you would have a large program consisting of small program segments
working on individual machines. Because biological worms are defined by the fact that they have
segmented bodies, they called this new type of program a “worm.”

Apple 1, 2, 3
The earliest case of a virus that succeeded “in the wild” goes back to late 1981. The idea was
sparked by a speculation regarding “evolution” and “natural selection” in pirated copies of games
at Texas A&M: the “reproduction” of preferred games and the “extinction” of poor ones. This led
to considerations of programs that reproduced on their own. Apple II computer diskettes of that



Figure 18.2  This is a screenshot of the prank program known as Anthem. (The program also plays the French national anthem over a sound card.) This reversal of the screen is also a feature of an older virus, known as Flip.

time, when formatted in the normal way, always contained the Disk Operating System (DOS).
The programmer attempted to find the minimum change that would make a version of DOS that
was viral, and then tried to find an “optimal” viral DOS. A group came up with an initial version
of such a virus in early 1982, but quarantined it because of adverse effects.
A second version was allowed to “spread” through the disks of group members. A bug was
identified after this viral DOS spread outside the group members, and a third version was written that avoided the memory problems: parts of the coding involve bytes that are both data and
opcode. Version 3 was subsequently found to have spread into disk populations previously felt to
be uninfected, but no adverse reactions were ever reported.
(For those who have Apple DOS 3.3 disks, location B6E8 in memory, toward the end of track 0,
sector 0 on disk, should be followed by 18 zero bytes. If, instead, the text “(GEN xxxxxxx TAMU)”
appears, the digits represented by the “x”s should be a generation counter for virus version 3.)
The story has an interesting postscript. In 1984, a malicious virus was found to be spreading
through the schools where all this took place. Some disks appeared to have immunity. All of these
immune disks turned out to be infected with version 3.

The Work of Dr. Cohen
No historical overview of viral programs can be complete without mention of the work of Fred
Cohen. He first presented his ideas in a data-security seminar in 1983, and his seminar advisor, Len Adleman (the "A" in "RSA"), suggested the term "virus" to apply to Cohen's concept.



Cohen’s master’s thesis on the topic was published in 1984, and his doctoral dissertation, in 1986,
expanded his earlier research.
His practical work proved the technical feasibility of a viral attack in any computer-system
environment. Equally important, his theoretical study proved that the “universal” detection of a
virus is undecidable, and therefore a "perfect" antiviral program is impossible. Cohen also outlined the three major classes of antiviral protection, which form the basis for all antiviral systems
created to date.

Viruses Start to Spread
It is reasonably certain that the first major virus started to reproduce in 1986. Autumn 1987 really
seemed to get the ball rolling with regard to virus research. In fact, most virus history seems to
have happened between 1986 and 1990, with everything that followed being a repeat, in one form
or another, of what had gone before.

(c)Brain
The “Brain” virus is probably the earliest MS-DOS virus. At one time, it was the most widespread
of PC viral programs. Like the Apple viruses in the early 1980s, it was disk-based, rather than
being related to program files. Until the advent of macro viruses, disk-based viruses (usually technically known as boot sector infectors or BSIs) were “superior” in terms of the numbers of infections created. The Brain family is prolific, although less so than Jerusalem.
(Seemingly, any successful virus spawns a plague of copies as virus writer-wannabes use it as
a template.) Like the later Jerusalem virus, it seems that one of the lesser variants might be the
“original.” The “ashar” version appears to be somewhat less sophisticated than the most common
Brain, and Brain contains text that makes no sense unless Brain is “derived” from ashar. Brain
contains other “timing” information: a “copyright” date of 1986 and an apparent “version” number of 9.0 (Figure 18.3).
Brain is at once sly and brazen about its work. It is, in fact, the first stealth virus, in that a
request to view the boot sector of an infected disk on an infected system will result in a display of
the original boot sector. However, the Brain virus is designed not to hide its light under a bushel:
the volume label of infected diskettes becomes "(c)Brain" (or "(c)ashar" or "Y.C.1.E.R.P" for different variants). Hence, the name of the virus (Figure 18.4).

Figure 18.3  (c)BRAIN disk map.

Figure 18.4  BRAIN version with address text removed.


Lehigh
In November 1987, it appeared that certain failed disks reported at Lehigh University were due to
something other than user carelessness. The Lehigh virus infected copies of COMMAND.COM,
and, when run (usually upon booting from an infected disk), the virus stayed resident in memory.
When any access was made to another disk, via the TYPE, COPY, DIR, or other normal DOS
commands, any (and only) uninfected COMMAND.COM files would be infected. A counter was
kept of infections: after four infections, the virus would overwrite the boot and FAT areas of disks.
The extreme destructiveness of Lehigh probably limited its spread: aside from copies in research
“zoos,” the Lehigh virus never spread off the campus.

CHRISTMA exec
In December 1987, IBM mainframe computers in Europe, connected via the EARN network,
experienced a “mailstorm.” Such events were fairly common on the early “internetworks,” caused
by various mailer problems. This particular mailstorm, however, was of unprecedented severity.
The CHRISTMA exec was a message that contained a script program. “Christmas card” messages with the REXX system can be more than just the usual “typewriter picture.” These messages could include forms of animation such as asterisk snowflakes falling on a winter scene, or a
crackling fire from a Yule log. Typing either “christmas” or “christma” would generate the “card.”
It really was not anything special—a very simplistic conifer shape made out of asterisks.
However, at the same time that it was displaying the tree on the screen, it was also searching for the lists of other users that either sent mail to, or received mail from, this account. The
CHRISTMA exec would then mail copies of itself to all of these accounts (Figure 18.5).
CHRISTMA exec was thus the first e-mail virus, and the first script virus, over a decade before
the much later Loveletter or LoveBug virus.
In March 1990, an MS-DOS virus, XA1 Christmas Tree, was discovered. Although it has no
technical or programming aspects related to any of the network worms, it seems to have been written
"in memory" of them. It contains (in German) the message "And still it is alive: the Christmas Tree!"

Jerusalem
Initially known as the “Israel” virus, the version reported by Y. Radai in early 1988 (also sometimes
referred to as “1813” or Jerusalem-B) tends to be seen as the central virus in the family. Although
it was the first to be very widely disseminated and was the first to be “discovered” and publicized,
internal examination suggests that it was, itself, the outcome of previous viral experiments.

Figure 18.5  Part (mostly the display section) of the script code for the CHRISTMA exec virus.

Although one of the oldest viral programs, the Jerusalem family still defies description, primarily because the number of variants makes it very difficult to say anything about the virus for
sure. The "Jerusalem" that you have may not be the same as the "Jerusalem" of your neighbor. Like
Brain before it, Jerusalem was used as a template by young virus writers who wanted to get into
the act, but lacked the necessary programming skills.

MacMag
The MacMag virus was relatively benign. It attempted to reproduce until 2 March 1988, using
the disk-based INIT resource on the Mac system. When an infected computer was booted on that
date, the virus would activate a message that “RICHARD BRANDOW, publisher of MacMag,
and its entire staff would like to take this opportunity to convey their UNIVERSAL MESSAGE
OF PEACE to all Macintosh users around the world.” Fortunately, on 3 March, the message
appeared once and then the virus erased itself.
Richard Brandow was the publisher and editor of the MacMag computer magazine. Brandow
at one point said that he had been thinking about the “message” for 2 years prior to releasing it.
(Interestingly, the date selected as a trigger, 2 March 1988, was the first anniversary of the introduction of the Macintosh II line. It is also interesting that a "bug" in the virus
that caused system crashes affected only the Mac II.) Indeed, he was proud to claim “authorship,”
in spite of the fact that he did not, himself, write the virus. (Brandow had apparently commissioned
the programming of the virus, and the internal structure contains the name “Drew Davidson.”)
MacMag holds a number of “firsts” in the computer world. It seems to have been released via a
dropper program that was embedded within a HyperCard stack data file, thus predating the later macro
viruses. It also infected a commercial application and was widely spread in that manner.




Scores
The Scores Mac virus is interesting for a number of reasons, but it gets inclusion here simply
because it was the first virus that had a definite company and application as a target.

Stoned, Michelangelo, and Other Variants
The Stoned virus was originally written by a high school student in New Zealand. All evidence
suggests that he wrote it only for study and that he took precautions against its spread. Insufficient
precautions, as it turned out: it is reported that his brother stole a copy and decided that it would
be fun to infect the machines of his friends.
Stoned spawned a large number of mutations ranging from minor variations in the spelling of
the payload message to the somewhat functionally different Empire, Monkey, and No-Int variations. Interestingly, only Michelangelo appears to have been as successful in reproducing.
Like the Apple viruses and Brain, Stoned was disk-based. Until the Word macro viruses came
along in 1994, disk-based viruses were the dominant form.

If They Joke About It, Is It Mainstream?
The Modem virus was first “reported” on 6 October 1988. Although this may not constitute the
very first virus hoax, many subsequent hoaxes have used many of the same features. The original
report was supposed to have come from a telecommunications firm in Seattle (therefore laying
claim to have come from some kind of authority), and claimed that the virus was transmitted
via the “subcarrier” on 2400 bps modems, so you should use only 300 or 1200 bps. (There is no
“subcarrier” on any modem.)
The initial source of the hoax seems to have been a posting on Fidonet, apparently by someone
who gave his name as Mike RoChenle. This pseudonym was probably meant as a joke on "Micro Channel," the then-new bus for IBM's PS/2 machines.

The Internet Worm
The Internet Worm is possibly the preeminent case of a viral program in our time. In many ways, this
fame (or infamy) is deserved: the Internet Worm is the story of data security in miniature. The Worm
used trusted links, password cracking, security holes in standard programs, the almost ubiquitous buffer overflow, standard and default operations, and, of course, the power of viral replication.
Server computers on networks are generally designed to run constantly, to be ready for "action" at all times. They are specifically set up to run various types of programs and procedures
in the absence of operator intervention. Many of these utility programs deal with the communications between systems.
When the Worm was well established on a machine, it would try to infect another. Two of the
major loopholes it used were a buffer overflow in the fingerd program and the debug mode of the
sendmail utility, which was frequently left enabled.
Robert Tappan Morris (RTM) was a student of data security at Cornell University when he
wrote the Worm. The Worm is often referred to as a part of his research, although it was neither
an assigned project, nor had it been discussed with his advisor.
RTM was convicted of violating the Computer Fraud and Abuse Act on 16 May 1990. In
March 1991, an appeal was denied. He was sentenced to 3 years of probation, a $10,000 fine, and
400 hours of community service.


Twenty-Five (or Forty) Years of Malware History  ◾  267

More of the Same
At this point in time, most of the major viral and malware technologies had been invented. Most
future nasties simply refined or rang the changes on what had gone before.

More Shapes for Polymorphism
Christopher Pile, who was known to the blackhat community as the Black Baron, produced
SMEG, the Simulated Metamorphic Encryption Generator. Polymorphism was a virus technology
that had been known since the relatively unsuccessful V2P1 or 1260 virus in the early days, and
even polymorphic engines were common. In May 1995, Pile was charged with 11 offences under
the United Kingdom’s Computer Misuse Act 1990.

Good Times for All—Not!
The Good Times virus warning hoax is probably the most famous of all false alerts and was certainly the earliest that was widely distributed. The hoax probably started in early December 1994.
Virus hoaxes and false alerts have an interesting double relationship with viruses: the hoax usually
warns about a fictitious virus and also suggests that the reader send the alert to all friends and

contacts, thus getting the user to do the reproductive part.
At the time of the original Good Times message, e-mail was almost universally text-based. The
hoax warned of a viral message that would infect your computer if you even read it, and the possibility of a straightforward text message carrying a virus in an infective form is remote. It provided
no information on how to detect, avoid, or get rid of the “virus,” except for its warning not to read
messages with "Good Times" in the subject line. (The irony that many of the warnings themselves contained these words seems to have escaped most people.)
Predictably, a member of the vx community produced a “Good Times” virus. Like the virus
named after the older Proto-T hoax, the programmed “Good Times” was an uninteresting specimen, having nothing in common with the original alert.

Proof of Concept
Concept was not the first macro virus (a virus that embeds executable program script within a data file) ever created. HyperCard viruses were commonplace in the Macintosh environment,
and a number of antivirus researchers had explored WordBasic and other malware-friendly macro
environments before the virus appeared in August 1995.
However, Concept was the first macro virus to be publicly described as such and certainly the
most successful in terms of spreading. For a while, it was easily the most widely encountered virus in
the world, knocking disk-based viruses out of the top spot for the first time. From the fall of 1995 until the
“fast-burner” e-mail viruses of 2000, macro viruses were pretty consistently in the top spot.
Unlike earlier boot sector and program file viruses, macro viruses carried their own source
code. This made it even easier for those with almost no programming skills to produce variants
based on a copy they encountered.

The Power of E-Mail
By 1999, and the turn of the millennium, everyone had become convinced of the benefits of
e-mail. So had virus writers.



W97M/Melissa (MAILISSA)

She came from alt.sex. Now, as the old joke goes, that I have your attention ... In this instance,
though, the lure of sex was certainly employed to launch the virus into the wild. The source of
the infestation of the Melissa Word macro virus (more formally identified as some variation on
W97M/Melissa) was a posting on the Usenet newsgroup alt.sex, probably originally on 26 March
1999. The message had an attachment, a Word document. The posting suggested that the document contained account names and passwords for Web sites carrying salacious material.
The document carried a macro that used the functions of Microsoft Word and the Microsoft
Outlook mailer program to reproduce and spread itself. Melissa is not the fastest-burning e-mail-aware malware to date, but it certainly held the record for a while. In 1994, an e-mail virus was considered
impossible. Padgett Peterson, author of MacroList, one of the best available macro virus protection
tools, stated, “For years we have been saying you could not get a virus just by opening E-Mail.
That bug is being fixed.”
As a macro virus, Melissa also carried its own source code. As an e-mail virus, it spread widely.
Therefore, it was widely used as a template for other variants, some of which appeared within days.

Happy99 (SKA)
Happy99 used e-mail to spread, but sent itself out as an executable attachment. To do this, it actually took over the computer’s connection to the Internet. Later viruses, using the same method,
would actually prevent users from contacting antivirus Web sites for help with detection and
disinfection (Figure 18.6).

PrettyPark
PrettyPark is versatile: not only is it a worm, but it also steals passwords and has backdoor functionality. Thus, it is one of the first examples of the convergence that we have seen recently: malware
containing functions from a variety of classes of nasty programs (Figure 18.7).

VBS/Loveletter
The Love Bug, as it will probably always be known, first hit the Net on 3 May 2000. It spread
rapidly, arguably faster than Melissa had done the previous year.

Figure 18.6  Viruses have many ways to disable protection, or to get you to disable it for them.
The Fakespy would actually pop up this message, to direct the user to a site to download … well,
anything the blackhats wanted, really.




Figure 18.7  Some viruses take over many parts of your computer. The Avril-A virus would open
this (legitimate) Web page in your Internet Explorer browser.

The message consisted of a short note urging you to read the attached love letter. The attachment filename, LOVE-LETTER-FOR-YOU.TXT.vbs, was a fairly obvious piece of social engineering. The .TXT bit was supposed to make people think that the attachment was a text file
and thus safe to read. At that point, many people had no idea what the .VBS extension signified,
and might in any case have been unaware that if a filename has a double extension, only the last
filename extension has any special significance to Windows. Putting vbs in lowercase was likely
meant to play down the extension’s significance.
VBS stood for Visual Basic Script, and, if you had updated your computer to Windows 98 or
2000, or even if you had updated to the latest version of Internet Explorer, it was now associated
with Windows Script Host, a new scripting facility provided by Microsoft. Almost everybody had.
Almost nobody knew the significance of VBS. Besides its hugely rapid spread, Loveletter had a
somewhat destructive payload that cost a lot of people their graphics and MP3 files.
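The double-extension trick is simple to demonstrate. The following minimal sketch uses the filename from the text above; the code itself is only illustrative of the point that the final extension is what governs execution:

```python
import os

# Illustrative sketch: only the LAST extension of a double-extension
# filename is significant. A casual reader's eye stops at ".TXT";
# Windows acts on ".vbs" -- exactly what Loveletter exploited.
name = "LOVE-LETTER-FOR-YOU.TXT.vbs"
base, ext = os.path.splitext(name)
print(base)  # the part a casual reader trusts: LOVE-LETTER-FOR-YOU.TXT
print(ext)   # the part that decides how it runs: .vbs
```

The same splitting rule is why hiding known extensions in Explorer (the default at the time) made the deception even more effective: the displayed name ended in what looked like a complete, safe filename.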

VBS/Stages
VBS/Stages spread via Internet chat clients, e-mail, and mapped network drives. If it arrived
by e-mail, the attachment was called LIFE_STAGES.TXT.SHS. The .SHS extension denotes a
Windows scrap object, a file that can, in principle, be any kind of file and can be executed.
Windows Explorer does not show the .SHS file extension, irrespective of whether file extensions
are set to be displayed, thus providing another interesting way for viruses to hide what they are
(Figures 18.8 through 18.10).

Linux Worms
By the spring of 2001, a number of examples of Linux malware were extant. Interestingly, these new Linux worms were similar to the Internet/Morris/UNIX worm in that they primarily relied on bugs in automatic networking software.




Figure 18.8 Note that the two “test” file icons look very similar to either text or undefined file
icons.

Poly/Noped
This 2001 VBScript worm displays a message about stopping child pornography. It scans for
JPEG files on the hard disk, looking for specific strings in the filename that the virus author obviously thought might relate to pornography. The worm will collect these files and e-mail them to

Figure 18.9  However, in detail view, although the SHS extension is still not shown, the fact that
the files are scrap objects is noted.



Figure 18.10 A directory listing in a DOS box does show the SHS extension.

addresses thought to belong to law enforcement agencies. Despite the attempt to prove that viruses
can provide a socially useful service, this does not help anyone.

LINDOSE/WINUX
Summer of 2001 saw a virus that could infect both Linux ELF files and Windows PE-EXE files.
Big deal. Jerusalem and sURIV3 could infect both .COM and .EXE files back in 1987.

Code Red
In 2001, Microsoft’s IIS contained a buffer overrun vulnerability in the index server, and Code
Red used it to spread startlingly quickly as a worm. A later worm, Nimda, took the multipartite
concept (spreading via multiple objects or vectors) to new heights and would spread using the same
worm activity, as well as e-mail attachment, file infection, and spreading over local area networks

using drive shares (Figure 18.11).
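The random-scanning behavior behind Code Red's startling speed is easy to model. In the toy sketch below, none of the code is any real worm's logic, and the address-space size, vulnerable fraction, and one-probe-per-host-per-step scan rate are all invented for illustration:

```python
import random

# Toy model of a random-scanning worm: each infected host probes one
# random address per time step; a hit on a vulnerable host infects it.
random.seed(1)
ADDRESS_SPACE = 10_000                        # toy address space
VULNERABLE = set(random.sample(range(ADDRESS_SPACE), 2_000))
infected = {next(iter(VULNERABLE))}           # patient zero

history = []
for _ in range(40):                           # 40 time steps
    for _ in range(len(infected)):            # each infected host scans once
        probe = random.randrange(ADDRESS_SPACE)
        if probe in VULNERABLE:               # found a vulnerable machine
            infected.add(probe)
    history.append(len(infected))

print(history)  # grows quickly early on, then saturates as targets run out
```

Because every new infection adds another scanner, growth compounds until uninfected vulnerable hosts become scarce. Worms that bias their scanning toward nearby addresses instead of scanning uniformly tend to stay concentrated in particular address ranges, which is one way the confinement patterns mentioned below can arise.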

Sircam
In 2001, Sircam searched for documents on your computer and incorporated the document into
the virus itself, thus changing the size and name of the attachment. However, because it also
mailed the document out to the addresses on your computer, it could breach your privacy. A later
virus, Klez, did something similar. In one case, a confidential document from a security concern
got sent to a mailing list.

Sobig.F
As I was struggling to get a book manuscript off to the publisher in August 2003, I was having a
hard time keeping my e-mail open because of a massive deluge of Sobig.F infected messages. Sobig
was one of the original spambotnet viruses, carrying with it software for its own SMTP server, a
backdoor capability, and other utilities. It was at this point that virus writers seemed to start to think
in commercial terms. Later, the authors of Bagle, Netsky, and MyDoom would actually engage in a
type of war, trying to target and take down each other’s infected nets, while building their own to
“rent” massive numbers of infected machines to spammers. Interestingly, the virus writers are also
using the spambotnets as distribution systems to “seed out” new viruses as they are written.



Figure 18.11  Worms can spread quickly, although some have a tendency to remain confined to a given
location or range of Internet addresses. You can explore the patterns of spread with a worm
simulator that can be downloaded from the Symantec Web site.

Spyware and Adware
There is a lot of controversy over a number of technologies generally described
as adware or spyware. Most people would agree that the marketing functions
are not specifically malicious, but what one person sees as “aggressive selling,”

another will see as an intrusion or invasion of privacy. Therefore, it is not only difficult to say for sure whether a specific piece of software is adware or spyware, but also exactly when this type of malware started to appear.
Because it is so hard to draw the line between legitimate and malicious
programs in this area, you will probably have to get spyware detection separately from antivirus scanning. After all, the antivirus companies have a good
test: if you can make it reproduce, it is a virus. Certain companies involved in
detecting spyware are trying to find similar functional definitions of spyware
and adware, but the initial proposals have not been greeted with universal
enthusiasm (Figure 18.12).
Even the antispyware companies themselves admit that it is not always possible to determine which spyware is actually unwanted. A number of the spyware or adware programs are related to certain games or utilities. If you want
the program that you downloaded from the Net, you have to let the spyware
or adware run (Figure 18.13).
All questions of the difficulty of defining spyware aside, there is no question
that there is an enormous amount of it out there. In fact, although a computer
that seems to be running more slowly than usual has traditionally been suggested
as the sign of a virus, now we are much more likely to find that the culprit is
adware or spyware. (On one visit to a Web-based greeting card site, I found that



Figure 18.12  Spybot Search and Destroy, one of the spyware-detecting programs you can find.

it installed 150 pieces of spyware on my computer. Obviously, we do not read
those greeting cards anymore.) Be careful out there: you no longer have to ask to
download and install games or screensavers to get spyware these days. A lot of
sites will do “browse by” installs when you simply visit or look at the site.

Can I Get a Virus on My Cell?
Or BlackBerry, PDA, smartphone, or other form of mobile computing device?
The mobile issue is fairly simple. Mobile malware is already out there, although

none of it has made much of an impact. So far. Eventually, it will. The only
indicator that we have ever found about prevalence of malware on a given

Figure 18.13  When you run Spybot, it warns you that deleting certain instances of spyware or
adware may cause the program that you actually wanted to cease operating.

