Visual Basic 2005 Design and Development, Chapter 2


Lifecycle Methodologies
An application’s lifecycle covers its entire existence, starting with a vaguely formed idea and last-
ing through design, development, deployment, and use by customers. The lifecycle may include
any number of revisions and new releases, and ends when the last copy of the application is no
longer used and the whole thing is relegated to history. An application’s lifecycle may last 30 years
and span dozens of releases and enhancements, or it may be cut short during design when devel-
opers decide the application is impractical or doesn’t adequately meet customer needs.
Whether an application is intended to last a month, a year, or indefinitely, you need to manage its
lifecycle. For a simple, one-use throwaway application, you may spend very little time in lifecycle
management. For longer-lived applications, you may need to spend considerable effort to ensure
that the application moves as smoothly as possible from the idea stage to a usable product, and
that it remains usable as long as possible, or at least as long as it is needed.
This chapter describes several different lifecycle methodologies that you can use to manage differ-
ent kinds of applications. It discusses the strengths and weaknesses of each method, and explains
which work best when developing in Visual Basic.
Lifecycle Stages
All lifecycle strategies use the same fundamental stages arranged in various ways. In one
approach, some stages may be made longer or shorter. In another approach, some stages may be
reduced practically to the point of non-existence, whereas others are repeated several times. Often,
different stages blur together with one overlapping another. However, every approach uses the
same basic stages to one extent or another. The following sections describe these basic stages.
Idea Formulation and Refinement
Every application begins as an idea. Someone gets a notion about some program that might be useful.
Sometimes the idea is only a vague notion, such as, “Wouldn’t it be great if we had a program that could
generate reports for us?” Other times, the idea may be fully formed from the start with the idea maker
knowing exactly what the program must do.
Once the basic idea is discovered, it should be discussed with others who might have different insights
into the desired result. As many people as possible should have a chance to provide input, making sug-
gestions, additions, and improvements until you know pretty much what the application should do.
Many different people might provide useful feedback in this stage of the lifecycle. People who will eventually
use the application (the customers) know what they need to accomplish. If the application will
help with an existing process, the customers know how the process is currently performed. Though the
application doesn’t need to follow the same approach that the customers use now, it’s worthwhile exam-
ining the current methods to see if they contain any useful ideas.
The supervisors and managers of the final users also often have useful insights. Even if these people
won’t use the application daily, they sometimes have a higher-level view of the environment in which
the application will run. They may be able to suggest ways to expand the scope of the application to
handle other related tasks. They may also be able to restrict the scope of the idea to avoid duplicating
other efforts that may not be obvious to the end users.
It is frequently useful to ask people to daydream during this phase. Ask questions such as, “In an ideal
world where anything is possible, what would you like the application to do?” The idea-makers may
decide they want a telepathic interface that can guess what items are in inventory at any given moment.
Ideas like this one may sound outlandish to the customers, but recent innovations in radio frequency
identification (RFID) do basically that. Now is not the time to stifle creativity.
Application architects, user interface designers, programmers, and developer types are often involved at
the idea formulation and refinement phase, but their participation is not necessary, and it is sometimes
even detrimental. These people know which application features are easy to build and which are not. In
an effort to keep the ideas realistic, they may inhibit the other idea-refiners and close down useful
avenues of exploration before their final payback is discovered. Though one feature may be unrealizable,
discussing it may spawn other more useful ideas. It’s better to have lots of unrealistic ideas that are later
discarded than to miss a golden opportunity.
Happy accidents are not uncommon. Ice cream cones, chocolate chip cookies, the implantable pacemaker,
penicillin, Post-it Notes, and Scotchgard were all invented by people trying to do something else (an ice
cream seller ran out of bowls, Ruth Wakefield thought the chips would dissolve, Wilson Greatbatch was
trying to make a device to record heartbeats, Alexander Fleming was studying nasal mucus but his
samples were contaminated by mold, the glue used by Post-its was an attempt to make a better adhesive
tape, and Scotchgard came from efforts to improve airplane fuels).
Although you are unlikely to accidentally invent the next blockbuster three-dimensional action adven-
ture game while trying to build a better billing system, you may stumble across a better way to identify
at-risk customers. In a dispatch system I worked on, we added a mapping feature that let dispatchers see
the routes that repair people were scheduled to follow. We thought this was a minor feature, but it
turned out to be the easiest way for the dispatcher to see where everyone was at the same time, so the
dispatchers used it all the time. A fortunate accident.
Just as experienced developers can stifle creativity, too much executive participation can restrict the
scope of the project. Eventually, the application’s lifecycle must consider real-world factors such as
schedule and cost, but that can be left for later. It’s better to have too many good ideas and then pick out
the best for implementation, rather than to stop at the first set of usable ideas you stumble across.
In many projects, particularly those with a large existing group of users, some of the users will be gen-
erally against the whole idea of the project. Sometimes this resistance is caused by inertia (“We’ve done
fine without it so far”). Sometimes it’s caused by a general fear of a loss of power, respect, or control.
Sometimes it’s caused by a general fear or dislike of change.
If you ignore such a person, he or she can become a real problem, spreading fear and dissent among the
users. However, if you recruit this person into the development effort at the beginning, he or she can
become a tremendous asset. This person is likely to see things from a different perspective than the man-
agement and programming team members, and can provide valuable input. Often, this person will also
become one of the project’s greatest proponents. Once the other users see that this person has been won
over, they often follow.
One person who should definitely be consulted during idea development is the idea’s originator. Even if
the idea changes greatly during refinement, it’s worth keeping this person involved, if for no other rea-
son than to make the idea seem appreciated. People with usable ideas are important, and if you yank the
project away, you may never get another idea from this person again.
Eventually, you may run across someone who is so committed to an original idea that he or she won’t
allow the idea to change and grow. This person keeps pulling the idea back to its earliest form no matter
how much others try to stretch and improve the idea. I’ve seen this on several projects. Often the perpe-
trator has recently learned about a new concept or technique, and is determined to use it no matter what.
In some cases, you can civilly explain to the person that the idea has outgrown the original concept and
that it is time to move on. Occasionally, you may need to appeal to a higher level of management to
make a decision stick.
At the end of the idea formulation and refinement stage, you do not need a schedule, cost estimates, an
architectural design, a user interface design, or anything else that is directly related to writing code.
Those all come later.
Another danger at this stage is the person who wants to rush headlong into development. Managers
often want to move quickly to produce something they can hold up to show progress. Developers often
want to hurry along to their favorite part of the lifecycle: development. Although you eventually need to
actually build something, it’s a mistake to end the idea stage too early. If you dash off into development
with the first workable idea, you may be missing out on all sorts of wonderful opportunities.
At this point, you should have a pretty good idea of the application’s goals. The list of goals may be very
short, as in, “Pull data from a specific database and enter it into an Excel workbook.” The list may be
extremely large, containing hundreds of “must haves,” “nice to haves,” and “wild wishes.” The next
stage, requirements gathering, refines these possibly vague ideas and makes them more concrete and
measurable.
Team Building
This isn’t really a stage in an application’s lifecycle. Rather, it’s an ongoing activity that should start early
and last throughout the project. Ideally, you should start building a solid team of supporters during idea
formulation and refinement, and build on that core as the lifecycle progresses.
This core team should include people who will be involved in all phases of the lifecycle. It can include
customers, designers, developers, management, and customer support people, although you may need
to rein in some of these people during different stages. For example, during the idea formulation and
refinement phase, developers should not dwell on what is easy to program, and managers should not
focus exclusively on time and expense.
Two people who are particularly useful starting with the very earliest phases of the project lifecycle are a
customer champion and a management patron.
The customer champion represents the final users of the application. Often, this person is (or was) one of
those users, or is their direct supervisor. This person understands the needs of the users inside and out,
and can make informed decisions about which features will be most useful in the finished application.
This person must have the respect of the other users, so any decisions he or she makes will be accepted.
This person should be able to sign off on the project and say that it has officially achieved its goals.
It is also extremely helpful to have a customer champion with an easygoing attitude of give and take.
During later stages of the lifecycle, you may discover that some features are much harder to implement
than you had originally planned. At that point, it’s tremendously valuable to have a customer champion
who understands the realities of software development, and who knows that insisting on sticking to the
original design may mean cost and schedule slips. The best attitude here is one of, “The details don’t
matter, as long as the finished application lets the users do their jobs.”
People who have played the role of customer champion before often understand this give-and-take
approach. If your champion doesn’t have experience with development, you can start laying the ground-
work for this type of interaction during the idea formulation and refinement phase. Use phrases such as,
“We’ll see how hard that would be when we get into the design phase” and “If we start running out of
time, perhaps we can defer that feature until release two.”
Customers familiar with the traditional Waterfall lifecycle model described later in this chapter may
have more trouble with this concept. In that model, requirements are laid out early and exactly in the
cycle, and they must be followed strictly with little or no room for negotiation. A devoted Waterfall
enthusiast can make development difficult.
Fortunately, most people who are really dedicated to their jobs don’t care about the details too much as
long as they can do their jobs well. Most customers don’t insist on sticking to a plan long after the plan
has proven flawed (although I’ve met several managers with that attitude).
The management patron is someone with enough influence in the company to ensure that the project
moves forward. This person doesn’t necessarily need to be involved in the project’s day-to-day deci-
sions, but should be aware of the project’s progress and any high-level issues that may need to be
resolved (such as the need for staff, budget, and other resources).
Practically anyone with decision-making power can play this role, depending on the scope of the appli-
cation. For small projects that will be used by only one or two users, I’ve worked with patrons who were
first-tier managers with only one or two direct reports. For applications that would affect hundreds of
users and thousands of customers, I’ve worked with corporate vice presidents who oversaw tens of
thousands of employees. On a particularly interesting football-training project, Hall of Famer Mike
Ditka was one of our management patrons.
Two types of people that you may want to avoid for the role of patron are the soon-to-be-ex-managers
and people with lots of enemies.
If management changes, the new management may not have the same commitment to your project as
your previous patron. Sometimes, new management may cancel or weaken support for a project to dis-
credit the previous management, to provide resources for new pet projects, or just to make a change.
Unless you have a strong customer champion, keeping the project on track may be difficult. Of course,
you probably won’t know if your company is about to be sold and a new management team installed,
but if you know that a particular supervisor is up for retirement in six months, you might want to iden-
tify possible replacements and decide whether the transition is likely to be smooth.
If your management patron has lots of enemies, you may be in for an arduous project lifecycle. Political
infighting can make it difficult to attract and keep good team members, obtain needed resources, or gain
and keep customer confidence.
I worked on one project with particularly nasty politics. Corporate vice presidents were slugging it out
for control of this application and control of software development in general. At times, developers faced
threats and blatant attempts to sabotage the code. Fortunately, we had a strong customer advocate and
good customer buy-in, so the project was eventually a success, but it was a long, hard battle to keep the
project on track.
To avoid this sort of situation, you must take a look at the environment in which the potential patron
works. If it’s a hostile environment full of political infighting, you may want to look for another manage-
ment sponsor. Battling heroically against all odds to bring the customers the best possible application
despite adversity brings a certain satisfaction after the fact, but it can make for a pretty grim couple of
years during development.
Requirements Gathering
After the idea has had some time to ferment, you need to take a closer look at it and specify its features
more precisely. You need to decide exactly what the features you have identified mean. You need to
describe them with enough detail so that developers can understand what is necessary.

You also need to define the application’s requirements with enough precision to determine whether the
finished application has met its goals. The requirements must include specific measurable goals and mile-
stones that you can use to unambiguously decide whether the application has achieved its objectives.
For example, the following statement is a measurable goal: “When a user modifies a customer record,
the program will make an ‘audit trail’ record storing the date and time, the user’s name, the customer
ID, and the type of data changed. Supervisors (and only supervisors) will be able to view these records
by date, user, and type of change.” This is a nice concrete statement that you can easily verify by trying it
out in the program.
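If it helps to picture how such a goal eventually gets verified, the sketch below shows one way an audit-trail record might be written in Visual Basic. The AuditTrail table, its columns, and the RecordCustomerChange routine are invented for illustration; the real schema and code belong to the later design and implementation phases, not to the requirements.

    ' Illustrative sketch only: write one audit-trail record when a customer
    ' record changes. The AuditTrail table and its columns are assumptions.
    Imports System.Data.SqlClient

    Public Module AuditTrailSketch
        Public Sub RecordCustomerChange(ByVal connectionString As String, _
                ByVal userName As String, ByVal customerId As Integer, _
                ByVal changeType As String)
            Using conn As New SqlConnection(connectionString)
                Dim sql As String = _
                    "INSERT INTO AuditTrail (ChangedAt, UserName, CustomerId, ChangeType) " & _
                    "VALUES (@ChangedAt, @UserName, @CustomerId, @ChangeType)"
                Using cmd As New SqlCommand(sql, conn)
                    cmd.Parameters.AddWithValue("@ChangedAt", DateTime.Now)
                    cmd.Parameters.AddWithValue("@UserName", userName)
                    cmd.Parameters.AddWithValue("@CustomerId", customerId)
                    cmd.Parameters.AddWithValue("@ChangeType", changeType)
                    conn.Open()
                    cmd.ExecuteNonQuery()
                End Using
            End Using
        End Sub
    End Module

The second half of the requirement (that only supervisors can view the records) would be verified separately through the application’s security checks.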
Many projects bog down at the finish while developers and customers bicker over whether the applica-
tion meets its goals. This is particularly a problem in projects that follow the Waterfall lifecycle model
described later in this chapter because that model makes changing the goals difficult. During the project,
if you discover that the goals are outdated, or that they do not solve a useful problem, it may be difficult
to adapt and revise the application’s design. That can lead to fighting between the developers who did
what they thought they were supposed to do (implement the original requirements) and customers
whose needs are not satisfied by the original requirements.
Other common requirements statements specify desired performance. For example, “When the user
requests a customer order for processing, the program will display the next order within five seconds.”
This statement is reasonable and measurable, but requires a bit more clarification. What may be possible
with a few users on a fast network may be impossible if the network is congested and hundreds of users
try to access the database at the same time.
A more precise requirement should specify the system’s load and allow for some fluctuation. For exam-
ple, “When the user requests a customer order for processing, the program will display the next order
within five seconds 95 percent of the time with up to 100 users performing typical duties.” This may still
be difficult to test, but at least it’s fairly clear.
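As a rough sketch of how the timing half of that statement might be checked (the load half requires a multi-user test harness), the function below times repeated calls to the operation named in the requirement and reports whether at least 95 percent finished within five seconds. The module, delegate, and function names are invented for illustration.

    ' Sketch only: check the "within five seconds 95 percent of the time" clause.
    Imports System.Diagnostics

    Public Module ResponseTimeSketch
        ' Stand-in for whatever operation the requirement names
        ' (here, fetching the next customer order).
        Public Delegate Sub TimedOperation()

        Public Function MeetsResponseGoal(ByVal operation As TimedOperation, _
                ByVal trialCount As Integer) As Boolean
            Dim withinLimit As Integer = 0
            For i As Integer = 1 To trialCount
                Dim watch As New Stopwatch()
                watch.Start()
                operation.Invoke()          ' run the operation being measured
                watch.Stop()
                If watch.Elapsed.TotalSeconds <= 5.0 Then withinLimit += 1
            Next
            ' True when 95 percent or more of the trials were fast enough.
            Return withinLimit / trialCount >= 0.95
        End Function
    End Module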
I worked on one project where performance testing was performed by a group of 20 users in the same
room. The test leader had every user type in exactly the same data at the same time and press the Enter
key simultaneously. They quickly learned that this was not a realistic test because it is very unlikely that
every user would want to access the same customer record at exactly the same time. More realistic tests
used fake jobs to simulate real work with users working jobs and requesting new ones at their own paces.
One useful method for characterizing requirements is to build use cases. These are scripted scenarios that
portray a user trying to work through a common task. For example, an emergency dispatch system
might include a use case where the user takes a phone call from someone reporting a house fire, records
necessary information, reviews and possibly overrides recommendations for the units to dispatch, and
dispatches the units.
You can find some example use cases at
www.objectmentor.com/publications/usecases.pdf.
At this stage, the requirements should not specify a solution to the use cases. In fact, a use case should
specify as little about the solution as possible. It should not indicate how the user answers the phone,
records necessary information, views recommended units, and approves the dispatching selection. It
should just indicate the things that must happen, and leave the details open for later design.
During the requirements gathering stage, you should also categorize ideas by their degree of impor-
tance. At a minimum, you should make a list of “must haves,” “nice to haves,” and “wild wishes.” You
may even have a category of “don’t need” for items that you’ve decided are really not such great ideas
after all.
Depending on the lifecycle model you follow, items may move between these categories during later
stages of development. You may find an idea that you put in the “wild wishes” category is more impor-
tant than you thought. Or, changes in the users’ rules and regulations might make a “nice to have” fea-
ture mandatory.
This phase is one point where various lifecycle approaches can differ. The best strategy is to avoid any
actual design at this stage and focus on results. Rather than specifying exactly how the application
should perform its actions, describe the results of those actions and what the actions will accomplish.
Instead of designing a customer entry form, just say there will be a customer entry form where the user
can enter customer data with links to order and payment information. Leave the details for the following
design phases.

This approach works particularly well with iterative approaches where everyone expects changes to the
design. It doesn’t usually work as well with stricter Waterfall models where people expect every last
detail of the application to be chiseled in stone before any of the work begins. With that approach,
requirements gathering sometimes turns into a search for an exact specification. Avoid this as long as
possible. That kind of specification makes explicit and implicit assumptions about the design, and you
haven’t started the design yet.
If your development environment requires that you begin a specification at this point, begin the design,
too, either formally or informally. Try to do enough design work to support your decisions in the specifi-
cation. If you don’t have time to design the features properly, you may be able to reuse ideas from exist-
ing applications. If you’ve written an order-taking form before, reuse the ideas that worked well in that
application. While you may not have time to perform a detailed design analysis, you can piggyback on
the analysis you used for the previous application.
Feasibility Analysis
Feasibility analysis occurs at two levels. At a higher level, you must decide whether the project as a
whole can succeed. At a lower level, you must decide which features should be implemented.
High-Level Feasibility Analysis
At a high level, feasibility analysis is the process of deciding whether the project as a whole is possible given your time,
staffing, cost, and other constraints. At this scale, the goal is to determine whether the project will be
possible and worthwhile as quickly as possible. If the project is not viable, you want to know that as
soon as possible so you can bail out without spending more time and money on it than necessary.
It’s much cheaper to cancel a project after a 1-month feasibility study involving 3 people than it is to can-
cel a project that has been running with 20 staff members for 2 years. If people cost about $100,000 per
year, then the difference is spending $25,000 on a feasibility study versus wasting $4 million.
In this sense, feasibility analysis is a bit like poker. Many people think the goal in poker is to get the best
hand possible. The true goal is to decide whether you have the winning hand as quickly as possible. If
you are going to lose, you need to get out quickly before you bet a lot of money in the pot. If you think
you are going to win, you want to drag the betting out as long as possible to get as much money in the
pot as you can.
Because the goal of high-level feasibility analysis is to determine the overall fitness of the project as soon
as possible, you must start working on it as soon as you can. If the ideas you come up with during a
month of idea formulation and refinement are not worth expending some effort, cancel the project, write
off the $25,000 you invested studying the ideas, and use the $3.975 million you saved on another project.
If the refined ideas you develop during a month of requirements gathering no longer seem useful, again
abandon the project, forget the $50,000 you’ve spent so far, and invest the $3.95 million you saved on
something else. (You can see from this example that the sooner you cancel the project the more time and
money you save.)
As time and the project move on, you should gain an increasingly accurate idea of whether the project as
a whole is viable. After doing some preliminary design work, you should have an idea of how compli-
cated the application is. It’s usually around this time that most organizations make a final decision about
whether to commit to the project. Beyond this point, the amount of money invested in the application
becomes large enough that people get in trouble and careers can be ruined if they admit they made a
mistake and suggest that the project be canceled.
After making a detailed high-level design, you should have at least some guesses about how many peo-
ple you will need and how long the project will take.
One project that I worked on should have been canceled long before it actually was. Part of its design
turned out to be significantly more complicated than expected. After we had wasted a bit under a year working
on the application, several quick changes in management led to the project being mothballed. It was a
small project and didn’t last too long, so it probably wasted less than $500,000. If we had realized that the
project was unfeasible sooner, we might have been able to change the design and finish six months late
and $250,000 over budget, or we could have canceled the project and saved six months and $250,000.
For an even more dramatic example, a friend of mine once worked on a huge multi-billion dollar space-
craft project involving hardware and software. At some point, most of the participating engineers realized
that it was unfeasible. Unfortunately, the company had already invested a lot of money in it and was
thoroughly committed at a high level. The project manager suggested that the project was impossible and
was promptly fired. After a few months, the new project manager reached the same conclusion, made the
same suggestion, and suffered the same fate. This loop repeated a couple more times before the project’s
failings were too large even for upper management to ignore and it finally went under. Had the company
listened to the first project manager, who knows how much time and money they could have saved.
With better information about how hard the project is as time goes on, you can make a more informed
decision about whether to continue. Until the point of full commitment, the result of your analysis
should either be “cancel the project” or “continue until we have more information.” Try to delay full
commitment until you are absolutely certain that the project can succeed.
Another example was a rewrite of an enormous application that had been written and modified over many years
in several different versions of Visual Basic. The rewritten version was poorly designed, and the new
features that were added provided only a small benefit. I don’t know how much money was spent on this
rewrite, but I wouldn’t be surprised to learn it was $6 million or more.
Furthermore, I predict another rewrite will be necessary, hopefully with a greatly improved design,
within the next few years as the company transitions to a new development platform.
Even a year into this project, the project team could have seen the clues that the new development plat-
form was coming, canceled the first rewrite, and put the roughly $3 million saved toward moving to the new
platform a bit sooner. The users would have lost only relatively small enhancements and gotten the
final, improved version of the system a year sooner.
Low-Level Feasibility Analysis
At a lower level, you must determine which features can and should be implemented given your time,
cost, and other constraints.
After the requirements-gathering phase, you should have a good idea of what the project’s features are
and how much benefit they will provide. You should know which features fall into the “must have,”
“nice to have,” and “wild wish” categories.
Now is the time for quick negotiation with your customer champion to decide which features make it
into the official requirements specification, which are deferred until a later release, and which are
dropped completely.
To help your customer champion understand the tradeoffs, you should phrase levels of difficulty in
terms of costs and exchanges. For example, “We can give you this feature, but it will delay release by six
months.” Or, “If we implement this feature and keep our original schedule, then we will have to defer
this other feature.” Or, “We can separate the development of this piece from the rest of the project and
hire four additional developers for six months (at a cost of $200,000).” Then it’s up to the customer
champion to decide which options are acceptable to the users.
Initially, you may have only a vague idea about how difficult the various features are to implement. As
is the case with high-level feasibility analysis, you will have a better understanding of how difficult dif-
ferent features are to build as time marches on.
Also as is the case with the high-level analysis, it is best to keep your options open as long as possible,
rather than committing to specific features, until you have a decent idea about how difficult they will be
to build. If you discover that a feature is harder (or easier) to provide than you originally estimated,
that’s the time when having a customer champion with an easygoing give-and-take nature is invaluable.
It’s also when iterative lifecycle methodologies described later in this chapter are most useful, and the
Waterfall model brings the most pain.
High-Level Design
During high-level design, architects lay out the application’s main systems and begin to outline how they
interact with each other. They describe main groups of functions, such as order entry, billing, customer
management, reporting, database processing, Web Services, inter-process communication, and so forth.
The high-level design should not specify how the systems work, just what they do. To the extent that
you understand them, you can also sketch out the interactions among the systems and some of the fea-
tures that they will implement. For example, experience with previous applications may suggest a par-
ticular set of tables that you will want to use in storing customer data in a database. Or, if you know that
the application will need to pull files from FTP servers on the Internet, you may want to tentatively spec-
ify the FTP routines that you used on a previous project.
The high-level design generally should not specify particular classes. For example, it can say, “We’ll have
a tax form where the user can fill in customer data describing their fuel tax payments and usage,” but it
wouldn’t define a
Customer class and dictate how that class should store fuel tax information. It also
would not lay out the exact user interface for this form. Both of those tasks come later.
At this stage, it is extremely important that you make the application’s different systems as decoupled as
you can. There should be as little interaction among these major systems as possible, while still getting
the job done. Every interface between two systems is a point where the teams building the systems must
communicate, negotiate, and debug differences in code.
If the systems have only a few well-defined interactions, the teams can work through them relatively
quickly. If many of the systems interact in lots of different ways, the teams will spend a considerable
amount of time trying to understand each other’s code, negotiating (sometimes arguing) over interfaces,
and debugging those interfaces, leaving less time to actually write code. A good decoupled design lets a
developer focus on the code within the system he or she is developing, while spending a minimum of
time studying the details of the other systems.
Lower-Level Design
During successively lower levels of design, developers add additional detail to the higher-level designs.
They start sketching out exactly what is needed to implement the high-level systems that make up the
application.
While building the lower-level designs, you should be able to flesh out the interfaces between the appli-
cation’s systems. This is the real test of whether the higher-level designs defined cleanly separated major
systems. If the systems must interact in many poorly defined ways, you may need to step back and
reconsider the high-level design. If two systems require a large number of interactions, then perhaps
they should be combined into a single system.
At the same time, you should try to make each system small enough so that a team of four or five devel-
opers can implement it. If a system will require more than five developers, you should consider breaking
it into two or more subsystems that separate teams can implement independently.
Some of the design patterns described in Chapter 7, “Design Patterns,” can help you separate closely
coupled systems. For example, the mediator pattern helps separate two closely related systems so devel-
opers can work on them separately. Instead of having code in the one system directly call code in the
other, the code calls methods provided by a mediator class. The mediator class then forwards requests to
the other system.
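As a rough Visual Basic sketch of that arrangement, the classes below stand in for two application systems; OrderSystem, BillingSystem, and their methods are invented for illustration and are not part of any real design.

    ' Sketch of the mediator idea: System 1 (OrderSystem) talks only to the
    ' mediator, and the mediator forwards the request to System 2 (BillingSystem).
    Public Class BillingSystem
        Public Sub RecordCharge(ByVal customerId As Integer, ByVal amount As Decimal)
            ' ... System 2's real work goes here ...
        End Sub
    End Class

    Public Class Mediator
        Private m_billing As New BillingSystem()

        ' System 1 calls this; only the mediator knows how System 2 is invoked.
        Public Sub OrderCompleted(ByVal customerId As Integer, ByVal total As Decimal)
            m_billing.RecordCharge(customerId, total)
        End Sub
    End Class

    Public Class OrderSystem
        Private m_mediator As Mediator

        Public Sub New(ByVal mediator As Mediator)
            m_mediator = mediator
        End Sub

        Public Sub FinishOrder(ByVal customerId As Integer, ByVal total As Decimal)
            ' System 1 never references BillingSystem directly.
            m_mediator.OrderCompleted(customerId, total)
        End Sub
    End Class

If BillingSystem later changes how it records charges, only the mediator’s OrderCompleted method needs to change; OrderSystem is untouched.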
Figure 2-1 shows the situation graphically. The top of the figure shows two systems that are tied closely
together. On the bottom, a mediator sits between the systems to loosen the coupling between them.
Figure 2-1: The mediator design pattern can help decouple two closely related systems.
Note that the mediator in Figure 2-1 complicates the design somewhat. In this example, the mediator
doubles the number of arrows required, so the amount of communication in the application has
increased. The reason this design can help is that the mediator isolates the two systems from each other
so that the code in one doesn’t need to know the details of the code in the other. This lets the two sys-
tems evolve independently and lets developers work on them separately.
System 1 can make calls to the mediator in more or less whatever way it wants, and those calls need not
change, even if the code in System 2 that eventually handles the calls changes. Similarly, System 1 can
handle calls by the mediator in a consistent way, and hopefully remain unchanged even if the code in
System 2 that starts those calls must change.
At some point, however, the mediator must be able to translate calls by System 1 into services provided
by System 2, and vice versa, so the application does require more code. The point isn’t to reduce the
amount of code, but to decouple the two systems so that developers can focus on their own systems and
not worry about the other system’s details. The new design is more complicated, and, generally, extra
complication decreases the application’s reliability. By separating the two systems, however, the media-
tor makes building the two systems easier, and that should increase their reliability. You should still con-
sider the added complexity before you start throwing mediators between every pair of classes in the
application; otherwise, the cure may be worse than the disease.
There are several other design patterns that you can use to help make designs easier to implement. For
example, the adapter and facade design patterns provide an extra interface layer for a system to help insu-
late the rest of the application from changes in the system. For more information on design patterns, see
Chapter 7.
Implementation

The implementation phase is when programmers write code to build the application. It is what most
programmers think of when they think about software development.
This is clearly an important part of application building because the application cannot exist unless
someone eventually writes some code. However, implementation is only one part of a much larger pro-
cess, and you cannot ignore the other phases of development.
The application is not useful if it doesn’t solve the users’ problems, so it is essential that the application’s
initial ideas, purpose, and requirements make sense to the users. If the users need a toaster, it does no
good to build a microwave oven.
If the high- and lower-level designs don’t further the requirements, developers again end up building
the wrong application. The users ask for a toaster, but the design is for a grill.
If the high- and lower-level designs don’t specify a system that can actually be built, the application
fails. The users ask for a toaster, the design specifies a nuclear-powered toaster with a multilingual voice
interface that uses artificial intelligence to optimize toast-doneness while minimizing time and power
consumption. The users end up with a pile of metal and circuitry that can burn bread, but can’t make
toast in any language (and is probably radioactive).
Even after the application is properly built, it must be tested. Every non-trivial application contains
bugs. Eventually, the users will discover that the toaster cannot handle whole-wheat bread or bagels. It
is less expensive and gives the application an impression of higher quality if you can discover and fix
these problems before the application ships.
Finally, the users’ needs change over time. The users will discover that they need to toast scones and
other breads that were not considered in the original specification and design. Eventually, bugs will also
pop up in even the most thoroughly tested application. If you do not provide maintenance down the
road, you limit the application’s lifespan.
Implementation is an important part of application development and, for programmers at least, it’s usu-
ally the most fun, but it’s far from the beginning or end of the story.
Testing
In most projects, testing begins only after programmers have written a lot of code. In extreme cases, testing
only starts after the application has been fully assembled.
It is a well-known fact that bugs are easiest to find and remove when they are detected as soon as possi-
ble after they are inserted into the project. If a bug is introduced to the code early in the implementation
phase, it can be very difficult to find if you wait until the entire application is built to look for it.
Unfortunately, programmers have a natural tendency to assume that their code is more or less correct
and needs little or no testing. This makes some sense because, if you knew that your code contained a
bug, you would fix it. When you spend a lot of time writing a routine and handling all of the possible
errors you can think of, it’s easy to underestimate the possibility of errors that you didn’t think of.
Because of this tendency to think that code is correct, programmers often don’t spend as much time test-
ing as they should, particularly early on when bugs are easiest to detect and fix. To counter the program-
mers’ naturally optimistic approach to testing, you may need to coax, cajole, or coerce developers into
testing thoroughly enough.
Programmers often have blind spots when testing their own code. It’s easy to look over a piece of code
and see what you think is there rather than what is really there. It’s also easy for a programmer to think
the code handles all possible cases when it really hasn’t. If you knew about a situation that wasn’t han-
dled in your subroutine, you would modify the code to handle it.
The human brain is amazingly good at seeing what it thinks is there instead of what is really there. For
an example, see how hard it is to decipher the following sentence: It’s dwnrght embrrssngly esy t rd txt
evn if yu rmv mst of th vowls! If it’s that easy for your brain to fill in missing vowels in a sentence, how
hard can it be to fill in one or two missing details in program code?
To ensure that code is tested reasonably objectively, you may want to have programmers review each
other’s code rather than (or in addition to) their own. The tester can review the code without preconcep-
tions acquired while writing the code. The tester is more likely to see what is really there and to think of
unusual cases that the code’s original author may not have.
Probably the most common types of testing used by larger projects are unit testing, system testing,
regression testing, and user acceptance testing.
Unit Testing
Unit testing examines “code units.” These are regarded as the smallest usable collections of code that can
be meaningfully executed. The goal is to exercise every path through the code in a variety of different
ways to flush out any bugs in the code.

Unfortunately, many subroutines are difficult to test until lots of other routines are written, too. Before
you can test the code that prints customer orders, you need the code that creates a customer order. That
code may depend on other routines that load customer and inventory data from the database, and forms
that let the user build the order.
Testing these routines as a unit is fine as far as it goes, and may very well uncover bugs. However, when
you test multiple routines as a single unit, it’s often difficult to ensure that you cover all of the paths
through the code in each of the routines. Though you may be able to create a customer order and print it,
it may be difficult to test all of the possible combinations of ways to create a customer order with various
data available and printed in every possible way. While testing these routines as a unit, you may test the
paths among the routines, but you may leave paths within the routines untested.
Suppose each of the routines contains about 10 possible paths of execution. To exercise each routine’s
paths of execution separately, you would need to run about 10 tests for each of the five code routines
(print, make order, load customer data, load inventory data, and load the form that builds the order),
making a total of about 50 tests.
Now, suppose you want to test the paths through the routines as a single unit. If the paths of execution
within the routines are independent, then there are 10^5 = 100,000 possible paths of execution to test.
Ideally, you would test all of those paths, but in practice, even the most pessimistic developer will have
trouble writing that many test cases.
In addition to incomplete coverage, another drawback to testing groups of routines as a unit is that it can
be difficult to locate and fix a bug when you do encounter one. If a piece of data isn’t printing correctly,
the error may have been caused by the printing routine, the code that generates a customer order, the
customer or inventory data in the database, or the forms that let you build the order.
You can minimize all of these problems if you test each routine separately as soon as it is written. This
sort of routine-level test lets you more easily follow each of the routine’s paths of execution. It lets you
localize bugs more precisely to the routine that contains them. It also lets you test the code while it is still
fresh in your mind, so it will be easier to fix any bugs that you do find.
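As a small illustration (the routine and its shipping rules are invented, not taken from any real project), a routine-level test that exercises each execution path immediately after the routine is written might look like this:

    Imports System.Diagnostics

    Public Module ShippingSketch
        ' Hypothetical routine with three execution paths: invalid input,
        ' free shipping for large orders, and a flat rate for everything else.
        Public Function ComputeShipping(ByVal orderTotal As Decimal) As Decimal
            If orderTotal <= 0 Then Throw New ArgumentException("orderTotal must be positive")
            If orderTotal >= 100 Then Return 0
            Return 7.5D
        End Function

        ' Routine-level test written right after the routine: one check per path.
        Public Sub TestComputeShipping()
            Debug.Assert(ComputeShipping(150) = 0, "Free-shipping path failed")
            Debug.Assert(ComputeShipping(50) = 7.5D, "Flat-rate path failed")

            Dim threwAsExpected As Boolean = False
            Try
                ComputeShipping(-1)
            Catch ex As ArgumentException
                threwAsExpected = True
            End Try
            Debug.Assert(threwAsExpected, "Invalid-input path failed")
        End Sub
    End Module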
Two well-known facts about bugs are that they come in swarms and that fixing a bug is likely to create a
new bug. When you find one bug, you are likely to find others in related code. If you catch a bug right
after writing a routine, chances are good that any related bugs are also inside the routine and you can
catch them all relatively easily. If you test a group of routines as a unit and find a bug, it is more likely
that additional bugs will be in the unit’s other routines, so they will be harder to find and fix.
Part of the reason that fixing bugs is likely to create new bugs is that you usually fix bugs long after
you (or someone else) originally wrote the code that contains the bug. The code is no longer fresh in
your mind, so you need to re-learn the code before making changes. If you don’t understand the code
completely, your changes are likely to create new bugs. Routine-level testing helps with this problem by
letting you test the code right after you write it, so it’s still fresh in your mind.
Routine-level testing is not always easy. Sometimes the code to test a routine can be twice as long as the
routine itself, but routine-level testing is always worth the extra effort. Even if you don’t find any bugs in
the code, you will have some evidence to believe that the code works, rather than an unsubstantiated
feeling of general correctness.
Of course, there’s always the possibility that the test code will contain errors. Sometimes the testing code
will incorrectly report that a routine contains a bug when it doesn’t. While trying to fix the imaginary
bug, you will probably discover the bug in the testing code, but that can be easier if you keep in mind
that it’s a possibility. (Of course, you can’t assume every bug is in the testing code either! Assume the
bug is real, but keep your eyes open.)
System Testing
System testing examines how the application’s systems interact. It includes overall tests of the entire
application to see how all the major components work together and how they interact with each other. It
incorporates tests that exercise all of the use cases and other provisions defined in the requirements
phase. System testing should also cover as many typical and unusual cases as possible to try to discover
unexpected interactions among the application’s code units.
You can think of the system tests as unit tests on a larger scale. During a unit test, you try to test all of
the routines within an application system. During system tests, you try to test all of the units (and their
routines) within the entire application.

System testing has the same drawbacks as unit testing. It is difficult to cover all possible paths through
the application’s code at this level. When you do detect a bug, it is often difficult to determine which
routines caused the problem. After you find a bug’s incorrect code, it can be difficult to remember how
the code is supposed to work, making it harder to fix the bug and increasing the chances that you will
introduce new bugs.
To avoid these problems, you should try to catch bugs at a lower level if possible. Ideally, you catch most
bugs during routine-level testing when it’s easiest and safest to fix them. A few bugs will slip through to
the unit testing level where it’s harder to diagnose and treat bugs. Hopefully, very few bugs will remain
at the system testing level where debugging is most difficult and most error-prone.
Figure 2-2 shows the different levels of testing graphically. Routine testing exercises execution paths
within a routine (represented by thin arrows). Unit testing exercises paths of execution between routines
in a unit (represented by thicker dashed arrows). System testing exercises the application as a whole,
testing paths of execution between units or subsystems (represented by very thick arrows).
Figure 2-2: Routine, unit, and system testing exercise paths of execution at different levels.
Even if you perform extensive lower-level testing, however, system-level testing is still necessary. While
most bugs have small scope and are contained within a single routine, some only occur when many differ-
ent routines interact. You will not find those bugs until you perform higher-level unit and system testing.
Regression Testing
Regression testing is the process of retesting the system to try to detect changes in behavior caused by
changes to the code. When you modify a piece of code to fix a bug, improve performance, or otherwise
change the way in which the code works, you should test the application thoroughly to see if the change
had any unintended consequences.
Changing code (whether to fix bugs or to add new features) is likely to add new bugs to the code. This is
partly because you are unlikely to remember exactly how the original code was written when you are
modifying the code.
To reduce the chances of introducing new bugs, you should carefully examine the code surrounding the
change you are going to make to ensure that you understand the code in context. Ideally, you should
also understand any routines that the nearby code calls, and any other routines that call this code. Only
if you understand the code’s position within the rest of the application do you have a reasonable chance
of making a change without introducing errors.
After making the change, you should run as many regression tests as possible to look for changes that
you may have made to the application’s behavior. You can buy software-testing tools that can automati-
cally perform these tests for you. Typically, these products perform a series of actions, either scripted or
captured by watching you perform the same steps in the user interface, and then they look for specific
results. For example, a test result might require that a certain text box contains a specific piece of text.
Some of these programs can capture images and verify that the image matches one saved in an earlier
“training run.”
These sorts of regression testing tools provide some evidence that the application hasn’t been broken by
your change, at least at a high level. Unfortunately, they don’t offer any help in locating and fixing a bug
if you detect one. Your only real defense is to perform regression testing every time you make a change.
Then, if a test identifies a bug, you have reason to believe that the problem is with the change you made
most recently.
Sadly, most developers don’t work this way. Often, a programmer will be given a long list of bugs to fix
and enhancements to make. The programmer makes a bunch of changes to the code and then performs
some tests to see if anything is broken. If the tests are cumbersome, and particularly if they are not auto-
mated by a regression testing tool, the programmer may make a lot of changes before testing them. If a
test discovers a bug, it can be hard to figure out which change caused the bug.
Some development teams run automatic regression testing nightly. That’s a good practice to ensure that
a change doesn’t cause a bug that slips past developers, but it’s not enough. If you modify the code, you
should not wait until the nightly tests. Instead, run the regression tests after you make each change.
Although regression testing is a reasonable way to look for changes at a high level, an even better way to
test code changes is to re-run the module- and unit-level tests that you used when you originally wrote
the code. Then, if you detect a bug, the tests are more likely to localize it, so it will be easier to find and fix.
Unfortunately, most developers do not save any test code that they may have written for a routine. The
weakest developers don’t write code to test their new routines in the first place. Better developers write
tests for their code, but then throw the test code away after the routine is working properly. The best
developers write test code and then save it forever. Admittedly, it can be inconvenient to save the huge
number of test routines that may be necessary to test every routine in a large application. In some pro-
jects, the test code can be twice as large as the application itself. However, disk space is cheap and your
time is valuable. It’s much better to save old test routines and never need them than it is to skip testing a
code modification and then spend hours tracking down a new bug later in the project’s development.
To make finding test routines easier, store them in files with names similar to the code they test. For
example, you might store test routines for the code in module
SortRoutines.bas in the file
Test_SortRoutines.bas. Keep the test code in the same sort of source code control system you use to
store the application’s code, back it up on a regular schedule, and treat it as a valuable asset, not an
afterthought.
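The sketch below shows what that pairing might look like; the BubbleSort routine is invented so the example is self-contained, and in practice SortRoutines.bas and Test_SortRoutines.bas would be separate files kept together under source code control.

    Imports System.Diagnostics

    ' Contents that would live in SortRoutines.bas.
    Public Module SortRoutines
        Public Sub BubbleSort(ByVal values() As Integer)
            For i As Integer = 0 To values.Length - 2
                For j As Integer = 0 To values.Length - 2 - i
                    If values(j) > values(j + 1) Then
                        Dim temp As Integer = values(j)
                        values(j) = values(j + 1)
                        values(j + 1) = temp
                    End If
                Next
            Next
        End Sub
    End Module

    ' Contents that would live in Test_SortRoutines.bas.
    Public Module Test_SortRoutines
        Public Sub TestBubbleSort()
            Dim values() As Integer = {3, 1, 2}
            BubbleSort(values)
            Debug.Assert(values(0) = 1 AndAlso values(1) = 2 AndAlso values(2) = 3, _
                "BubbleSort failed on a simple three-element array")
        End Sub
    End Module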
User Acceptance Testing
During user acceptance testing, key users examine the application to see if it satisfies the requirements.
Usually, your customer champion and other key users perform these tests. The testers should work
through all of the use cases defined during the requirements gathering phase. They should also try out all
of the typical and unusual cases they can think of in every possible combination, trying to find circum-
stances that were not anticipated by the developers and that are not handled correctly by the program.
In some projects, you may be able to make the final version of the application available for every user to
experiment with during their free time before the final installation. If you do this, however, ensure that
the version of the application that you make available is either finished, or nearly finished, and contains
no known bugs. You can disable certain features if they are not yet available, but don’t give the users a
fragile application. A buggy version will only give the users a bad impression of the quality of the pro-
ject, so they will not accept the final version as wholeheartedly. It’s okay to give the customer champion,
management patron, and other project team members a preliminary version that may have some prob-
lems, but keep that version away from the general user population.
In fact, it’s generally a good idea to give the customer champion and other team members access to pre-
liminary versions as soon as they are available so you can get feedback early enough that it is still use-
ful. This idea is discussed further in the later parts of this chapter that explain prototyping strategies
such as Iterative Prototyping.
Deployment
During the deployment phase, you make the application available to the users. Sometimes, this is a mas-
sive rollout to every user at the same time. Often, particularly for applications headed to a large user
audience, the application is installed in staged rollouts where several groups of users get the application
at different times.
Visual Studio’s ClickOnce Deployment system makes installing Visual Basic .NET applications a lot eas-
ier than installation has been in the past, but it’s still not trivial. In a large installation, there will proba-
bly be some users who will have difficulty installing the application themselves. At a minimum, some
users will be bothered by the long time needed to download the .NET Framework necessary to run
Visual Basic .NET applications, particularly if they access the network using a slower modem.

You should at least provide explicit instructions telling the users what to expect during download and
installation. It’s even better if the project team or the company’s IT department can install the application
for the users.
A particularly troublesome issue with large installations these days is the number of operating system
versions that the users may be running. If your application is built in-house for a particular company,
you may be able to require that all of the users have the same operating system and software environ-
ment. If the project is for distribution to external customers, or if the company cannot require that all
users have the same operating system, you may discover that the application works differently for dif-
ferent users.
Sometimes it’s not even the operating system or the software installed that causes a problem, but the
combination of operating system, software installed, and software previously installed. I recently did
some maintenance work on a project that was running incorrectly on only a few installations out of
hundreds of customers. It turned out that those computers had at one time run a previous version of
Microsoft Excel and then upgraded to a newer version. The application was confused by Registry
entries left behind by the earlier version, and I had to add code to take this unusual situation into
account.
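As one hedged illustration of the kind of defensive check that situation calls for (the key path and function name are invented here, and this is not the actual fix used on that project), the code below refuses to trust a Registry key unless it exists and actually contains values:

    Imports Microsoft.Win32

    Public Module RegistrySketch
        Public Function ExcelSettingsLookValid() As Boolean
            Using key As RegistryKey = _
                    Registry.LocalMachine.OpenSubKey("SOFTWARE\Microsoft\Office\11.0\Excel")
                ' A missing key (Nothing) or an empty key suggests leftover or
                ' incomplete installation data that the application should not rely on.
                Return (key IsNot Nothing) AndAlso (key.ValueCount > 0)
            End Using
        End Function
    End Module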
It can be difficult to test a new application on every possible operating system. Some of the possibilities
include Windows 95, Windows 98, Windows 98 SE, Windows ME, Windows 2000, Windows NT (various
versions), Windows XP, and the soon-to-be-released version code-named Vista. You could also consider
the Home and Professional editions of some of these, plus non-Microsoft operating systems such as
Linux, Solaris, Mac OS X, and Unix, which can use Mono (www.mono-project.com) to run .NET applications.
It’s unlikely that you’ll have a dozen or so computers sitting around with all of these operating systems
on them. It’s even unlikely that you can run more than a couple of them on virtual computers such as
Microsoft’s Virtual PC. It’s still less likely that you’ll have the time to install each of these from scratch,
so that you know the application doesn’t depend on anything else that may happen to be installed on
the developers’ machines.
Usually, development teams test the application on one or two operating systems, and then require that
users have those operating systems. They may say that the application might work on others, but that
they make no guarantees. They may then handle users on other operating systems on a case-by-case
basis, possibly adding new operating systems to the list of those supported over time.
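One inexpensive precaution is to have the application record the user's environment when it starts, so unsupported or unexpected configurations are easy to identify when problems are reported later. A minimal Visual Basic .NET sketch follows; where the information goes (a log file, the Event Log, or a support screen) is up to you:

Public Sub LogEnvironment()
    ' Record the operating system and runtime version for later support use.
    Dim os As OperatingSystem = Environment.OSVersion
    System.Diagnostics.Debug.WriteLine( _
        String.Format("OS: {0}; CLR: {1}", os.VersionString, Environment.Version))
End Sub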
No matter which operating systems you decide to support, it’s generally worth trying to install and run
the application on a “clean” system that has just been built with nothing installed on it. During the clean
installation, it’s not unusual to discover that some of the code depends on another application such as
the Microsoft Office products Excel, Word, Access, or Outlook.
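If the application does turn out to depend on an Office product, a simple startup check can turn a confusing failure on a clean machine into a clear message. This sketch is not taken from any particular project; it just shows one way to test whether Excel is registered before trying to automate it:

Public Function ExcelIsAvailable() As Boolean
    ' Excel registers the "Excel.Application" ProgID when it is installed,
    ' so a missing ProgID is a reasonable sign that Excel is absent.
    Return Type.GetTypeFromProgID("Excel.Application") IsNot Nothing
End Function

Calling a check like this before the first automation attempt lets you display a friendly warning instead of letting a COM exception reach the user.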
Support
Many programmers consider application development to be over after the code is written and tested,
but there is still plenty of work to do later in support of the application. Support includes recording,
tracking, and fixing bugs. It includes helping users solve problems and explaining how to use the appli-
cation. It may include providing training materials and classes. Depending on the lifespan of the appli-
cation, support can also include major future releases to fix bugs and implement new features.
You can reduce the amount of work needed for some of these support tasks by providing good online
help and documentation. If the users can use the help and documentation to easily figure out how to do
something, they won’t need to call you and ask.
Many developers think that you should write documentation at the end of the project after all of the
code has been written. One advantage to this approach is that the application doesn’t change much after
this point, so you can write documentation and help files without worrying about making extensive
changes later.
Unfortunately, the end of application development is often rushed, so projects are frequently released to
the users with inadequate documentation and help. It’s better if documentation begins as soon as possi-
ble, even though some of it must be rewritten as the application changes.
Documentation can start with the requirements and can describe the purpose of the application on a
high level. As the user interface is prototyped and revised, the documentation can be revised to match.
In some organizations, the programmer who implements a feature writes the first version of its user doc-
umentation. Other writers (dedicated documentation specialists if they are available) should modify the
documentation as necessary to ensure that the user can understand it. The customer champion and pos-
sibly other users should also review the documentation to verify that it makes sense to them.
ClickOnce Deployment makes installing new applications easier than it has been in the past. It makes
installing new versions of an existing program even easier. If the new version of the program was writ-
ten using the same version of the .NET Framework, then you only need to install the application itself
and you don’t need to copy the Framework onto the user’s computer again. Instead of downloading and
installing several hundred megabytes, you may only need to download a few hundred kilobytes.
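ClickOnce also exposes its update check programmatically through the System.Deployment.Application namespace, so a deployed application can look for a new version on its own schedule. The following sketch assumes a ClickOnce-deployed Windows Forms application and a project reference to System.Deployment.dll; in practice you would add error handling and probably ask the user before restarting:

Imports System.Deployment.Application
Imports System.Windows.Forms

Module UpdateHelper
    Public Sub UpdateIfAvailable()
        ' Only ClickOnce-deployed applications can query their deployment information.
        If Not ApplicationDeployment.IsNetworkDeployed Then Return

        Dim deployment As ApplicationDeployment = ApplicationDeployment.CurrentDeployment
        If deployment.CheckForUpdate() Then
            deployment.Update()
            ' Restart so the user picks up the new version immediately.
            Application.Restart()
        End If
    End Sub
End Module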
Just because installing new releases is relatively easy, that doesn’t mean you should take advantage of
ClickOnce to inundate the user with a flood of updates and fixes. Releasing new versions too often gives
the users the (probably correct) impression that the application is unstable and insufficiently planned,
even if the releases add newly requested functionality rather than fixing bugs.
New releases also make the users learn about whatever new features are in the release, taking time away
from their normal jobs. Users who are forced to read about new features every week may come to dread
the new releases. They may decide that the application is adding to their workload even if it only takes 20
minutes per week to read about new features and the application saves the users hours every day. After a
while, users may stop reading about the new enhancements, so they can’t take advantage of them any-
way. What’s the point of having frequent releases if the users don’t know about the new features?
I once worked at a company where everyone had Windows XP installed and automatic updates enabled.
We probably averaged about one mandatory update per week, and sometimes a flurry of updates would
appear on the same day. Even though many of these updates were in response to new Internet viruses
and other threats, the frequent updates did nothing to enhance our perception of the quality and stabil-
ity of Windows XP.
To protect your application’s appearance of reliability, avoid unnecessary releases. Save up minor bug
fixes and enhancements and release them all at once in a bigger release. For example, you might have a
major release once per year, and a minor release six months after the major release.
You can further improve the perception of stability if you schedule the releases far in advance. You may
occasionally need to release a new version quickly to implement urgently needed features or to protect
against a new Internet threat, but it does nothing for your image to push out mysterious security
updates with no warning on a regular basis.
Lifecycle Models
The previous sections described the basic tasks of application development:
❑ Idea formulation and refinement
❑ Team building
❑ Requirements gathering
❑ Feasibility analysis
❑ Design
❑ Implementation
❑ Testing
❑ Deployment
❑ Support
Any application development effort must address each of these basic tasks in some manner. Some of
these clearly take a particular position in the life of the project. For example, deployment and support
must happen after the code is written and tested. Other tasks, such as team building and documentation,
can occur throughout the project’s lifetime.
Different lifecycle models arrange the remaining tasks in different orders to provide various levels of pre-
dictability and flexibility. These models focus on the smaller subset of tasks shown in the following list:
❑ Idea formulation
❑ Requirements gathering
❑ Design
❑ Implementation
❑ Testing
❑ Deployment and support
The following sections describe some of the most common lifecycle models. They explain how the mod-
els arrange the basic tasks and discuss how these models apply to Visual Basic .NET development.
Throwaway
A throwaway application is one that you plan to use for only a short time. Usually, this kind of application
is a developer tool intended to make some specific task easier, and then it is discarded.
This type of application generally doesn’t require a very formal development process. You don’t need to
spend days on idea formulation, requirements gathering, and different levels of design. You just sit
down and pound out some code. If it doesn’t work, you adjust it slightly until it gets the job done, and
then you throw it away.
There’s nothing wrong with this development model, as long as it really is a small project that you won’t
need later. One of the few dangers to this model is that the tool may turn out to be more useful than you
originally planned. If you let other developers use it and it becomes popular, then other users may not
let you throw it away. You may be stuck maintaining a program that you spent very little time design-
ing. The program may be difficult to debug, and it may be difficult to add new features that the
impromptu user population demands. If the project lives long enough and is in enough demand, you
may need to rebuild it using a more elaborate development model.
If you have a small project that you only need for a specific task, go ahead and use the throwaway
model. Just be certain that you really can throw it away when you are finished.
Waterfall
In the Waterfall model, the major phases of development follow each other in the following strict order:
❑ Idea formulation
❑ Requirements gathering
❑ Design
❑ Implementation
❑ Testing
❑ Deployment and support
Control flows from one stage of development to the next in a simple predictable fashion, similar to the
way water falls down a series of ledges in a cascade. Figure 2-3 shows the process graphically.
Because the control in Figure 2-3 actually falls down a series of steps, this model is also sometimes
referred to as the cascade model or the staircase model. Some developers also call this the lifecycle model,
although that term rather misleadingly implies that other development models don’t last throughout a
project’s life.
The team works on idea formulation until it can think of no other useful ideas, and then moves on to
requirements gathering. The team then writes requirements documents that completely specify the system
and moves on to the design phase only after the system is completely described. Based on the specification,
the project architects create the application's high- and low-level designs. When the design is complete,
programmers implement the design. After the code is written, the team members perform system-level test-
ing and user acceptance testing. Finally, the project is released to the users and long-term support begins.
One advantage of the Waterfall model is that it is predictable. The requirements specification indicates
exactly what the application will do. Later, designers and programmers can refer to the requirements to
see exactly what they should be doing.
Figure 2-3: In the Waterfall lifecycle model, control falls from one step to the next like water in a
cascade.
The high- and low-level designs explain exactly how the application will do its job. Later, programmers can
refer to the design documents to understand how to write the code they are supposed to be implementing.
Testing is simpler because all of the code is available. You don't need to worry about one routine depending on another that hasn't been written yet.
Another advantage to the Waterfall model is that it makes it easier to write online help, documentation,
and training materials. After the specification and design phases, the application’s user interface is fully
defined. At that point, technical writers can begin converting the specification and design into materials
that the users and training department can work with. Because this can begin relatively early in the pro-
ject, there’s plenty of time to write high-quality documentation and training materials.
Although the Waterfall model has a few advantages, it is not as flexible as the other models described
shortly. If the project is small and extremely well-defined, the Waterfall model can help you get the job
done fairly quickly with little time wasted iterating through different feature, design, and implementa-
tion approaches.
However, if the initial requirements or designs are incorrect, it may be difficult to fix the application and
produce a usable result. Once you move past a stage of development, team members assume that its results are correct, and moving back to make changes can be difficult.
For example, suppose you have designed a system that downloads product data from a Web site, uses it
to update a local database, and then generates reports using the data. After the design is complete, dif-
ferent teams of developers can start working on the code that downloads the data, inserts it into the local
database, and generates the reports. Because the specification describes exactly what the program needs
to do, and the design explains exactly how the code will do it, the groups can work independently.
Now, suppose you discover a problem in the way the data is stored in the local database. Perhaps the
local database cannot easily store the image data that you were planning to download from the Web.
You could remove the image data from the application’s requirements, but that will affect all three of the
groups working on this part of the application. The download group may already have written code that
downloads all of the data, including the images. The report-generating group may have already written
code to produce reports complete with images. These groups can modify their code to meet the altered
requirements, but it may require updating a lot of code, and that can introduce a bumper crop of bugs.
The Waterfall model doesn’t cause this sort of change. It was a mistake in the design (selecting that
database technology) that gave rise to false assumptions (that you could store images in the database)
and caused the real problem. By assuming that each stage of development is complete before moving on
to the next, the Waterfall model just makes it more difficult to fix this sort of problem.
Serial Waterfalls
You can represent multiple application releases with the Waterfall by connecting Waterfalls one after
another, as shown in Figure 2-4.
To build multiple releases with the Waterfall model, simply feed the results of one Waterfall into the
beginning of another. Take the results of the previous iteration into account when you start the idea for-
mulation phase of the next.
In addition to any bug fixes and feature enhancements that you have identified with the current version
of the application, you can also consider how the previous development efforts worked out. If you
found that lots of bugs were introduced during the design phase, spend extra time on this round's design phase. If system testing turned up a lot of bugs that were difficult to track down, encourage the programmers to spend more time on routine and unit testing.
Figure 2-4: You can use multiple serial Waterfalls to model multiple releases.
Overlapping Waterfall
In practice, the different phases of development in the Waterfall model often overlap, as shown in Figure 2-5.
Figure 2-5: The phases in a Waterfall model may overlap.
After you have defined the application’s basic theme, some team members can start working on the
main requirements, while others continue with idea formulation and refinement. When pieces of the
requirements are finished, application architects can begin high-level design for those pieces. Once the
designs for a system are finished, programmers can start writing the code.
Waterfall with Feedback
The biggest disadvantage to the Waterfall model is the assumption that each phase is complete and cor-
rect before the next phase begins. If you really can finish each phase correctly, the Waterfall model lets
you move steadily and confidently forward with no looking back. When problems do arise, however,
they can be very difficult to fix.
The Waterfall model with feedback gives you a formal path for letting one phase feed back into the pre-
vious one. The project generally flows down the cascade, but sometimes you can paddle against the cur-
rent to fix a mistake in the previous step.
Figure 2-6 shows this model graphically. The basic flow is downward and to the right, but, if necessary,
you can work back upstream (dashed arrows).
As is the case in a kayak, however, moving back against the current is difficult. All of the project’s team
members usually work on the same phase of development at the same time. If you discover a flaw intro-
duced in the previous phase, other developers may already be working under the flawed assumption.
To fix the previous phase, many team members may need to redo their work to satisfy the new assump-
tions. That gives them a chance to introduce a whole new set of bugs and, if you don’t catch them right
away, you will need to fix them during the next phase of development.
Figure 2-6: The Waterfall model with feedback lets one phase correct a previous phase.
Repairing mistakes in phases earlier than the previous one can be even more difficult. For example, you
may discover a mistake in the requirements while you are writing code. You’re merrily whittling away
at a hand-crafted hash table when you realize that a hash table won’t solve the user’s problems after all.
Instead, you need a priority queue.
Fixing the problem at this point is much harder. Everyone thought that the application needed a hash
table and was working under that assumption. Now you need to move all the way back to the require-
ments gathering phase and modify the application’s requirements to require a priority queue.
Changes to the requirements will cause additional changes to the design. You need to discard the design
for the hash table and design a priority queue. You’ll need to change both the high- and low-level
designs. You’ll also probably need to update the user interface design to let the user interact with the
new queue.
The new design will also impact other subsystems that were supposed to add and remove items from
the hash table. The design for those subsystems must now be modified to work with a priority queue
instead.
Finally, the changes to the designs will have a large impact on the implementation. The code that imple-
ments the hash table must be removed and replaced by new priority queue code. Any other routines that
interacted with the hash table must now be updated to follow the new design that uses the priority
queue. As usual, those modifications provide additional opportunities for developers to add new bugs
to the code.