Chapter 15: Workflow modeling
Workflows are the last automation assets that need to be examined as background for
the specification of a process implementation methodology. Essentially, a workflow is
another representation of a business process. The workflow representation or model is
different from the business process models discussed in Chapter 5 because workflows
must incorporate the technical and workforce information needed for implementation and
deployment. To differentiate the two process representations, the original representation is referred to here as the business process, while the workflow representation is called a workflow.
15.1 Evolution
Although workflow-like techniques have been in use for many years, the designation of
workflow as a distinct technology is relatively recent. As such, it is useful to (1)
investigate how and why the technology started; (2) develop an understanding of the
current state of the art (including the availability of products that incorporate the
technology); and (3) identify the specifications of any standards that have achieved general industry acceptance.
15.1.1 Genesis
Workflow technology had its start in the image processing and document management
technologies. Many business procedures involve interaction with paper-based
information, which can be captured as image data and used as part of an automation
process. Once paper-based information has been captured electronically as image data,
it often must be passed among a number of different participants, thereby
creating a requirement for workflow functionality.
The emphasis on business process reengineering (BPR) in the early 1990s created a need for general workflow techniques and contributed to their development as a separate technology. Because the initial emphasis of BPR was on
process definition and not implementation, the pace of development for workflow
technology has been relatively slow. That situation is beginning to change as the
emphasis shifts from process development to process implementation.
Defining a process is not very useful unless some means exists to monitor and track the
operation of the process and determine how well it is working. That is true for mostly
manual processes as well as highly automated ones. Although manual monitoring and
tracking can be utilized, they are not very efficient and the tendency is always to
eliminate that function when time gets short. Utilizing the automated means for
monitoring and tracking that workflow provides can greatly assist in evolving to an
efficient and effective process.
15.1.2 Standards
The Workflow Management Coalition (WfMC) was formed in 1993 by a number of vendors who produced products addressing various aspects of workflow technology. The
purpose was to promote interoperability among heterogeneous workflow management
systems. A workflow management system consists of workflow products configured in a
manner that enables implementation of one or more specified business processes.
Currently, the WfMC remains the only standards body involved with workflow products and services. It produces standards that address the following areas:
§ Application program interfaces (APIs) for consistent access to workflow
management system services and functions;
§ Specifications for formats and protocols between workflow management
systems themselves and between workflow management systems and
applications;
§ Workflow definition interchange specifications to allow the interchange of
workflow specifications among multiple workflow management systems.
The standards activities are still in an early stage, with only a small portion of the needed
standards addressed in a manner that can provide needed guidance to product vendors
and workflow implementers. Nevertheless, the existence of a standards body is an
important indication of the viability and strength of the technology.
Because the WfMC standards generally are concerned with the partitioning of workflow
functionality and the interfaces between different functions, further discussion of that
aspect of the standards is deferred until Section 15.5 when configuration models are
discussed.


15.2 Model views
To facilitate the discussion and present the diversity of information required to
understand the use of workflow technology in the implementation of business processes,
this chapter utilizes several different models. Each model is oriented toward a specific
aspect of workflow design and operation. The following basic models are considered:
§ The dynamic model illustrates the basic operation of a workflow in performing
a business process.

§ The design model represents a business process suitable for implementation
and deployment. The parts of the design model are the workflow map, data
access, and control.
§ The configuration model defines the interaction of the different components of
a workflow management system. Each component eventually is realized by
one or more products. The parts of the configuration model are the
reference model, the logical model, and the physical model.
Although the three models are discussed separately to avoid unnecessary complexity,
they are closely related and the definitions of all the models must be compatible for the
resultant implementation to function properly. The interrelationships are not discussed
explicitly because of the complexity involved. However, it should be evident from the
discussion how the models interact.
As will be seen, some of the models use concepts and models from previous chapters as
an integral part of their definition. That also illustrates the close relationships among all
the models that have been presented.


15.3 Dynamic model
The basic dynamics of a workflow implementation are shown in Figure 15.1. The four
major components of this model are:

Figure 15.1: Workflow dynamics.
§ A workflow instance that contains the data relevant to a given invocation of the
workflow;
§ Tasks, each of which performs some specific aspect of process functionality;
§ Business rules that determine when and where the next task is performed;
§ A monitor that determines if the workflow is progressing according to the
specified parameters.
When a business event occurs, a workflow instance is created. The purpose of the
instance is to contain the status and instance-specific data that will be associated with the handling of the business event. Initially, the workflow instance contains very little
information. As the solution to the business event progresses, additional information is
added. A workflow instance exists until the final response to the defining business event
is provided. At that time, the characterization of the workflow is complete and can be
used for statistical purposes in the management of the process.
The workflow instance is known by a number of different names, including folder,
container, courier, and token. In addition, the implementations may vary considerably,
ranging from a single structure to a complex set of structures. To avoid confusion, the generic term workflow instance is used throughout this discussion. When reviewing the characteristics of different products, it is necessary to be aware of this diversity in naming conventions and implementation methods.
The information in the workflow instance is examined by the business rules defined for
the workflow. Depending on the instructions contained in the rules and the data elements
and values contained in the workflow instance, a specific task is scheduled and routed to
a role performer assigned to that task. After the task is performed, the workflow instance
information is updated and the procedure continues until the business event is satisfied.
When a workflow instance requests a specific task to perform a function, an instance of
that task is formed that is then associated with the workflow instance. In that way, all the
work necessary to respond to a given business event can be connected with that
particular business event, even if the same task is needed for many different workflows
or multiple instances of the same workflow.
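As a rough illustration of these dynamics, the sketch below shows a workflow instance being advanced by a business rule until its business event is satisfied. It is a minimal, hypothetical example; the class, rule, and task names are invented and do not come from any workflow product or from the WfMC specifications.

```python
# Minimal sketch of the dynamic model: a workflow instance is advanced by
# business rules until the originating business event is satisfied.
# All names here are illustrative, not drawn from any workflow product.

class WorkflowInstance:
    def __init__(self, business_event):
        self.business_event = business_event
        self.data = {"status": "created"}   # instance-specific data
        self.task_instances = []            # task instances tied to this event
        self.complete = False

def claims_rule(instance):
    """Business rule: pick the next task and role from the instance data."""
    if instance.data["status"] == "created":
        return ("validate_claim", "clerk")
    if instance.data["status"] == "validated":
        return ("approve_claim", "supervisor")
    return None  # business event satisfied

def dispatch(task_name, role, instance):
    """Create a task instance, route it to a role performer, record its effect."""
    instance.task_instances.append((task_name, role))
    # A real engine would queue the task for a performer; here the effect on
    # the instance data is simulated directly.
    instance.data["status"] = {"validate_claim": "validated",
                               "approve_claim": "approved"}[task_name]

def run(instance, rule):
    # The engine loop: evaluate the rules, dispatch the selected task, repeat
    # until no further task is required.
    while True:
        step = rule(instance)
        if step is None:
            instance.complete = True
            return instance
        dispatch(*step, instance)

result = run(WorkflowInstance("new insurance claim"), claims_rule)
print(result.task_instances)  # [('validate_claim', 'clerk'), ('approve_claim', 'supervisor')]
```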
The operation of the workflow is continuously monitored to ensure that it is performing
satisfactorily. If a problem is encountered, the instance data are updated appropriately.
For example, if the monitor discovers that an assigned task has not been completed
within the time period indicated by the business rules, the instance information is
updated to reflect that fact. When the business rules are next used to examine the
instance information, they then may cause a task to be executed that notifies a
supervisor of the problem. If allowed by the business rules, it is possible to have multiple
tasks simultaneously executing in the same workflow.
As part of its function, the monitor also collects statistics on the defined metrics of all the instances of the given workflow process, including the associated task instances. Those
statistics are used to determine how effectively the process implementation (workflow) is
functioning and what, if any, changes should be considered. That statistical function
operates across all instances of the workflow, as contrasted with the monitoring function,
which operates within a single instance.
The dynamics of the workflow are incorporated in a workflow engine that is part of the
overall workflow management system. The engine provides the means for creating the
workflow instance, interpreting the business rules, executing the tasks, and monitoring
the overall operation of the workflow. Because of the differences in commercial products,
the specifics of individual workflow engines are not discussed here. Instead, the
discussion of workflow engines emphasizes the modeling efforts needed to utilize any
workflow engine.


15.4 Design model
The workflow design model is an operational representation of the business process
being implemented. It consists of three parts. The workflow map shows the relative
sequences of the tasks that will be utilized by the workflow. Other information is
considered as part of the map and must be keyed to the diagram, including:
§ The rules that determine which tasks will be selected for execution;
§ The transactions called by each task and the location of the functionality
needed by each transaction;
§ The workforce units that will fill the roles used to perform the tasks. The characteristics of the workforce units are not considered part of the map information.
The data access model part of the design model indicates what data are contained in the
workflow instance and what data are contained in databases external to the workflow
system. Both types of data are accessible by the tasks.
Finally, the control model determines how the workflow progresses.
15.4.1 Workflow map

As a part of the workflow process model, a workflow map is produced. An example of
such a map is shown in Figure 15.2. Although the business process map and the
resultant workflow map may look similar, there are important differences. The workflow
map is the result of the design phase and is not an exact duplicate of the business
process map. To emphasize that point, the terminology employed is different. Instead of a sequence of process steps, a sequence of tasks is utilized.

Figure 15.2: Workflow map structure.
As defined in Chapter 13, a task usually consists of one or more dialogs assigned to the
same role. A task can be implemented via a custom development, a purchased product,
a legacy system, or a combination of those methods.
Instead of the information flows used in the business process representation, the need to
interact with databases and processing functionality is indicated by the use of
transactions. The location of the functionality accessed by each transaction also must be
indicated. Without that information, the workflow cannot be implemented. Transactions
result from the specification of the actions for each of the dialogs in a task.
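The map in Figure 15.2 is a diagram rather than code, but its content can be pictured as a simple data structure: tasks with their roles and transactions (and the location of the transaction functionality), plus rule-governed transitions. The sketch below is illustrative only; all task, role, and server names are invented.

```python
# Illustrative encoding of a workflow map: tasks, the roles that perform
# them, the transactions each task calls (with the location of the needed
# functionality), and rule-governed transitions between tasks.

workflow_map = {
    "tasks": {
        "take_order":   {"role": "clerk",
                         "transactions": [("create_order", "order_server")]},
        "check_credit": {"role": "credit_analyst",
                         "transactions": [("credit_lookup", "legacy_mainframe")]},
        "ship_order":   {"role": "warehouse",
                         "transactions": [("reserve_stock", "inventory_server")]},
    },
    # Each transition names the business rule that decides whether it is taken.
    "transitions": [
        ("take_order", "check_credit", "always"),
        ("check_credit", "ship_order", "credit_approved"),
        ("check_credit", "take_order", "credit_rejected"),
    ],
}

def next_tasks(current, rule_results):
    """Return the successor tasks whose transition rules evaluate true."""
    return [dst for src, dst, rule in workflow_map["transitions"]
            if src == current and rule_results.get(rule, rule == "always")]

print(next_tasks("check_credit", {"credit_approved": True}))  # ['ship_order']
```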
15.4.2 Data access model
The data access model defines how data are stored and accessed. It is illustrated in
Figure 15.3 and shows how tasks can utilize data from either the workflow instance or
databases external to the workflow management system. As will be discussed, not all the
data in a workflow instance can be accessed by the tasks. Some data are available only
to the workflow control mechanism. In general, that is not a problem because the tasks
performing process functionality do not usually require data of that type.

Figure 15.3: Data access structure.
As shown in Figure 15.3, a task can obtain data from either the workflow instance or an
external database. Data also can be transferred between tasks using either construct. In
general, it is better to keep the information in a workflow instance relatively small. That
avoids the need for routing the same information to a number of potential role
performers. Depending on the number of potential role performers, a large amount of data in a workflow instance can require considerable network and processing resources.
The use of pointers in the workflow instance to locate required data in an external
database usually is the best method of transferring large amounts of data between tasks.
The first task stores the data on a database outside the workflow management system
but stores key pointers to those data in the workflow instance.
The second task uses the key pointers passed to it in the workflow instance to retrieve
the application data it needs from the external application database. As will be shown,
any data needed by the workflow management system business rules to make decisions
must be placed in the workflow instance. If desired, task functionality can be defined that
transfers data between external databases and the workflow instance.
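The pointer-passing convention just described can be sketched as follows. The example is hypothetical; the dictionaries simply stand in for a workflow instance and an external application database.

```python
# Sketch of keeping the workflow instance small: the first task stores bulk
# application data in an external database and passes only a key pointer
# through the workflow instance; the second task dereferences the key.

external_db = {}          # stands in for a database outside the workflow system
workflow_instance = {}    # only small, routing-relevant data lives here

def task_capture_document(doc_id, scanned_pages):
    external_db[doc_id] = scanned_pages            # large payload stays external
    workflow_instance["doc_key"] = doc_id          # only the pointer is routed
    workflow_instance["page_count"] = len(scanned_pages)  # small, rule-usable value

def task_index_document():
    pages = external_db[workflow_instance["doc_key"]]     # dereference the pointer
    return f"indexed {len(pages)} pages"

task_capture_document("claim-0042", ["page1.tif", "page2.tif"])
print(task_index_document())
```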
To help determine where needed data should be placed, the classification of workflow instance data is presented next, along with a brief explanation of the data involved.
15.4.2.1 Internal control data
The internal control data identify the state of individual workflow processes or task
instances and may support other internal status information. The data may not be
accessible to the tasks, but some of the information content may be provided in
response to specific commands (e.g., query process status, give performance metrics).
Multiple workflow engines used to implement a given process also can exchange this
type of information between them.
15.4.2.2 Workflow-relevant data
Workflow-relevant data are used by a workflow management system to determine
particular transition conditions and may affect the choice of the next task to be executed.
Such data potentially are accessible to workflow tasks for operations on the data and
thus may need to be transferred between tasks. Multiple workflow engines used to
implement a given process may also exchange this type of information between them.
The transfer may (potentially) require name mapping or data conversion.
15.4.2.3 Workflow application data
Workflow application data are not used by the workflow management system and are
relevant only to the tasks executed during the workflow. They may convey data utilized in
task processing, instructions to the task as to how to proceed, or instance priority. As with workflow-relevant data, they may need to be transferred (or transformed) between
workflow engines in a multiple-engine environment.
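The three categories can be pictured as a partitioned instance structure, as in the sketch below. The field names are invented; only the three-way classification follows the discussion above.

```python
# Illustrative partitioning of a workflow instance into the three data
# categories discussed above. Only the field names are invented.

from dataclasses import dataclass, field

@dataclass
class WorkflowInstanceData:
    # Internal control data: engine-private state, normally not visible to tasks.
    internal_control: dict = field(default_factory=lambda: {
        "state": "running", "current_task": None, "started_at": None})
    # Workflow-relevant data: examined by business rules to choose transitions.
    workflow_relevant: dict = field(default_factory=lambda: {
        "credit_approved": None, "priority": "normal"})
    # Workflow application data: passed through to tasks, ignored by the engine.
    application: dict = field(default_factory=lambda: {
        "doc_key": None, "customer_name": None})

instance = WorkflowInstanceData()
instance.workflow_relevant["credit_approved"] = True
print(instance.workflow_relevant)
```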
15.4.3 Control model
The workflow control model consists of the business rules responsible for determining
the sequencing, scheduling, routing, queuing, and monitoring of the tasks in the workflow
process. Each area is briefly described, and examples of rules for the continuing
example are provided at the end of this section.
15.4.3.1 Sequencing
Sequencing is the determination of the next task or tasks that must be performed to
respond to the business event. Multiple tasks can be scheduled concurrently if
determined by the appropriate rules. Although much of the sequencing information is
shown on the workflow map, additional information may be needed to determine when
the conditions for task execution have been met. For example, assume two tasks are
executing in parallel and the map shows that both tasks require a common task as the
next in sequence. The rules must determine if execution of the common task must wait
until both parallel tasks are completed or if it can be started when one or the other
predecessor task completes. Such rules can get somewhat complex for large maps.
One of the more interesting differences between the business process map and the
associated workflow process map is the amount of parallelism that can be obtained. The
business process map is very sequential, while the workflow map can have many tasks
that are able to execute in parallel. Whether it is desirable to take advantage of that
situation is up to the workflow designer and depends on a number of factors, including
the size and sophistication of the workforce.
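The join decision described above reduces to a small rule, sketched below with invented task names: an AND-join waits for all predecessor tasks, while an OR-join starts the common task as soon as any predecessor completes.

```python
# Sketch of the join decision: should a common successor task start when any
# predecessor finishes (OR-join) or only when all have finished (AND-join)?

def ready_to_start(join_type, predecessor_status):
    """predecessor_status maps each predecessor task to True when complete."""
    if join_type == "AND":
        return all(predecessor_status.values())
    if join_type == "OR":
        return any(predecessor_status.values())
    raise ValueError(f"unknown join type: {join_type}")

status = {"verify_identity": True, "verify_address": False}
print(ready_to_start("AND", status))   # False: wait for both parallel tasks
print(ready_to_start("OR", status))    # True: start when one completes
```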
15.4.3.2 Scheduling
Scheduling is the determination as to when the next task in the sequence should be
executed. That can vary from immediately to weeks or even months in the future. The
schedule can be static or dynamically determined from the workflow instance data or an
external event calendar. Although most of the time, subsequent tasks can be executed
immediately, there are a number of situations in which the execution must wait until a
predetermined time. Such situations would include the availability of equipment or other resources, a time negotiated between the service provider and the customer, and the
desirability of performing certain tasks, such as switching electric lines when electric
usage is light (e.g., at 2 A.M.).
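A scheduling rule of the kind described amounts to computing an earliest execution time from the instance data or a calendar. The sketch below is illustrative; the field and calendar names are assumptions, not part of any product.

```python
# Sketch of a scheduling rule: the next task runs immediately unless the
# instance data or an external calendar pushes execution to a later time
# (e.g., switching electric lines during a low-usage window at 2 A.M.).

from datetime import datetime, time, timedelta

def scheduled_start(now, instance_data, calendar):
    # A time negotiated with the customer takes precedence.
    if "appointment" in instance_data:
        return instance_data["appointment"]
    # Otherwise defer low-impact work to the next off-peak window.
    if instance_data.get("defer_to_off_peak"):
        window = calendar["off_peak_start"]              # e.g., 02:00
        candidate = datetime.combine(now.date(), window)
        return candidate if candidate > now else candidate + timedelta(days=1)
    return now   # most tasks execute immediately

calendar = {"off_peak_start": time(2, 0)}
print(scheduled_start(datetime(2024, 1, 15, 9, 30),
                      {"defer_to_off_peak": True}, calendar))
```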
15.4.3.3 Routing
Once a task has been selected and scheduled, the next need is to determine the workstations or task performers that are to process the task for a given business event (workflow instance). There are many methods that can be used to
determine the routing, and the routing business rules specify which ones are to be used
in a specific situation. When the task is routed to more than one individual, the first
individual to select that task is assigned the task and it is unavailable to any others in the
identified group. The most popular routing methods are as follows:
§ Route to the same individual who performed the previous task in the
given workflow instance;
§ Route to a specific named individual;
§ Route to a list of named individuals;
§ Route to any individual in a given geographical location;
§ Route to any individual who can perform the specified role;
§ Route to an individual who can perform the specified role and who has
the shortest queue or who is next in line for a new task (round-robin);
§ Any combination of those.
Although most products allow an individual who is assigned a task to reroute it to another
permissible performer, the workflow designer may restrict that ability to prevent misuse.
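Several of the routing methods listed above can be combined in one rule. The sketch below, using invented performer data, restricts candidates to those who can fill the required role and then selects the one with the shortest queue.

```python
# Sketch of a routing rule combining two of the methods listed above:
# restrict candidates to performers of the required role, then pick the
# one with the shortest work queue. Performer data are hypothetical.

performers = {
    "alice": {"roles": {"clerk"}, "queue_length": 4},
    "bob":   {"roles": {"clerk", "supervisor"}, "queue_length": 1},
    "carol": {"roles": {"supervisor"}, "queue_length": 0},
}

def route(required_role):
    eligible = {name: p for name, p in performers.items()
                if required_role in p["roles"]}
    if not eligible:
        raise LookupError(f"no performer can fill role {required_role!r}")
    # Shortest-queue selection; a round-robin counter could be used instead.
    return min(eligible, key=lambda name: eligible[name]["queue_length"])

print(route("clerk"))   # 'bob'
```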
15.4.3.4 Queuing
The queue of tasks available to a given individual can have several characteristics,
depending on the capabilities of the products utilized and the rules established by the
workflow designer. Queues may be defined to be first in/first out (FIFO) or last in/first out (LIFO), with tasks arranged by priority within either discipline. In this type of queue, the task performer would not be able to select the next task. The task would be presented automatically, using the defined rules, when the performer becomes available. This
type of queue is known as a push queue, because it pushes work to the performer.

In another case, the queue might be able to be viewed by a prospective performer, who
could select any task deemed appropriate. For knowledge workers, this is probably the
most used method. This type of queue is a pull queue, because the performer pulls work
from the queue as needed.
Any form of queue could be used with an automated task performer, although the push
queue probably is utilized most often, because it eliminates the need for the task logic to
determine which task is to be selected.
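The push/pull distinction can be captured in a few lines. The sketch below is illustrative only; a real workflow product would apply its own queuing rules.

```python
# Sketch of the two queue disciplines described above. A push queue hands
# the performer the next item chosen by the defined rules (here, FIFO by
# priority); a pull queue lets the performer choose any visible item.

import heapq

class PushQueue:
    def __init__(self):
        self._items = []
        self._seq = 0                          # preserves FIFO among equal priorities
    def add(self, task, priority):
        heapq.heappush(self._items, (priority, self._seq, task))
        self._seq += 1
    def next_task(self):
        return heapq.heappop(self._items)[2]   # engine decides; performer does not

class PullQueue:
    def __init__(self):
        self._items = []
    def add(self, task):
        self._items.append(task)
    def view(self):
        return list(self._items)               # knowledge worker browses the queue
    def take(self, task):
        self._items.remove(task)
        return task

push = PushQueue()
push.add("approve_claim", priority=1)
push.add("file_report", priority=2)
print(push.next_task())                        # 'approve_claim'
```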
15.4.3.5 Calendaring
Calendaring is the definition of events based on calendar units (days, weeks, months,
years). Examples of calendar events are employee work schedules, server availability,
due dates, deadlines, holidays, and special sale periods. Sequencing, scheduling, and
routing operations are able to use calendar events in making the necessary decisions. A
monitor function, discussed in Section 15.4.3.6, also can utilize those events to
determine if problems exist with the operation of the workflow instance.
15.4.3.6 Monitoring
Using the specified rules, the monitoring function examines the values of the metrics
defined for the workflow. Depending on the conditions established in the rules, tasks that
indicate that some condition has occurred may be initiated or a database may be
updated with appropriate information that can be utilized later for reports or queries.
Metrics Some common metrics are as follows:
§ Total time expended for the instance to present;
§ Elapsed real time total for the instance;
§ Elapsed real time in each task in the instance;
§ Elapsed real time in each queue;
§ Elapsed active time in each task;
§ Elapsed active time in each queue;
§ Task throughput per unit time;
§ Resource utilization per unit time;
§ Queue length for each work station;
§ Number of each priority item in each queue.

Active time is time that counts against the workflow instance. For example, some workflows do not count traveling time as time assessed against the workflow instance. This type of
of metric is important when contractual obligations (service-level agreements)
differentiate between real and assessed time.
Alerts and alarms Based on the values of the metrics and the thresholds defined for
them, alerts and alarms determine if an alert task (no action required), an alarm task
(action required), or a recovery task should be scheduled and routed to locations
specified by the routing rules.
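The alert/alarm decision amounts to comparing each metric against its defined thresholds. The sketch below uses an invented metric name and threshold values purely for illustration.

```python
# Sketch of the alert/alarm decision: compare a measured metric against the
# thresholds defined for it and decide which kind of task to schedule.

thresholds = {
    "elapsed_hours_in_task": {"alert": 4, "alarm": 8, "recover": 24},
}

def monitor(metric, value):
    levels = thresholds[metric]
    if value >= levels["recover"]:
        return "schedule recovery task"
    if value >= levels["alarm"]:
        return "schedule alarm task (action required)"
    if value >= levels["alert"]:
        return "schedule alert task (no action required)"
    return "within limits"

print(monitor("elapsed_hours_in_task", 9))   # alarm: route per the routing rules
```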
Statistical information Statistics are developed for each metric, including those for
each instance, all workflows of a given type, all tasks, and all task performers.
Queries and reports Queries and reports request and receive information related to
any of the measured values given previously for an individual workflow instance,
including:
§ Total time expended to present;
§ Time until defined calendar or schedule event;
§ Total cost expended to present;
§ Any of the available statistics.
The time period over which the information is needed must be specified.


15.5 Configuration model
The configuration of a workflow management system refers to the way in which the
components of the system are defined and interconnected. The three parts of this model
are:
§ The reference model, which defines the system components and their
interfaces;
§ The logical model, which indicates how the components will be utilized to
accommodate a given workflow specification;
§ The physical model, which indicates how the components given by the logical model will be implemented using selected products in the environment in which they must function.
The structure of each model is defined in later sections. Additional information as to how
the models are developed in response to a given workflow process specification is
provided in Chapter 24, which deals with the workflow aspects of the process
implementation methodology.
15.5.1 Reference model
As a part of its standards activities, the WfMC has developed a workflow reference
model that defines the elements of a workflow management system and the interfaces
between them. Figure 15.4 is a pictorial description of the WfMC workflow reference
model, which consists of five components:
§ Process definition;
§ Workflow enactment service;
§ Administration and monitoring;
§ Workflow client applications;
§ Invoked applications.

Figure 15.4: Workflow reference model. (Source: WfMC.)
15.5.1.1 Process definition
The process definition component of the workflow reference model allows workflow
definition information (as defined by the process models) to be entered into the workflow
system and some amount of testing of the resultant workflow definition to be performed.
Simulation and other test procedures also are defined as a part of this component. The
process definition tool translates the process model information into a standard format
for input to the workflow enactment service.
15.5.1.2 Workflow enactment service
The workflow enactment service provides the run-time environment in which one or more
workflow processes are executed. It consists of one or more workflow engines that
cooperate in the implementation of a given process. The model assumes that the
engines of an enactment service are all from the same vendor and are allowed to
communicate using proprietary protocols. If engines (and hence enactment services) from multiple vendors are used to implement a process, the reference model requires
them to communicate via a standard protocol. The characteristics of a workflow engine
are described in Section 15.5.3.
The enactment service is distinct from the application and end-user tools used to process items of work. As such, it must provide interfaces to those components to form a complete workflow management system.
As can be seen from Figure 15.4, the workflow enactment service is the main component
of the model. All the defined interfaces are between it and the other components. There
are no direct interfaces between components of the model other than the ones shown.
15.5.1.3 Administration and monitoring
The administration and monitoring component contains the logic for administering and
monitoring the status and operation of the workflow. Queries and online status
information (dashboard display) would be a part of this component. Process and
workforce changes would not be functions of this component but would be contained in
the process modeling component.
15.5.1.4 Workflow client application
The workflow client application is the component that presents end users with their
assigned tasks according to the business rules (push, pull, and other custom rules). It
may automatically invoke functions that present the tasks to the user along with related
data. It allows the user to take appropriate actions before passing the case back to the
workflow enactment service and indicating its current state (e.g., completed, error
condition, reassigned to another performer).
The workflow client application may be supplied as part of a workflow management
system, or it may be a custom product written specially for a given application.
15.5.1.5 Invoked applications
There is a requirement for workflow systems to deal with a range of invoked applications.
It may be necessary, for example, to invoke an e-mail or other communications service,
image and document management services, scheduler, calendar, or process-specific
legacy applications. Those applications would be executed directly by the workflow
engine without having to utilize a workflow client.

15.5.2 Logical model
There are many ways that a business process can be implemented and deployed using
workflow techniques and products. The purpose of the logical model is to depict the
specific components that are used to implement the process of interest. The issues and
procedures involved in producing a design for a specific logical model are not considered
in this section, only the need for the model and its structure.
To develop a suitable workflow design for a given business process, it is necessary to
understand and incorporate the characteristics of the process as well as the capabilities
and characteristics of the workflow management system products being utilized. If, for
example, there is a need to utilize multiple workflow engines, any product considered for
the function must be able to provide that ability.
Figure 15.5 illustrates the example of a help-desk process. One enactment service consists of four cooperating workflow engines (all assumed to be from the same vendor, so multiple enactment services are not needed). The engines are used to support the
user roles as follows:
§ One engine is assigned to the east help desk;
§ One engine is assigned to the west help desk;
§ One engine is assigned to the clerks regardless of location;
§ One engine is assigned to all of the other users (service technicians,
marketing representatives, and experts).

Figure 15.5: Logical configuration model.
Each help desk engine needs to communicate with the miscellaneous engine and the clerk engine, but the two help desk engines do not have to communicate with each other; once assigned, users cannot migrate between engines.
The scheduler needs to be accessed by three of the four engines. The clerk process
fragment does not require any scheduling. The clients of the same three engines need to
access the test and diagnostic legacy applications. If those applications are accessed by
a task defined outside the client, they also should be shown as a part of the logical
diagram, because they would have to be made available to the workstations running the clients and associated tasks.
Supervisors and managers are considered to be a part of the group to which they are
assigned.
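As an illustration only (this is not any vendor's configuration format), the engine and role assignments of Figure 15.5 could be recorded as follows.

```python
# Illustrative record of the logical configuration in Figure 15.5: one
# enactment service, four cooperating engines, the roles each supports,
# and which engines must intercommunicate.

logical_model = {
    "enactment_service": "help_desk",
    "engines": {
        "east_help_desk": {"roles": ["help_desk_east"]},
        "west_help_desk": {"roles": ["help_desk_west"]},
        "clerks":         {"roles": ["clerk"]},
        "miscellaneous":  {"roles": ["service_technician",
                                     "marketing_representative", "expert"]},
    },
    # Help-desk engines talk to the clerk and miscellaneous engines,
    # but not to each other; users do not migrate between engines.
    "engine_links": [
        ("east_help_desk", "clerks"), ("east_help_desk", "miscellaneous"),
        ("west_help_desk", "clerks"), ("west_help_desk", "miscellaneous"),
    ],
    # The clerk process fragment needs no scheduling or legacy access.
    "external_access": {
        "scheduler": ["east_help_desk", "west_help_desk", "miscellaneous"],
        "test_and_diagnostic_legacy": ["east_help_desk", "west_help_desk",
                                       "miscellaneous"],
    },
}

print(logical_model["external_access"]["scheduler"])
```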
15.5.3 Physical model
Once the logical architecture has been determined, the physical architecture can be
designed. The development of the physical architecture is similar for most types of
system development. Although some aspects are unique to workflow, the procedure and
information contained do not differ materially.
Essentially, the physical architecture starts with the logical architecture components
along with the automated activities, human interface modules, custom worklist
management modules, and any external interfaces to existing components (applications)
needed. External interfaces could include those utilized for calendaring, task scheduling,
and workforce management. Also included in the physical architecture specification is
the network topology that supports the distributed environment, including protocols,
server configuration and characteristics, and data location and conversion.
Specific products also are assigned at this step but are omitted in this discussion
because of the desire not to be tied to any vendor’s products. Figure 15.6 illustrates a
possible physical configuration for the logical model developed in Section 15.5.2. The
physical configuration should be detailed enough to serve as an implementation and
deployment guide.

Figure 15.6: Physical configuration model.
Because the details of the physical configuration depend on the infrastructure
components and configuration used by a given enterprise, it is not possible to consider
the physical configuration in any more detail. The most important aspect of the physical
configuration is that it is developed from the characteristics of the logical configuration
model.


15.6 Summary

Workflows are the natural method of implementation for business processes. They
maintain the same sense of process as that originally developed from a business
perspective while allowing the use of automation asset concepts such as reusable
software components. Workflows are also a natural mechanism to coordinate the manual
and automated activities needed to perform the identified functions of the original
process. However, business processes cannot be converted directly to workflows. A
considerable amount of design is necessary to produce the different models described in
this chapter. That aspect requires a robust implementation methodology.


Notes on Part II
All the required automation asset models have now been specified in a form suitable for
use by the process implementation methodology. The human interface is not included as
an asset model because it is more of a design technique than a model and is better
addressed as an integral part of the methodology. Although some of the relationships
between models have been discussed as part of the structure definitions, most of the
relationships are driven by the needs of the methodology and will be examined as they
occur.
In some cases, not all of an asset model is directly incorporated into the methodology.
That occurs because (1) to form a coherent asset model, the defining structure needs to
be broader than is strictly needed by the methodology (e.g., roles, scenarios), and (2)
some of the assets have a significant implication for the enterprise beyond that of the
methodology (e.g., C/S structure). The reader should not be surprised or confused by
this condition.
Other automation assets used by the enterprise are only indirectly utilized by the
methodology. Those assets are concerned mainly with the infrastructure and include
such areas as security, communications, and network operations. To keep this book
focused on the topic of process implementation, it was not possible to address those
assets in detail. However, the development of a robust infrastructure is critical to the
automation environment, and those assets need the same degree of concentration given to the assets needed by the process implementation methodology.
Although not always explicitly specified, all the automation assets used in the
methodology must be considered to be “real” enterprise assets and managed using the
functions as described in Part II. Without that degree of attention, the availability of the
assets is unreliable and the ability of the automation methodology to produce effective
process implementations is compromised.
The asset models developed in Chapters 8 through 15 have been demonstrated to work
effectively with the methodology. However, it certainly is possible to define alternative
models to address unique or unusual circumstances. Space constraints do not permit
that possibility to be explored here, but enough information has been presented
concerning the motivation and reasons behind the model structure to facilitate that type
of activity.
Finally, the author again would like to state his conviction that the automation
environment is one of the most critical elements in the ability of the enterprise to
compete in the future. Not only must it improve the efficiency of enterprise operation, it
must provide the ability to obtain competitive advantage. Without a clear understanding
of the assets utilized in this environment and a means to ensure that they are used to the
best advantage of the enterprise, the automation environment will not be able to provide
the required support.
Selected bibliography
Cichocki, A., et al., Workflow and Process Automation: Concepts and Technology, Boston: Kluwer Academic Publishers, 1998.
Engelhardt, A., and C. Wargitsch, “Scaling Workflow Applications With Component and Internet Technology: Organizational and Architectural Concepts,” Proc. 31st Hawaii Internatl. Conf. System Sciences, Wailea, HI, Jan. 6–9, 1998, pp. 374–383.
Jablonski, S., and C. Bussler, Workflow Management: Modeling Concepts, Architecture and Implementation, Boston: International Thomson Computer Press, 1996.
Jackson, M., and G. Twaddle, Business Process Implementation: Building Workflow Systems, Reading, MA: Addison-Wesley, 1997.
Koulopoulos, T. M., The Workflow Imperative: Building Real World Business Solutions, New York: John Wiley & Sons, 1997.
Ngu, A. H. H., et al., “Modeling Workflow Using Tasks and Transactions,” Proc. 7th Internatl. Conf. Database and Expert Systems Applications, Zurich, Sept. 9–10, 1996, pp. 451–456.
Workflow Management Coalition, Workflow Handbook 1997, ed. by P. Lawrence, New York: John Wiley & Sons, 1997.
Part III: Automation methodology
With the conclusion of Parts I and II of this book, the fundamental concepts and models
needed to permit the specification of a comprehensive methodology for
the implementation of business processes are now in place. Part III develops and details
the specification of a process implementation methodology (PRIME). In a sense, Part III
is an anticlimax since most of the difficult work has been done during the investigation of
the automation assets utilized by the methodology. What remains is the integration of
those components into a unified design that can be implemented in a timely and cost-effective manner.
It probably is evident from the discussions of some of the individual assets that PRIME is
not a conventional methodology. It contains a number of unique approaches and
constructs designed to address the issues inherent in the current business and technical
environment. From the preceding chapters, it should be clear that the current difficulties
of software development and deployment will not be solved by conventional approaches.
That has been amply demonstrated over the years by the large number of projects that have failed to deliver what they promised. They overran projected costs and completion times, or they did not provide customers with what was needed. Requirements usually were
vague and kept changing. Development staff turned over, and valuable information that
was not documented was lost. General management and project management that did
not understand the intricacies and special needs of software development in general and
business software in particular tried to apply techniques that simply were not adequate
for the task.
The basic problem is not the particular architecture, technology, tools, equipment, or
even management style utilized. Beyond the lack of a process orientation, the problem is the implementation methodology employed. Currently available methodologies contain structural problems and are deficient in one or more (usually all) of the following areas:
§ They do not adequately ensure that the requirements have been incorporated
into the completed product.
§ They leave too much to the discretion of the developers.
§ They have inadequate checks and balances.
§ They do not adequately address geographical distribution of equipment and
function (with or without Internet techniques).
§ They are not adequately integrated with the infrastructure components and
services.
§ They do not provide for the continuous involvement of the customer to ensure
usefulness of the completed implementation.
§ They do not address the integration of manual and automated tasks, which
must cooperate in the satisfaction of a business event.
§ They are not adequately coupled to the structure of the resultant automation
functionality.
§ They do not incorporate a reuse approach.
Assuming that management realizes that a process orientation is necessary to address
the fundamental requirement problems, it is still necessary to overcome the difficulties of
the available methodologies. That can be accomplished only by the design of an entirely
new methodology that has as its specific purpose the effective implementation of
business processes in a manner that addresses the listed deficiencies.
The PRIME approach addresses all the difficulties of the current methodologies but
admittedly introduces constructs that require a technology and culture stretch. Because
of the rapid advances in technology occurring on a large number of fronts, the
technology aspect is not an insurmountable problem for even the near future. Also,
“workarounds” can be utilized until the needed technology becomes available. The major
problem is cultural. All areas of the enterprise, from executive management to the
accountants to the software developers, have to learn a new way of working. The
commitment to that change must be obtained from all levels of the organization, from
senior management to the individual software engineers. Senior management must provide the leadership needed, including overseeing the changes in the strategic
planning process that are required to accommodate the difference in approach. The
inherent problems in accomplishing such change and obtaining the needed commitment
should not be underestimated.
If those difficulties can be overcome (and they can), the result of utilizing the PRIME
methodology will be implemented processes and associated automation software that
meet the needs of the users, that utilize state-of-the-art technology appropriately, that
provide results in a timely and cost-effective fashion, and that can be easily changed to
accommodate business and technological improvements.
Unfortunately, the discussion in Chapters 16 through 27 is essentially a circle. No matter
where the start is positioned, the entire circle must be traversed before all the concepts
and their interactions become clear. The author has tried to keep the discussion in as
logical a sequence as possible. However, the reader should be aware of the inherent
circle properties and not become frustrated when a concept is utilized before it can be
sufficiently motivated or detailed. The problem is self-correcting as the discussion
progresses.
Chapter List
Chapter 16: Overview of process implementation methodology
Chapter 17: Spirals
Chapter 18: Step 1: Define/refine process map
Chapter 19: Step 2: Identify dialogs
Chapter 20: Step 3: Specify actions
Chapter 21: Step 4: Map actions
Chapter 22: Step 4(a): Provision software components
Chapter 23: Step 5: Design human interface
Chapter 24: Step 6: Determine workflow
Chapter 25: Step 7: Assemble and test
Chapter 26: Step 8: Deploy and operate
Chapter 27: Retrospective



Chapter 16: Overview of process implementation methodology
Overview
A process implementation has several elements that must be developed in a coordinated
fashion and that must all interoperate to accurately reflect the requirements contained in
the original business process. Depending on the process being implemented, those
elements usually include the following:
§ Automated functions;
§ Manual procedures;
§ User interfaces;
§ Task management;
§ Workflow management.
A PRIME must be capable of converting a given business process into a set of those
elements, all of which are compatible with and utilize the available infrastructure for
support and common services. The infrastructure is specified and designed
independently of the process implementation. Its architecture and design are determined
by the services provided, the technology utilized, and the resources available.
An explicit differentiation must be made between an implementation methodology and a
project management methodology. Unfortunately, the two types of methodologies quite
often are confused. That results in a significant amount of uncertainty as to what is
included in an implementation methodology and what are the required skill sets of the
personnel involved.
An implementation methodology specifies how the conversion of requirements to an
implementation should be accomplished. It defines the procedures, models, structures,
and information flows utilized in the conversions as well as their relationships and when
and how they are to be utilized. A project management methodology produces a project
plan that includes estimates for resources and elapsed time. It also may identify the
individuals needed to implement the plan. After the conversion has started, a project
methodology determines if the conversion is proceeding according to the initial estimates and indicates the types of corrections to make if there is significant deviation from the
plan.
A project management methodology without an attendant implementation methodology
is devoid of meaning because there literally is nothing to manage. In a similar way, an
implementation methodology needs an associated project management methodology to
ensure that the conversion is utilizing enterprise resources in an effective way. The
definition and utilization of the two types of methodologies should be separate and
distinct because they focus on different problems and the personnel involved require
different skill sets.
The methodology developed in this presentation is an implementation methodology. It
focuses on the conversion from requirements to implementation and does not explicitly
consider management aspects, such as resource estimation. However, where
appropriate, some management topics are discussed when they add to the
understanding of the procedures involved. Management topics are explicitly identified as
such, in keeping with the desire to keep the two types of methodology separate.
It is assumed that both methodologies utilize the same repository. That ensures that the
project management methodology is able to track the progress and estimate resource
utilization of a given implementation and can determine at any step if changes should be
made in the development. Such information would be placed in the repository and cause
the implementation parameters to change accordingly. The repository also would be
used to contain the results of “what-if” alternatives (e.g., manual function versus
automated function implementation) requested through the project management
methodology. Each what-if alternative would use the implementation methodology as
needed but would be identified as a separate result associated with the original
development.
The specification and design of PRIME are a combination of art and engineering. The art
addresses the decisions as to the specification of the underlying concepts and models.
The engineering addresses the detailing of the models and the means to ensure that all
the components interoperate as needed. Both the art and engineering aspects must
work in harmony to provide a methodology that meets the requirements and produces the required results. Although it is possible from an intellectual perspective to separate
those two aspects, in practice it is difficult because they are closely intertwined. Thus, no
attempt is made during the development of the methodology to identify when one or the
other is being applied. The readers are invited to judge for themselves if the
differentiation is of interest.


16.1 Automation system
In previous chapters, the term enterprise automation environment was used to represent
the totality of the deployed and operational computing environment, including the
automation software (applications and infrastructure) and the computing and network
equipment employed in the enterprise. Because PRIME is one mechanism by which the
automation environment software is structured, it is necessary to define further the
environment and the associated elements used in its creation, which, for convenience,
are referred to as the automation system.
The automation system is defined by the model shown in Figure 16.1. The primary input
to the model is the business requirements as represented by the business process.
Secondary inputs are (1) enterprise standards that will affect some aspect of the
development or deployed process and (2) stakeholder needs, which represent the
interests of the different classes of individuals interacting with the deployed process. The
output is the implemented and deployed business process the users employ in
performing their work. The deployed processes consist of the business functionality and
associated workflow management that use the enterprise computing infrastructure for
common support services (e.g., security, communications). The set of all such deployed
processes is the enterprise automation environment.

Figure 16.1: Automation system model.
16.1.1 Enterprise automation environment
The enterprise automation environment can utilize many different forms and structures.
Although a process implementation architecture can be specified separately from the implementation methodology and vary with different functionality, that approach is
inefficient and may not allow the implementation methodology to meet the specified
requirements.
For those reasons, only one process implementation architecture is utilized in the
automation environment. It is designed to be an integral part of the PRIME approach and
thus support the fundamental requirements of the methodology. That results in the
following advantages for the automation environment:
§ Processes can easily interact with each other as required.
§ Reuse of the same components in multiple processes is facilitated.
§ The infrastructure needed to support the architecture need be developed
only once. Multiple support structures are not needed.
§ Asset management procedures are simplified.
§ The implementation methodology can be optimized to a single
implementation architecture instead of requiring multiple structures.
The one disadvantage to the utilization of a single architecture for all process
implementations is that operational efficiency may not be optimized in all cases. Because the cost of hardware for a given level of performance continues to decrease rapidly, the performance penalty usually is small in comparison to the advantages.
16.1.2 Process implementation architecture
The process implementation architecture is the aggregation of several of the concepts
and models presented in previous chapters. It is illustrated in schematic form in Figure
16.2. The architecture is based on a C/S structure with four explicit types of servers:
automation control, workflow, infrastructure, and business functionality/data. There is
also a client that provides the user interface.

Figure 16.2: Process implementation architecture schematic.
In conventional C/S structures, the user client and the automation control server are
coresident in a workstation, and the combination of the two is called the client. This
arrangement is called the fat client; the functionality of the automation control server
(which may be significant) is grouped and colocated with the user client. In an Internet-based C/S structure, the user client consists of only a browser and supported I/O devices
and the automation control server is located on a different platform in the thin-client
arrangement. An associated thin-hardware platform is sometimes called a network
computer (NC).
The process implementation architecture specification can support either type of client
arrangement. In fact, for a given process, either or both arrangements can be utilized,
depending on the needs of the users and the policies and procedures of the enterprise.
The purposes of the automation control server are to (1) determine the functionality
needed for a particular instance of the workflow process fragment assigned to the server
and (2) cause it to be executed. Depending on the specific design and implementation of
the workflow, the determination as to the appropriate functions can be made by the user,
the task manager, the workflow manager, or a combination of them. As indicated in
Figure 16.2, the automation control server has four types of components: human
interface, task manager, workflow manager access, and cluster store.
Each of the three methods of determining the needed functionality is represented by one
of the component types. The cluster store contains the dynamic data used by those
components.
The basic operation of the automation control server is as follows:
1. The workflow manager routes workflow instances to the server, where
it can be addressed using the functionality available to the server.
2. The workflow manager access component first determines how a
workflow instance will be selected from all the instances available to
the server. It may automatically select one of them according to
specified business rules or present a selected set to the user through
the human interface for manual selection. When a workflow instance
has been selected, the workflow manager access component indicates
that selection to the workflow manager.
3. Once the instance has been selected, the appropriate task manager is
invoked. Controlled by the information contained in cluster store, the
task actions cause functionality in the various servers to be executed (including user functions accessed through the human interface).
Changes in cluster store data are made continuously as server and
user functionality finish. That continues until the task manager has
caused all the needed functionality to be executed and the task
completes.
4. Depending on the specific circumstances of the instance, additional
tasks needed to address the instance may be invoked either in parallel
or in sequence. Each of the tasks operates as in step 3. Each task
invocation is communicated to the workflow manager by the workflow
manager access component.
5. The workflow manager access component indicates to the workflow
manager that a task has completed.
6. After an instance has been satisfied, the entire procedure starts over
again.
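The six steps can be summarized as a control loop. The sketch below is a simplification with hypothetical component and method names; it is not the PRIME specification itself.

```python
# Simplified sketch of the automation control server loop described in
# steps 1-6: select a workflow instance, invoke its tasks under control of
# cluster store data, and report task events to the workflow manager.
# Component and method names are hypothetical.

def automation_control_loop(workflow_manager, task_managers, cluster_store,
                            human_interface):
    while True:
        # Steps 1-2: instances routed to this server; pick one by rule or by user.
        candidates = workflow_manager.instances_for(server="this_server")
        if not candidates:
            break
        instance = human_interface.select(candidates)   # or an automatic rule
        workflow_manager.notify_selected(instance)

        # Steps 3-5: run the tasks the instance needs, updating cluster store
        # as server and user functionality completes.
        for task in instance.pending_tasks():
            workflow_manager.notify_task_started(instance, task)
            task_managers[task].execute(instance, cluster_store, human_interface)
            workflow_manager.notify_task_completed(instance, task)

        # Step 6: the instance is satisfied; start over with the next one.
        workflow_manager.notify_instance_complete(instance)
```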
16.1.3 PRIME
Now that the scope, role, and context of the implementation methodology have been
defined, the remainder of this book focuses on the design of PRIME. The start of the
design process is an examination of the various types of implementation methodologies
and their respective advantages and disadvantages. That allows selection of an
appropriate type as the foundation of PRIME.
16.1.3.1 Methodology types
The major types of implementation methodologies considered here are the waterfall,
evolutionary, build-and-test, and spiral methodologies. Although the ad hoc methodology
is probably the most prevalent one in current use, it is not considered because it cannot
be characterized readily. It varies with each individual and project and is not a true
methodology in the sense used in this section.
The advantages and disadvantages of each type of methodology are discussed briefly.
Waterfall methodology The waterfall methodology has been used since the early days
of software development. It partitions the development into a series of fixed,
concatenated stages, as shown in Figure 16.3. Each stage must be substantially completed before the next one is started. Although there can be feedback paths from
one stage to an earlier one, they usually are exception paths and are not used as a
normal part of the methodology definition. The methodology gets its name from the
sequential nature of the stages. The flow of the development is in one direction, much
like the flow of a waterfall.

Figure 16.3: Waterfall methodology.
The waterfall type of methodology has several advantages.
§ A large number of individuals are experienced in its use.
§ Organizations have used this methodology for a long time and are
comfortable with it.
§ It lends itself to very large projects.
The disadvantages are:
§ The customer does not see the implementation until it is finished, which can take a significant number of years for a large project.
§ It is assumed that all the requirements can be determined at the start
of the project. That usually is not the case, and the system
implementation may not reflect hidden requirements.
§ It is not oriented toward software reuse, process-based requirements,
and integration with an established infrastructure.
§ Requirements that change during the development can cause
considerable disruption to the development, requiring even more time
and resources to complete.
The waterfall methodology generally is not suited for the type of development needed to
provide process-based enterprise automation.
Evolutionary methodology Whereas the waterfall methodology has been used for
large projects, the evolutionary methodology has been used for small projects or parts of
larger projects. It starts with the development of some initial functionality that is then
validated by the customer. The next set of functions is then determined and
implemented. That continues until the entire product has been implemented. This type of
development is illustrated in Figure 16.4.

Figure 16.4: Evolutionary methodology.
The development starts by implementing functionality 1 as a core and then proceeds to
add functionality increment 2, and so on. The resultant structure changes with each new
evolution and is impossible to predict in advance. If some part of the functionality is
rejected by the customer, it must be redone until approval is obtained. The methodology
gets its name because the functionality available evolves over time rather than appearing
as a completed implementation.
There are some advantages of the evolutionary type of methodology.
§ The customer sees some of the functionality quickly and can
determine if the evolution is what is needed.
§ Requirements can be added and changed with little disruption to the
overall project.
§ It tends to keep projects small and more manageable.
The disadvantages are:
§ There is a strong tendency to skip the requirements phase and leap
right into the coding. That can cause a considerable amount of
redevelopment when the customer determines that much of the initial
development is not appropriate and needs to be redone.
§ The ability to add functionality decreases as the product evolves. That
occurs because no overall architecture is defined. The product
becomes more and more complex and difficult to understand.
§ It is not oriented toward software reuse, process-based requirements,
and integration with an established infrastructure. The development
tends to be closed in all its aspects.
§ The implementation tends to become the documentation, resulting in
insufficient documentation of the system construction and the reasons
that the evolution proceeded as it did. That can hamper continued
evolution as well as usage of the product.

The evolutionary methodology is also not suited for the type of development needed to
provide process-based enterprise automation.
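The increment-validate-rework cycle described above can be sketched as a simple loop. In the illustration below, the build and approval functions are hypothetical stand-ins for the development team and the customer; the rejection rule is arbitrary and exists only to show the rework path.

# Toy illustration of evolutionary development: increments are built one at
# a time, shown to the customer, and reworked until approved. There is no
# predefined overall architecture; the product is simply whatever has been
# accepted so far. The approval rule is an arbitrary stand-in.

def build(increment, attempt):
    return f"{increment} (attempt {attempt})"

def customer_approves(work):
    return "attempt 1" not in work  # hypothetical: first attempts are rejected

def evolve(increments):
    product = []
    for increment in increments:
        attempt = 1
        work = build(increment, attempt)
        while not customer_approves(work):
            attempt += 1
            work = build(increment, attempt)  # redo until approval is obtained
        product.append(work)
    return product

print(evolve(["core functionality", "reporting", "archiving"]))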
Build-and-test methodology The build-and-test methodology and the evolutionary
methodology have many similarities. The main difference is that the build-and-test
methodology utilizes a well-defined structure designed to accommodate all the
functionality needed. The functionality itself is developed in small increments, as it is in
the evolutionary methodology, but it is designed to fit into the overall structure. This is
illustrated in Figure 16.5.

Figure 16.5: Build-and-test methodology.
The overall structure, as indicated by the large rectangular box in Figure 16.5, is
designed in accordance with the initial set of requirements. The initial set of functionality,
depicted by box 1, is implemented so it fits into the defined structure. Then the next
functionality increment (box 2) is added, and so on. The overall structure remains
constant and does not change as the product evolves. Because an initial set of
requirements has been determined, it is less likely that the customer will reject the
functionality increments.
To some extent, the build-and-test methodology is a compromise between the waterfall
methodology and the evolutionary methodology. It keeps the initial requirements phase
and structured approach but permits the customer to get a feel for the product and its
functionality much more quickly than allowed by the waterfall approach.
The advantages of the build-and-test type of methodology are:
§ The customer sees some of the functionality quickly and can
determine if the evolution is what is needed.
§ Requirements can be added and changed with little disruption to the
overall project.
§ It tends to keep projects small and more manageable.
§ An initial set of requirements is determined up-front and helps guide
the overall implementation.
§ It is oriented toward software reuse, process-based requirements, and
integration with an established infrastructure.
§ Because an overall architecture is defined, the ability to add functionality
does not decrease as the product evolves.
The disadvantages are:
§ A relatively comprehensive structure that can accommodate the
functionality fragments needs to be developed and utilized up-front.
§ If the requirements should change materially, the structure needs to be
changed. That may be mitigated by certain structure designs that are
more flexible than others.
The build-and-test methodology is suited for the type of development needed to provide
enterprise automation in the current environment.
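One way to picture the difference from the evolutionary approach is a fixed structure, derived from the initial requirements, into which each functionality increment must fit. The sketch below is a minimal, hypothetical illustration of that idea; the interface and increment names are invented and are not drawn from Figure 16.5.

# Minimal sketch of the build-and-test idea: an overall structure is fixed
# up front from the initial requirements, and each functionality increment
# is implemented to fit that structure. All names are invented.

from abc import ABC, abstractmethod

class FunctionalityIncrement(ABC):
    """The fixed structure: every increment must conform to this interface."""

    @abstractmethod
    def name(self) -> str:
        ...

    @abstractmethod
    def execute(self, context: dict) -> None:
        ...

class OrderEntry(FunctionalityIncrement):
    def name(self):
        return "order entry"

    def execute(self, context):
        context["orders"] = ["order-1", "order-2"]

class OrderReporting(FunctionalityIncrement):
    def name(self):
        return "order reporting"

    def execute(self, context):
        context["report"] = f"{len(context.get('orders', []))} orders processed"

class Product:
    """The overall structure stays constant while increments are added."""
    def __init__(self):
        self.increments = []

    def add(self, increment: FunctionalityIncrement):
        self.increments.append(increment)

    def run(self):
        context = {}
        for increment in self.increments:
            increment.execute(context)
        return context

product = Product()
product.add(OrderEntry())      # increment 1 fits the predefined structure
product.add(OrderReporting())  # increment 2 added later; the structure is unchanged
print(product.run())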
Spiral methodology The spiral methodology is a type of build-and-test methodology in
that it formalizes that approach by defining a standard set of activities that are performed
in a repetitive fashion to construct both the initial and additional functionality. The use of
those standard activities is shown graphically by arranging them as quadrants of a circle.
Development proceeds by logically traversing the circumference of the circle and
performing each set of activities as they are encountered. As the process is repeated,
functionality is added (or improved with respect to requirements) with each complete
cycle. The ever widening radius of the circle being traversed—the outward spiral—
indicates the increase in knowledge of the development over time.
The spiral process is illustrated in Figure 16.6, in which the set of standard activities is
arranged by quadrant and consists of analysis, design, implementation, and evaluation
types. For each traversal of the spiral, all those activities are sequentially invoked.
Analysis is performed to determine what needs to be accomplished next, usually in the
form of some type of new or changed implementation. The necessary design work for
the implementation is performed and the implementation produced. The results are then
evaluated and the cycle starts over with the analysis for the next cycle. Each activity
depends on the ones before it as the development continues.

Figure 16.6: Spiral approach to development.

In theory, a spiral methodology can be thought of as a single procedure that guides the
development through one complete revolution and is reinvoked for each new cycle or
traversal. The only difference between successive revolutions is the additional
knowledge and detail available for the development. The spiral activities remain the
same.
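In code terms, one revolution of the spiral is one pass through the same four activities, with the accumulated knowledge growing on every cycle. The schematic sketch below illustrates only that control flow; the activity bodies are placeholders, not real development work.

# Schematic sketch of the spiral cycle: the same four activities are invoked
# in sequence on every revolution, and the accumulated knowledge grows with
# each one. The activity bodies are placeholders, not real development work.

def analysis(state):
    return {**state, "goal": f"objective for cycle {state['cycle']}"}

def design(state):
    return {**state, "design": f"design for {state['goal']}"}

def implementation(state):
    return {**state, "built": state["design"]}

def evaluation(state):
    return {**state, "evaluated": True}

QUADRANTS = [analysis, design, implementation, evaluation]

def spiral(cycles):
    knowledge = []  # grows with every revolution, like the widening radius
    for cycle in range(1, cycles + 1):
        state = {"cycle": cycle}
        for activity in QUADRANTS:  # one complete revolution of the spiral
            state = activity(state)
        knowledge.append(state)
    return knowledge

for revolution in spiral(cycles=3):
    print(revolution)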
Although the spiral methodology is an attractive notion, in actual practice the procedures
necessary to perform a given cycle vary considerably. The specific methodology design
for a spiral cycle must reflect not only the current detail level but also the type of
information being developed. That can be accomplished within the high-level context of
the spiral, but the details certainly will vary.
As an example, early in the development, the cycle of interest is concerned with defining
the initial set of requirements and the associated overall structure of the product. In a
subsequent cycle, the emphasis may be on the detailed design of the user interface
functionality. Obviously, the needs of those two cycles are somewhat different, even
though they go through the same basic quadrants of analysis, design, implementation,
and evaluation activities. One approach to accommodating the differences is to explicitly
define different types of spirals. The major categories of the spiral will be present in each
type, but the differences can be addressed in a more effective way.


16.2 Selected approach for PRIME
A spiral methodology, which is a type of build-and-test methodology, is suited for the sort
of development needed to provide enterprise automation in the current environment and
is the approach selected for PRIME. Some modifications to the basic methodology are
made to accommodate the needs of practical software development and adapt it to a
process implementation focus. Specifically, the use of multiple tailored spiral types is
specified to ensure that all aspects of the development are covered adequately.
16.2.1 Requirements
Because a spiral technique indicates only an overall methodology approach, the actual
design of the methodology must account for additional requirements. The PRIME
methodology has several such needs that must be considered:
§ Methodology driven by process definitions;
§ Reuse of business functionality;
§ Incorporation of legacy systems;
§ Use of a common infrastructure;
§ Use of a distributed C/S architecture;
§ Integration with enterprise long-term planning;
§ Utilization of prototypes to convey the evolving design to stakeholders;
§ Equal consideration of manual and automated functions;
§ Incorporation and integration of specifications for error handling and
recovery, online training and help, and performance monitoring;
§ Coverage of the operational phase of the life cycle as well as the
development phase.
Although PRIME is completely compatible with an object-oriented approach, it is not
necessary to utilize an object structure to make effective use of the methodology. In fact,
software developed according to object-oriented techniques can be mixed with software
developed in a more traditional fashion. That is required when legacy systems and
COTS products are combined in a given process implementation. The ability to combine
the different software architectures is one of the strengths of PRIME.
Those requirements, along with the selected spiral approach, form the basis of the
PRIME design.
16.2.2 Methodology design
PRIME is a nontraditional, process-driven, and workflow-realized approach to the
implementation of business processes. As necessary, individual tasks are developed
using a nonprocedural programming approach. As will be seen, PRIME utilizes a
common infrastructure with shared components and business functionality built on
reusable software components and incorporates prototyping and stakeholder interaction
in each spiral.
The PRIME methodology has a total of seven explicit spirals, each of which has a
specific function in the methodology. Figure 16.7 shows the spirals in a manner similar to
that used to illustrate the spiral methodology in Figure 16.6. However, the representation
in Figure 16.7 does not adequately convey the interactions or show the individual
activities that the methodology operation comprises. A more robust depiction, although
not as obviously a spiral approach as that utilized in Figure 16.7, is shown in Figure 16.8.

Figure 16.7: PRIME spirals.
In Figure 16.8, the seven spirals are indicated through the use of reverse (feedback)
arrows. The activities that define the operation of the spirals are contained in eight
named steps. Although a methodology can be thought of as a continuum, it usually is
partitioned into discrete steps for ease of definition and management. Although the steps
are somewhat arbitrary, they do need to contain a set of closely related activities and
produce specific deliverables.

Figure 16.8: Methodology structure of PRIME.
In most cases, a step belongs to more than one spiral. Having a step participate in
multiple spirals allows a smooth transition from one spiral to another (both forward and
backward). It should be remembered that traversing any spiral increases (or improves)
the quality of the development through improved knowledge of the requirements, design,
or implementation. In that sense, a transition to a previous spiral type still increases the
quality of the development. It does not usually mean that there is a problem. The spiral
always gets larger in radius, never smaller.
16.2.3 Methodology dynamics
There are three entry points into the methodology. The initial one, shown on the left side
of Figure 16.8, represents the case in which it is desired to utilize the methodology to
implement a given business process. The second entry is at the top middle of the
diagram, the case in which an explicit link between the project management
methodology and PRIME is needed. That link provides project management with the
description of functionality that is needed but has not yet been implemented. The
information is crucial to the determination of project resource and schedule needs and is
shown explicitly to reflect that importance. The third entry point, shown near the upper
right side of the diagram, represents the situation in which the need for a new or
changed software component has been identified independently from the requirements
of a specific process implementation.
Although the third entry point does not represent an actual invocation of the
methodology, it is necessary to indicate that an initial structured set of software
components should be developed utilizing a top-down approach. For similar reasons,
step 4(a) and spiral 3(a) are not considered an integral part of PRIME. They are shown
in the diagram and discussed in the context of the methodology to emphasize the point
that software components must be designed and developed to provide the functionality
needed by the action specifications. The implementation of the software components is
independent of the PRIME process implementation, as discussed in Chapter 14.
Given that exception, the initial general flow of PRIME is from step 1 through step 8, and
from spiral 1 through spiral 7. Because PRIME follows a spiral approach, it is possible to
repeat a spiral or a step multiple times and traverse them in any reasonable order. In
fact, some steps (e.g., 4, 5, and 6) and spirals can be performed in parallel. However, if
any of the steps causes a change that affects another step that is common to the spirals
containing the parallel steps (e.g., “Identify actions”), then all the affected spirals need to
be revisited. That can result in changes to information in any of the spiral steps.
That type of behavior also can occur at any point in a development. For example,
assume that step 5 of spiral 4, “Design human interface,” is in process and the
implementation, for some reason, requires that the action definitions be revised, so step
3, “Specify actions,” is revisited. As a step in spiral 2, changes to the actions may require
that step 2, “Identify dialogs,” be revisited and the dialog definitions changed. That, then,
affects all the spirals that contain that step.
Those types of changes occur with any implementation methodology, but they can be
confusing and difficult to track and manage. Things can easily fall through the cracks.
With the PRIME structure, there is a well-defined method to manage the changes and
ensure that all the effects are considered. For a given change, there may be several
traversals of multiple spirals necessary before the situation is fully resolved. However,
the needed activities and their status easily can be checked at any time during the
development.
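Because each step can belong to several spirals, the bookkeeping described above reduces to a membership question: which spirals contain the step that changed, and what knock-on changes follow? The sketch below shows one way such tracking could be automated; the spiral-to-step mapping and the extra step name are illustrative and do not reproduce Figure 16.8.

# Sketch of change tracking across spirals: each spiral is a set of steps,
# and a change to any step marks every spiral containing that step for
# another traversal. The spiral-to-step mapping and the step "Map actions
# to components" are illustrative and do not reproduce Figure 16.8.

SPIRALS = {
    "spiral 2": {"Identify dialogs", "Specify actions"},
    "spiral 3": {"Specify actions", "Map actions to components"},
    "spiral 4": {"Specify actions", "Design human interface"},
}

def affected_spirals(changed_step, spirals=SPIRALS):
    """Return every spiral that must be revisited when a step changes."""
    return {name for name, steps in spirals.items() if changed_step in steps}

def propagate(changed_step, knock_on, spirals=SPIRALS):
    """Follow knock-on changes (a revised step forcing revisions elsewhere)."""
    to_revisit = set()
    pending = [changed_step]
    seen = set()
    while pending:
        step = pending.pop()
        if step in seen:
            continue
        seen.add(step)
        to_revisit |= affected_spirals(step, spirals)
        pending.extend(knock_on.get(step, []))
    return to_revisit

# The example from the text: work on "Design human interface" forces
# "Specify actions" to be revisited, which in turn changes "Identify dialogs".
knock_on = {
    "Design human interface": ["Specify actions"],
    "Specify actions": ["Identify dialogs"],
}
print(sorted(propagate("Design human interface", knock_on)))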
16.2.4 Methodology prototypes
As another aspect of the spiral approach, PRIME also specifies the development of a
series of prototypes. Prototypes are associated with a spiral rather than attached to a
specific methodology step. The prototypes are used to contain the implementation
aspect of the spiral. As a spiral goes through successive iterations, additional information
and structure may be added to the prototype. Some spirals use essentially the same
prototype, each populating it with information specific to the spiral involved. For
organizational purposes, the activities necessary for the initial embodiment of a prototype
are assigned to one step (not necessarily the first) of the related spiral. Once that has
occurred, subsequent traversals of the spiral steps assume the existence of the
associated prototype.
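A minimal way to picture this arrangement is a prototype object that each spiral traversal enriches. The sketch below is illustrative only; the spiral name, traversal numbers, and field names are invented.

# Sketch of a prototype accumulating information across spiral traversals.
# The prototype is created during one step of its spiral; later traversals
# assume it exists and enrich it. Spiral, traversal, and field names are
# invented for the illustration.

class Prototype:
    def __init__(self, spiral):
        self.spiral = spiral
        self.content = {}  # filled in a little more on each traversal

    def add(self, traversal, information):
        self.content.setdefault(traversal, {}).update(information)

# Initial embodiment, created by one step of the related spiral...
dialog_prototype = Prototype(spiral="dialog design")
dialog_prototype.add(traversal=1, information={"screens": ["order entry"]})

# ...and enriched on a subsequent traversal of the same spiral.
dialog_prototype.add(traversal=2, information={
    "screens": ["order entry", "order status"],
    "navigation": "tabbed",
})
print(dialog_prototype.content)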
