Creating value in the public sector
Intelligent project selection in the US federal government
A report from the Economist Intelligence Unit
Sponsored by Oracle
Contents
Preface
Executive summary
Introduction
Identifying the right projects
Balancing project portfolios
Selecting and managing resources
The impact of laws and regulations
Improving decision-making
Conclusion
Preface
Creating value in the public sector: Intelligent project selection in the US federal government is an Economist
Intelligence Unit research report, sponsored by Oracle. The findings and views expressed in the report
do not necessarily reflect the views of the sponsor. The author was Brian Robinson. Mike Kenny was
responsible for the design.
April 2011
Executive summary
US federal agencies have been under pressure for years to improve the way they select and prioritise
the programmes they manage, with successive administrations and Congress beating the efficiency
drum. With deep budget cuts looming, that pressure will only increase. Most agencies are not where they need to be to meet these demands. Individual programme management has improved, but
there has been no progress on techniques for assessing the impact of a portfolio of programmes and their
alignment with agency strategies and goals. Evaluation practices have not kept pace, and programmes
have fallen through the cracks. Ineffective programmes continue to run seemingly under their own
momentum, with scant justification of their efficacy.
New requirements will ratchet the pressure even higher. The Obama administration’s Accountable
Government Initiative will require agencies to identify their worst-performing projects, and weed out
the least critical. A Government Accountability Office (GAO) report released in March detailed overlaps
and duplications in hundreds of government programmes. Congress will undoubtedly use these findings
to cut billions of dollars from agency spending. Meanwhile, in line with the enactment of the revamped
Government Performance and Results Act (GPRA) in 2010, agencies now have to link their programme
evaluation and selection more closely to annual performance plans and strategy goals.
The government is not a monolith, and each agency has its own culture and unique set of stakeholders.
No one template can provide an answer for all government agencies, but there are common approaches
that can improve the performance of most programmes:
● Take a holistic approach to programme evaluation. New programmes cannot be considered without knowing how they fit with existing ones, how they will operate and what they will cost. This portfolio-based approach provides the best way to gain a holistic view of agency needs.
● Build a feedback loop into the evaluation process. Agencies will need the ability to revise their planning continually, rather than only at the beginning of each budget cycle, so that they can adjust the mix of programmes in their portfolio more quickly as overall demands on the agency, and hence its strategies, change.
● Make sure there is a clear prioritisation of programmes. Know which programmes are vital to maintaining the agency’s mission and which are less so. Build this into the agency’s operating plan so that budgets can be reallocated across the portfolio of programmes as needed to make sure the needs of those with the highest priorities are met.
● Assume the worst, and plan accordingly. While the budgets for certain individual programmes may
rise, most agency budgets overall will drop substantially over the next few years. Planning for that
eventuality will require a robust process for evaluating programmes across the enterprise, and how
they link with agency strategies and goals.
Introduction
These are not the best of times for government. With a federal budget deficit projected to be US$1.6trn
by the end of fiscal year 2011 (the federal government fiscal year runs from October 1st to September
30th), the administration of Barack Obama has called for a five-year freeze on government spending,
while Republicans in Congress have pushed for deep cuts. What is certain is that most agencies will have to
do their work with fewer resources.
In addition to these cuts, there will be an even sharper focus on how agencies select and implement
programmes. At the beginning of March the Government Accountability Office (GAO), the investigative
arm of Congress, released the first of a series of congressionally mandated annual reports identifying
which federal programmes, agency offices and initiatives have duplicative goals or activities.1 If Congress
acts on these findings, it could cut billions of dollars from agency budgets.
1. Government Accountability Office, Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue, March 1st 2011.
The report found 34 areas of government where objectives overlapped, programmes provided similar services to the same sectors of the population, or missions were fragmented across multiple agencies
or programmes. It also identified 47 areas where costs could be reduced, or extra revenue could be
obtained (see box Improving the US food safety system). Overall, the GAO’s findings spanned hundreds of
programmes, and touched nearly all the major federal departments and agencies.
At the same time, agencies are also under pressure from the Obama administration “to make
government work better, faster and more efficiently”, by scrutinising more carefully which programmes to
implement, and then providing a better evaluation of the effectiveness of these programmes.
One goal of the president’s Accountable Government Initiative, clearly, is to identify the lowest-priority
and worst-performing government programmes. All agencies have to take a hard look at their spending in
order to weed out the initiatives that are the least critical to their missions—in turn shifting the focus to
how they evaluate programmes.
Just before he left his post as director of the Office of Management and Budget (OMB) at the end of
July 2010, Peter Orszag sent a memo to federal agency heads pointing out that rigorous programme
evaluations can be key to determining whether programmes are achieving their purpose at the lowest cost.
“Some programmes have persisted year after year without adequate evidence that they work,” he wrote.
(The OMB is part of the Executive Office of the President, and its core mission is to help a wide range of
executive departments and agencies across the federal government to implement the commitments and
priorities of the president.)
Improving the US food safety system
One of the most important areas where the federal government adds
value for American citizens is in overseeing the safety of the nation’s
food system. It is even more important now that a much greater portion of the US food supply is imported, raw and unprocessed foods are becoming more popular, and the sections of the population most susceptible to food-borne illnesses are growing.
This is the reason that the Government Accountability Office (GAO)
chose the issue as an example of wayward government efforts in its
March 2011 report on Opportunities to Reduce Potential Duplication
in Government Programs, Save Tax Dollars, and Enhance Revenue. The
fragmented nature of federal food oversight programmes has caused
inconsistent oversight, ineffective co-ordination and inefficient use
of resources, according to the GAO.
“Without a government-wide performance plan for food safety,
decision-makers do not have a comprehensive picture of the federal
government’s performance on this cross-cutting issue,” it noted.
Responsibility for food safety lies primarily with the Food and Drug
Administration (FDA) and the Food Safety and Inspection Service
(FSIS) of the US Department of Agriculture (USDA). Together, these
two agencies operate a US$2.5bn food safety budget, but a total of 15
federal agencies together administer at least 30 food-related laws.
This can lead to some bizarre situations. For example, the FDA is responsible for the safety and proper labelling of shell eggs at the farms where they are produced, whereas the FSIS is responsible for the safety of eggs processed into egg products. Meanwhile,
other USDA agencies are responsible for setting quality and grade
standards for eggs, and for ensuring the health of young chicks
supplied to egg farms.
At the same time, it is unclear who is responsible for actually testing eggs for such things as salmonella. In 2010 there was a nationwide recall of 500 million eggs because of salmonella contamination.
As early as 2004, the GAO reported that integrating food safety oversight could create economies of scale and allow a more concentrated effort to protect the food supply. In 2007 it made its
first call for a mission-based, government-wide performance plan
focused on results.
Although the GAO does not expect reducing fragmentation in food safety oversight to result in significant cost savings, new costs could be avoided. That could prove an important point going forward, since both the FDA and USDA budgets are under pressure in Congress because of the ongoing fiscal crisis.
Government has done a good job in recent years with such things as performance measurement and individual programme management, according to Jon Desenberg, policy director for The Performance Institute, a think tank based in Washington, DC. Nevertheless, officials “haven’t taken that to the next level, especially with things such as portfolio management”, he says.
Although government is doing a much more effective job of managing individual programmes, what is missing in many agencies is the link to a broader strategic outlook, and a better way to prioritise programmes to fit with that outlook.
Identifying the right projects
Some agencies have become practiced at evaluating proposals for the programmes they want to implement. A January 2011 report by the GAO2 found that agencies with the most mature evaluation processes have a consistent approach, from initial consultation with a variety of stakeholders and feedback from senior officials to a formal review and approval process.
2. Government Accountability Office, Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research, January 2011.
This approach also fits well with the ability to take an enterprise-wide look at agency programmes, to
see how that portfolio matches up with the longer-term strategic goals. By identifying how programmes
relate to each other and to those goals, agencies can decide on the optimal mix of programmes needed to
meet those strategic targets, and what resources need to be shifted among them to make sure the entire
portfolio of programmes is appropriately balanced.
The challenge for all but the smallest agencies is the diversity of stakeholders whose input has to be considered in any programme discussions. At the National Aeronautics and Space Administration (NASA),
for example, there are five major lines of business, each of which has several themes that may need their
own programmes.
The senior leadership at the agency keeps on top of things with reviews of the programme portfolios of
each business line “roughly quarterly,” says Dr Michael Hawes, associate administrator in NASA’s Office
of Independent Program & Cost Evaluation. Then, at the beginning of each year’s budget cycle, a broader
portfolio review is done for the entire agency to see how programmes are developing and which might
need adjustment.
The National Oceanic and Atmospheric Administration (NOAA) also provides programmes for
several lines of business. It uses two key documents – a strategic plan and a business unit implementation
plan – to guide its programme evaluation and selection. Previously, it used a two-step planning process:
the first step was based on the perfect scenario in which all the requirements could be met with no
constraints assumed, while the second step added the constraints. Now it follows a planning process that
includes fiscal constraints from the beginning.
Based on the budgets enacted by Congress, the executive team now has a much clearer idea of what
is possible in terms of its overall portfolio of programmes. “Before, it was very difficult to track decisions
over time in terms of the implementation of strategies,” says Paul Doremus, director of strategic planning
in NOAA’s Office of Program Planning and Integration. “Now we have a much tighter linkage between
strategic intent and what we can actually resource.”
Balancing project portfolios
No decision to begin a new programme is taken in isolation by any government agency. Most of
them have legacy programmes that need to be funded and resourced, so knowing the matrix of
programmes in an agency is key to the decision-making process. Good portfolio management is an
essential step to better programme selection and prioritisation. Without it, agencies have no holistic view
of their strategy and mission, and how each programme (existing or proposed) fits into “the big picture”.
The Office of Environmental Management in the Department of Energy (DOE), for example, is
responsible for cleaning up the radioactive sites in the United States and around the world that are
a legacy of the cold war arms race. Programmatically, it is one of the most complex endeavours in
government.
The clean-up is already 20 years old and has many years still to go. Each of the clean-up sites sets up
its own baseline needs, but because of the schedules for clean-up and movement of nuclear waste around
the country, there is a certain interdependency of needs. All of that information feeds into a database
that is used by executives at the Office of Environmental Management headquarters to prioritise which
programmes to fund based on maximising compliance and clean-up. “We think we have a good planning
basis,” says Merle Sykes, the Office of Environmental Management’s chief business officer (see case study
Nuclear clean-up—planning for the long term). “We know what our costs are on a continuing basis, so we
feel pretty confident in that.”
NASA has a similar long-term outlook, if not the same kind of interdependency. The Science Mission
Directorate alone has around 50 satellites operating in space, many of them with multi-decade lifetimes.
Each of them has a cost that has to be factored into deliberations about which programmes to continue
funding, and which programmes to begin funding.
The portfolio reviews undertaken by the agency’s senior leadership give them a good understanding of
how each programme is performing, and the level of its operational constraints. “We need that to assess
the kind of fixed operating space we have, and what the challenges are in the development of ongoing
projects,” says Dr Hawes. “Then we can consider what room we have for new missions.”
He believes that strengthening this kind of portfolio approach to gain the most holistic view is
important. Frequently, attention is driven towards the more unusual and visible projects. However,
considering the portfolio as a whole allows the agency to factor the broader risks into the budget, to
understand the common drivers of cost and to plan for growth.
CASE STUDY:
Nuclear clean-up—planning for the long term
The Office of Environmental Management at the Department of Energy
(DOE) is charged with cleaning up the environmental waste produced
by 50 years of nuclear weapons development and energy research. It
runs dozens of construction and clean-up projects in multiple states
valued at a total lifecycle cost of over US$200bn. Scrutiny of its work will no doubt intensify in the wake of the Japanese nuclear crisis as public concern over nuclear projects reawakens.
The Department is currently operating under a five-year plan that
runs until fiscal year 2012 (October 2011 to September 2012). A
major strategic goal under the plan is to complete the clean-up of 95
contaminated geographical sites by the end of 2012. There is some
flexibility for changes to the clean-up schedule, but the DOE is liable
for fines, penalties and other regulatory action if it strays too far.
Accordingly, precise programme evaluation and planning are vital, and in fact increased attention to programme management was
a specific feature of the five-year plan. Each site operates under its
own funding profile and determines what it needs for clean-up. Such
calculations include assumptions relating to funding that are then
included in the agency’s overall financial statement. At the same
time, there is a certain amount of interdependency between sites that
must be taken into account, particularly when it comes to shipping
waste. “Obviously, everyone can’t ship on the same day, so we have
to make sure that sequencing makes sense,” says Merle Sykes, chief
business officer at the Office of Environmental Management. “Also,
there’s an optimum way to fill up [the waste depository], so there’s a
certain sequencing there you have to accomplish.”
The focus of the Office of Environmental Management’s planning
activities is what Ms Sykes calls an integrated priority list. Each site
uploads its planned activities into a central database, and that list
is used at headquarters to decide which particular projects might
require more money.
Factors that affect prioritisation include such things as how to
maximise regulatory compliance, how to get the most clean-up
done for the money and how to manage resources for disposal. It
is a mature process, since the agency has been at this for 20 years
already. But surprises are still possible. For example, the agency had
been working on the assumption that it would be funded at the same
level as fiscal 2010, but there has been talk of cutting it back to 2008
levels. “That would have a substantial impact for us,” Ms Sykes says.
“We try to hold back, but as the continuing resolution debate drags
on, that’s sometimes not possible. We can talk about lay-offs if we get
cut, or about curtailing activities, but it’s not precise.”
The saving grace is the planning and evaluation process that the
Office of Environmental Management has put in place, and its years
of experience, particularly with regard to costs. Even though this is a
“very, very difficult year”, Ms Sykes believes that her organisation is
in a good position to weather it.
That holistic view is also critical to the way NOAA operates. It has “major mission goals” in the areas of weather, climate, oceans and coasts, says Mr Doremus, with line offices responsible for the implementation of those goals, “while acknowledging that the implementation requires capabilities from across the organisation.”
So, when it comes to planning at NOAA and ensuring that this portfolio view is available, each of those
lines of business also has to be able to show what the interdependencies are across the organisation, and
with external partners.
“Each of the objectives in the strategic plan identify the capabilities that are required across the
organization as well as, sometimes, what’s needed from the outside,” says Mr Doremus. “Those are
built into a logic model that shows us how the components work together over time, so we can integrate
complementary programmes and avoid redundancies.”
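The idea of a logic model that maps objectives to the capabilities they depend on can be illustrated with a small sketch. The Python below is a hypothetical toy, not NOAA’s actual planning tool: it records which capabilities each objective needs and which programmes supply them, then flags capabilities that are shared across objectives (integration points), supplied by more than one programme (candidate redundancies) or supplied by none (gaps). All names are invented for the example.

# Illustrative sketch only: a toy capability map in the spirit of the logic-model
# approach described above, not NOAA's actual tool. All names are hypothetical.
from collections import defaultdict

# Capabilities each strategic objective depends on (invented examples).
objective_needs = {
    "Improved coastal forecasting": {"satellite observations", "coastal modelling"},
    "Climate adaptation support": {"satellite observations", "regional climate data"},
}

# Capabilities each programme supplies (invented examples).
programme_supplies = {
    "Ocean Satellite Programme A": {"satellite observations"},
    "Ocean Satellite Programme B": {"satellite observations"},
    "Coastal Modelling Initiative": {"coastal modelling"},
}

# Capabilities needed by more than one objective are integration points.
capability_users = defaultdict(set)
for objective, needs in objective_needs.items():
    for capability in needs:
        capability_users[capability].add(objective)
shared = {c: sorted(users) for c, users in capability_users.items() if len(users) > 1}

# Capabilities supplied by more than one programme are candidate redundancies.
capability_providers = defaultdict(set)
for programme, supplies in programme_supplies.items():
    for capability in supplies:
        capability_providers[capability].add(programme)
redundant = {c: sorted(p) for c, p in capability_providers.items() if len(p) > 1}

# Capabilities no programme supplies are gaps the portfolio has yet to cover.
gaps = sorted(set(capability_users) - set(capability_providers))

print("Shared capabilities (integration points):", shared)
print("Supplied by more than one programme (possible redundancy):", redundant)
print("Needed but not supplied (gaps):", gaps)

Even in this trivial form, such a map makes the cross-cutting dependencies explicit, which is what allows complementary programmes to be integrated and redundant ones to be questioned.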
Selecting and managing resources
All decisions about government programmes are resource-dependent, so what guidance agencies and
programme planners can provide about available resources will ultimately drive the final decision on
what programmes to develop and what to do without. Sometimes that has to do with people, sometimes it
is the time available in which to implement the programmes. Currently, funding is by far the biggest driver.
The question is how to factor resource constraints into the decision-making process. For the current
fiscal year planning, well before the current budget debates began, NOAA told its business units to plan as
though resources would not increase at all, based on the fiscal trends that seemed to be developing. That
turned out to have been good guidance, Mr Doremus says, “if not optimistic”.
Other organisations try to get ahead by incorporating flexibility into their upfront planning. The
Department of Veterans Affairs (VA), for example, is neck deep in new and ongoing information
technology (IT) programmes that are needed to upgrade the services it provides to military veterans. To
make sure that the overall intent of those programmes is not compromised, IT operations at the VA work
according to a prioritised operating plan.
Each of the 1,100 programmes in the plan has a certain dollar amount attached to it that is considered
the minimum needed to move that programme forward. If the VA is faced with a reduced budget, it knows
how to reallocate funding in such a way as to keep the maximum number of programmes going while
factoring in the relative priorities of each project. “Any good, well-disciplined organisation knows what it
needs in the way of prioritisation,” says Roger Baker, the VA’s chief information officer. “In the VA, it’s the
operations of its hospitals and clinics, and what goes through the benefits office.”
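To make the mechanics concrete, the sketch below shows one way a prioritised operating plan of this kind could drive reallocation: each programme carries a priority and a minimum funding figure, and when the available budget shrinks, funds are committed in priority order so that as many high-priority programmes as possible stay above their minimums. This is a minimal illustration in Python, not the VA’s actual plan or software; the programme names, dollar figures and field names are invented for the example.

# Illustrative sketch only: generic priority-based reallocation, not the VA's
# operating plan or any agency system. All names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Programme:
    name: str
    priority: int           # 1 = most critical to the mission
    minimum_funding: float   # smallest amount (US$m) that still moves the programme forward

def reallocate(programmes, available_budget):
    """Fund programmes in priority order; defer any that cannot reach their minimum."""
    funded, deferred = [], []
    remaining = available_budget
    for p in sorted(programmes, key=lambda prog: prog.priority):
        if p.minimum_funding <= remaining:
            funded.append(p)
            remaining -= p.minimum_funding
        else:
            deferred.append(p)
    return funded, deferred, remaining

if __name__ == "__main__":
    plan = [
        Programme("Hospital scheduling upgrade", 1, 4.0),
        Programme("Benefits claims processing", 1, 6.0),
        Programme("Legacy data migration", 3, 5.0),
        Programme("Internal reporting portal", 4, 2.0),
    ]
    funded, deferred, left = reallocate(plan, available_budget=12.0)
    print("Funded:", [p.name for p in funded])
    print("Deferred:", [p.name for p in deferred])
    print("Unallocated (US$m):", left)

The point is not the arithmetic, which is trivial, but the discipline behind it: an explicit priority and funding floor recorded for every programme is what allows an organisation with 1,100 programmes in its plan to reallocate quickly when the budget drops, rather than renegotiating each cut from scratch.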
The potential funding gap raises a question mark over everything to do with federal programme
evaluations, notes George Grob, a former director of planning and policy at the Department of Health and
Human Services, and now president of the Virginia-based Center for Public Program Evaluation. “The political
process says evaluation has to be a part of the DNA of programmes, but for that you need resources,” he says.
“The mindset of Congress now is to cut the budget, so where will the funding come from?”
It is a difficult process, acknowledges Dr Hawes of NASA. The agency usually enjoys strong bipartisan support in
Congress, which is a huge advantage when it comes to programme planning “because we don’t swing
as widely as some do with the change in political parties and changes in Congress”. But both parties are
focused on the financial environment now “and we will not be immune to that in any way”, he says. “So
we need to look ahead and at those issues, which will be reflected in whether or not we can inject new
programmes into the portfolio.”
The impact of laws and regulations
In order to comply with laws and regulations, agencies have to factor a complex range of considerations
into their programme plans. At present, however, the impact of laws and regulations on budgets is
driving everything. This focus on monetary issues is driven by the fact that the federal budget deficit is
projected to reach US$1.6trn by the end of fiscal year 2011. In response, the Obama administration has
called for a five-year freeze in government spending and has proposed a target of reducing the deficit by
US$4trn over the next decade—with consequent deep cuts in many agency budgets.
Republicans in both the House of Representatives and the Senate have said that the proposal will not
be sufficient, and House Republicans have proposed more than US$60bn in cuts just for the remainder
of fiscal 2011. The end result? The budgeting process for agencies is unclear for the foreseeable future,
making planning very difficult indeed.
The Government Performance and Results Act (GPRA) Modernization Act, in turn, will also have a
major and long-lasting impact. Passed by the last Congress at the end of 2010 and signed into law by the
president, Mr Obama, on January 4th 2011, it ushers in a new era of programme evaluation requirements
that will resonate across government.
The previous 18-year-old GPRA also pushed agencies to evaluate programmes better, requiring them
to create multi-year strategic plans, produce annual performance reports, and determine whether
programmes really achieve their objectives. However, critics say it produced little information to
guide programmes or policy action. In contrast, the new GPRA will create more specific fact-based
decision-making for programme implementation, and more frequent, quarterly reporting and reviews of
programmes. Agencies will have to link performance goals in their annual plans with the goals in their
strategic plans, and will also need to describe their strategy and resource proposals.
The new GPRA promotes closer relations between Congress and agencies, notes Mr Desenberg of The
Performance Institute. And it suggests that Congress will be making a better effort to look at performance
goals and evaluate whether they have been attained.
The GAO’s March report adds a huge amount of fuel to the programme-cutting fire. A year in the
making, it looked at hundreds of federal programmes that affect virtually all of the major federal
departments and agencies, with the goal of giving Congress a view into the “duplication, overlap or
fragmentation” of programmes that could then be reduced or eliminated.
In one programme alone, the Department of Defense (DOD) military healthcare system, the
GAO found that up to US$460m a year in savings could be attained if the DOD committed to a broader
restructuring of the programme, compared to the limited changes it had proposed. In addition, federal
revenue losses could be reduced by up to US$5.7bn annually by addressing duplicative policies across the
government to boost domestic ethanol production.
Implementing provisions of the new GPRA, such as its emphasis on establishing outcome-oriented
goals covering a limited number of cross-cutting policy areas, could play an important role in clarifying
desired outcomes, addressing programme performance spanning multiple organisations, and facilitating
future actions to reduce unnecessary duplication, overlap and fragmentation, according to the GAO.
Even if internal demands do not drive them to do so, these external pressures will push agencies to adopt a more holistic, portfolio-based approach to programme planning and prioritisation.
Improving decision-making
Before the new GPRA and other government requirements kick in, agencies could make several
improvements in the way they currently evaluate programmes. “Certainly a continuous evaluation
feedback loop, that is definitely missing,” says Mr Grob. “Some places have that tied down pretty well,
such as the Centers for Disease Control and Prevention, and also the people at some family service
programmes. But, once you get beyond those, it tends to disappear very fast.”
An evaluation feedback loop is one of the benefits that Mr Doremus expects to extract from NOAA’s
new streamlined evaluation process. Such a loop should help maintain the “line of sight” from strategy
through to programme selection and on into the budget process and programme execution. This approach
should give NOAA the ability to learn continually and adjust as the agency’s pattern of performance
changes, and as the planning environment changes around it.
Like other agencies, NOAA probably faces a “very different future”, says Mr Doremus. To meet the
challenge, it needs greater organisational capacity for strategic flexibility. “It’s difficult to plan under
circumstances of great complexity and uncertainty. Scenario planning is a method we could use, and
we’ve implemented that to a certain degree, but I think we could certainly get better at it.”
For NASA’s Dr Hawes, better portfolio analysis could lead to major improvements. By identifying risks
associated with specific activities on each mission, NASA would be better able to factor risk into the way it
prioritises programmes, thus making the budgeting process as a whole more sensitive to risk. “Those are
the things we’re trying to evaluate all of the time,” he says. “And you really only get those if you have a
process that allows you to step back and look at the total portfolio.”
Last, but not least, programme evaluation would profit from an influx of trained and dedicated people.
Most of the agency planning offices that carry out the project evaluation and analysis now rely on a very
small workforce, which will be stretched even further with the demands from the OMB, the new GPRA and
Congress.
The American Evaluation Association made the same point in its Evaluation Roadmap. The units formed
by agencies to conduct evaluations are too often under-resourced. Training and capacity building for
evaluation have been inconsistent across agencies “and, in many cases, insufficient to achieve the needed
evaluation capacity and to sustain it over time”, the association reports.
Conclusion
Programme evaluation and selection in the federal government has changed substantially over the past 20 years. The administration of Ronald Reagan (1981-89) shifted the focus away from large-scale programme evaluation and more towards programme management. It was not until the presidency of Bill Clinton (1993-2001) that the focus started to move back to evaluation.
The Obama administration, which came into power in 2009 promising more transparency and
accountability in government, has taken note of surveys such as one by the Pew Research Center,3 which
found that two-thirds of Americans believe that the government cannot run programmes efficiently and
without waste. The president’s Accountable Government Initiative is the administration’s attempt to
introduce a formal evaluation process for government programmes to help overturn that perception. Its
goal is a rigorous, evidence-based approach to programme selection and implementation, in which only
those programmes that are worthy will survive.
For agencies, the need to commit more resources to improving their evaluation capabilities could not
come at a worse time. On the one hand, the fiscal crisis is imposing major constraints on their ability to
rise to the challenge of selecting and evaluating projects better—a situation that is unlikely to change soon. On
the other hand, they have little room to manoeuvre. They must improve their programme evaluation and
selection processes to meet the growing demands of the Obama administration, the requirements of the
new GPRA, the repercussions of the GAO report, and additional legislation and regulation that is aimed at
governing agency programme management more tightly.
If anything, it will be incumbent on agencies to try to get ahead of the coming tide. Those that can
show good knowledge and management of their portfolio of programmes, while also demonstrating the
ability to maintain progress on their strategic goals, will be treated far more kindly by Congress and the administration than those that cannot; on the latter, solutions will be imposed.
3. The Pew Research Center for the People and the Press, Trends in Political Values and Core Attitudes: 1987-2007, March 2007.
Cover image: Shutterstock
Whilst every effort has been taken to verify the accuracy
of this information, neither The Economist Intelligence
Unit Ltd. nor the sponsors of this report can accept any
responsibility or liability for reliance by any person on
this white paper or any of the information, opinions or
conclusions set out in the white paper.
LONDON
26 Red Lion Square
London
WC1R 4HQ
United Kingdom
Tel: (44.20) 7576 8000
Fax: (44.20) 7576 8476
E-mail:
NEW YORK
750 Third Avenue
5th Floor
New York, NY 10017
United States
Tel: (1.212) 554 0600
Fax: (1.212) 586 0248
E-mail:
HONG KONG
6001, Central Plaza
18 Harbour Road
Wanchai
Hong Kong
Tel: (852) 2585 3888
Fax: (852) 2802 7638
E-mail:
GENEVA
Boulevard des Tranchées 16
1206 Geneva
Switzerland
Tel: (41) 22 566 2470
Fax: (41) 22 346 93 47
E-mail: