Department of Homeland Security
Federal Network Security Branch

Continuous Asset Evaluation, Situational
Awareness, and Risk Scoring Reference
Architecture Report
(CAESARS)
September 2010

Version 1.8
Document No. MP100146



CAESARS September 2010














Table of Contents
1. Introduction 1
1.1 Objective 1
1.2 Intended Audience 1
1.3 References 2
1.4 Review of FISMA Controls and Continuous Monitoring 2
1.5 CAESARS Reference Architecture Concept of Operations 4
1.5.1 Definition 4
1.5.2 Operating Principles 4
1.5.3 Relationship of CAESARS to CyberScope 5
1.5.4 Cautionary Note – What Risk Scoring Can and Cannot Do 6
1.5.5 CAESARS and Risk Management 7
1.5.6 Risk Management Process 8
1.6 The CAESARS Subsystems 9
1.7 Document Structure: The Architecture of CAESARS 10
1.7.1 CAESARS Sensor Subsystem 11
1.7.2 CAESARS Database/Repository Subsystem 12
1.7.3 CAESARS Analysis/Risk Scoring Subsystem 13
1.7.4 CAESARS Presentation and Reporting Subsystem 13
2. The Sensor Subsystem 14
2.1 Goals 14
2.1.1 Definitions 14
2.1.2 Operating Environment Assumptions for the Sensor Subsystem 15
2.2 Solution Concept for the Sensor Subsystem 16
2.2.1 Tools for Assessing Security Configuration Compliance 19
2.2.2 Security Assessment Tools for Assessing Patch-Level Compliance 23
2.2.3 Tools for Discovering and Identifying Security Vulnerabilities 25

2.2.4 Tools for Providing Virus Definition Identification 29
2.2.5 Other Sensors 30
2.2.6 Sensor Controller 32
2.3 Recommended Technology in the Sensor Subsystem 33




2.3.1 Agent-Based Configuration 33
2.3.2 Agentless Configuration 35
2.3.3 Proxy-Hybrid Configuration 36
2.3.4 NAC-Remote Configuration 37
3. CAESARS Database 39
3.1 Goals 39
3.1.1 Raw Data Collected and Stored Completely, Accurately, Automatically, Securely, and in a Timely Manner 39
3.1.2 Modular Architecture 39
3.2 Objects and Relations 39
3.2.1 Repository of Asset Inventory Baselines 40
3.2.2 Repository of System Configuration Baselines 41
3.2.3 National Vulnerability Database 42
3.2.4 Database of Findings 42
4. CAESARS Analysis Engine 49
4.1 Goals 49
4.1.1 Modular Analysis, Independent of Scoring and Presentation Technologies 49
4.1.2 Make Maximal Use of Existing, In-Place, or Readily Available Sensors 50
4.1.3 Harmonize Data from Different Sensors 50
4.1.4 Develop Analysis Results that Are Transparent, Defensible, and Comparable 50
4.2 Types of Raw Data and Data Consolidation and Reduction 51

5. CAESARS Risk Scoring Engine 53
5.1 Goals 53
5.1.1 Modular Analysis, Independent of Scoring and Presentation Technologies 53
5.1.2 Common Base of Raw Data Can Be Accessed by Multiple Analytic Tools 54
5.1.3 Multiple Scoring Tools Reflect Both Centralized and Decentralized Analyses 54
5.2 Centralized Scoring that Is Performed Enterprise-Wide 54
5.2.1 Allow Common Scoring for Consistent Comparative Results Across the Enterprise 54
5.2.2 Show Which Components Are Compliant (or Not) with High-Level Policies 54
5.2.3 Permit Decomposition of Scores into Shadow Prices 55
5.2.4 Show Which Components Are Subject to Specific Threats/Attacks 55
5.2.5 Allow Controlled Enterprise-Wide Change to Reflect Evolving Strategies 55
5.3 Decentralized Analyses that Are Unique to Specific Enterprise Subsets 55




5.3.1 Raw Local Data Is Always Directly Checkable by Local Administrators 55
5.3.2 Different Processor Types or Zones Can Be Analyzed Separately 56
5.3.3 Region- and Site-Specific Factors Can Be Analyzed by Local Administrators 56
5.3.4 Users Can Create and Store Their Own Analysis Tools for Local Use 56
5.4 Description of the iPost Implementation of the Analysis and Scoring Engines 56
5.4.1 Synopsis of the iPost Scoring Methodology 57
5.4.2 Using iPost for Centralized Analyses 58
5.4.3 Using iPost for Decentralized Analyses 58
5.4.4 The Scoring Methodologies for iPost Risk Components 59
6. CAESARS Presentation and Reporting Subsystem 61
6.1 Goals 61
6.1.1 Modular Presentation, Independent of Presentation and Reporting Subsystem Technologies 61

6.1.2 Allow Either Convenient "Dashboard" Displays or Direct, Detailed View of Data 61
6.2 Consistent Display of Enterprise-Wide Scores 62
6.2.1 Device-Level Reporting 63
6.2.2 Site-, Subsystem-, or Organizational-Level Reporting 63
6.2.3 Enterprise-Level Reporting 64
6.2.4 Risk Exception Reporting 64
6.2.5 Time-Based Reporting 64
7. Areas for Further Study 65
8. Conclusions and Recommendations 67
Appendix A. NIST-Specified Security Content Automation Protocol (SCAP) 68
Appendix B. Addressing NIST SP 800-53 Security Control Families 69
Appendix C. Addressing the Automatable Controls in the Consensus Audit Guidelines 71
Appendix D. List of Applicable Tools 74
Appendix E. Sample List of SCAP Security Configuration Checklists 82
Appendix F. Sample Risk Scoring Formulas 84
Acronyms 90






List of Figures
Figure ES-1. Contextual Description of the CAESARS System x
Figure 1. Continuous Monitoring of a System's Security Posture in the NIST-Defined System
Life Cycle and Risk Management Framework 4
Figure 2. Contextual Description of the CAESARS System 11
Figure 3. Relationships Between Security Configuration Benchmarks, Baseline, and RBDs 15
Figure 4. Contextual Description of the Sensor Subsystem 17

Figure 5. Contextual Description of Interfaces Between an FDCC Scanner Tool and the
Database/Repository Subsystem 21
Figure 6. Contextual Description of Interfaces Between an Authenticated Security Configuration
Scanner Tool and the Database/Repository Subsystem 23
Figure 7. Contextual Description of Interfaces Between an Authenticated Vulnerability and Patch
Scanner Tool and the Database/Repository Subsystem 24
Figure 8. Contextual Description of Interfaces Between an Unauthenticated Vulnerability
Scanner Tool and the Database/Repository Subsystem 25
Figure 9. Contextual Description of Interfaces Between a Web Vulnerability Scanner Tool and
the Database/Repository Subsystem 28
Figure 10. Contextual Description of Interfaces Between a Database Vulnerability Scanner Tool
and the Database/Repository Subsystem 29
Figure 11. Contextual Description of Interfaces Between an Authenticated Security
Configuration Scanner Tool and the Database Subsystem 30
Figure 12. Contextual Description of Sensor Controller to Control Security Assessment Tools 32
Figure 13. Agent-Based Deployment Configuration 33
Figure 14. Agentless Deployment Configuration 35
Figure 15. Proxy-Hybrid Deployment Configuration – Agentless 37
Figure 16. NAC-Remote Deployment Configuration – Agent-Based 37
Figure 17. Contextual Description of Database/Repository Subsystem 40
Figure 18. Contextual Description of Interfaces Between the Database Subsystem and an FDCC
Scanner Tool 43
Figure 19. Contextual Description of Interfaces Between the Database Subsystem and an
Authenticated Security Configuration Scanner Tool 44
Figure 20. Contextual Description of Interfaces Between the Database Subsystem and an
Authenticated Vulnerability and Patch Scanner Tool 45





Figure 21. Contextual Description of Interfaces Between the Database Subsystem and an
Unauthenticated Vulnerability Scanner Tool 46
Figure 22. Contextual Description of Interfaces Between the Database Subsystem and a Web
Vulnerability Scanner Tool 47
Figure 23. Contextual Description of Interfaces Between the Database Subsystem and a Database
Vulnerability Scanner Tool 48


List of Tables
Table 1. Recommended Security Tools for Providing Data to Support Risk Scoring 18
Table 2. Currently Scored iPost Components 57
Table 3. Components Under Consideration for iPost Scoring 57
Table 4. Reportable Scoring Elements (Sample) 63





Executive Summary
"Continuous monitoring is the backbone of true security."
Vivek Kundra, Federal Chief Information Officer, Office of Management and Budget
A target-state reference architecture is proposed for security posture monitoring and risk scoring,
based on the work of three leading federal agencies: the Department of State (DOS) Security
Risk Scoring System, the Department of Treasury, Internal Revenue Service (IRS) Security
Compliance Posture Monitoring and Reporting (SCPMaR) System, and the Department of
Justice (DOJ) use of BigFix and the Cyber Security Assessment and Management (CSAM) tool
along with related security posture monitoring tools for asset discovery and management of
configuration, vulnerabilities, and patches. The target reference architecture presented in this
document – the Continuous Asset Evaluation, Situational Awareness, and Risk Scoring
(CAESARS) reference architecture – represents the essential functional components of a security
risk scoring system, independent of specific technologies, products, or vendors, and using the
combined elements of the DOS, IRS, and DOJ approaches. The objective of the CAESARS
reference architecture is to provide an abstraction of the various posture monitoring and risk
scoring systems that can be applied by other agencies seeking to use risk scoring principles in
their information security program. The reference architecture is intended to support managers
and security administrators of federal information technology (IT) systems. It may be used to
develop detailed technical and functional requirements and build a detailed design for tools that
perform similar functions of automated asset monitoring and situational awareness.
The CAESARS reference architecture and the information security governance processes that it
supports differ from those in most federal agencies in key respects. Many agencies have
automated tools to monitor and assess information security risk from factors like missing
patches, vulnerabilities, variance from approved configurations, or violations of security control
policies. Some have automated tools for remediating vulnerabilities, either automatically or
through some user action. These tools can provide current security status to network operations
centers and security operations centers, but they typically do not support prioritized remediation
actions and do not provide direct incentive for improvements in risk posture. Remedial actions
can be captured in Plans of Action and Milestones, but plans are not based on quantitative and
objective assessment of the benefits of measurably reducing risk, because the potential risk
reduction is not measured in a consistent way.
What makes CAESARS different is its integrated approach and end-to-end processes for:
Assessing the actual state of each IT asset under management
Determining the gaps between the current state and accepted security baselines
Expressing in clear, quantitative measures the relative risk of each gap or deviation
Providing simple letter grades that reflect the aggregate risk of every site and system
Ensuring that the responsibility for every system and site is correctly assigned
Providing targeted information for security and system managers to use in taking the
actions to make the most critical changes needed to reduce risk and improve their grades
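The end-to-end process in the list above can be sketched in miniature. This is an illustrative sketch only: the baseline attributes, risk weights, and letter-grade cut-offs below are invented for the example and are not CAESARS-mandated values or the DOS iPost formulas (Appendix F gives sample scoring formulas).

```python
# Hypothetical baseline and weights -- illustrative only, not specified by CAESARS.
BASELINE = {"patch_level": "2010-09", "av_signature": "current", "fdcc": "compliant"}
WEIGHTS = {"patch_level": 6.0, "av_signature": 3.0, "fdcc": 2.0}

def score_asset(actual):
    """Sum the weights of every attribute that deviates from the baseline."""
    return sum(w for attr, w in WEIGHTS.items() if actual.get(attr) != BASELINE[attr])

def letter_grade(avg_score):
    """Map an average per-asset risk score to a simple letter grade."""
    for cutoff, grade in [(1.0, "A"), (3.0, "B"), (6.0, "C"), (9.0, "D")]:
        if avg_score < cutoff:
            return grade
    return "F"

site_assets = [
    {"patch_level": "2010-09", "av_signature": "current", "fdcc": "compliant"},
    {"patch_level": "2010-06", "av_signature": "current", "fdcc": "noncompliant"},
]
avg = sum(score_asset(a) for a in site_assets) / len(site_assets)
print(avg, letter_grade(avg))  # -> 4.0 C
```

The shape is what matters: observe the asset's actual state, diff it against the accepted baseline, weight each deviation, and aggregate the result into a grade.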





Making these assessments on a continuous or nearly continuous basis is a prerequisite for
moving IT security management from isolated assessments, supporting infrequent authorization
decisions, to continuous risk management as described in current guidance from the National
Institute of Standards and Technology (NIST) and mandates from the Office of Management
and Budget (OMB).
The risk scoring and continuous monitoring capabilities that were studied for this document
represent operational examples of a more generalized capability that could provide significant
value to most or all federal agencies, or to any IT enterprise. What makes risk scoring different
from compliance posture reporting is providing information at the right level of detail so that
managers and system administrators can understand the state of the IT systems for which they
are responsible, the specific gaps between actual and desired states of security protections, and
the numerical value of every remediation action that can be taken to close the gaps. This enables
responsible managers to identify the actions that will result in the highest added value in bringing
their systems into compliance with standards, mandates, and security policies.
The reference architecture consists of four interconnected architectural subsystems, the functions
and services within those subsystems, and expected interactions between subsystems.
The four subsystems are:
Sensor Subsystem
Database/Repository Subsystem
Analysis/Risk Scoring Subsystem
Presentation and Reporting Subsystem
The fundamental building blocks of all analysis and reporting are the individual devices that
constitute the assets of the information system enterprise. An underlying assumption of risk
scoring is that the total risk to an organization is an aggregate of the risks associated with every
device in the system. The risk scoring system answers the questions:
What are the devices that constitute the organization's IT assets?

What is the current state of security controls (a subset of technical controls) associated with
those assets?
How does their state deviate from the accepted baseline of security controls and
configurations?
What is the relative severity of the deviations, expressed as a numerical value?
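Under the aggregation assumption stated above, answering these questions at the site and enterprise levels reduces to a roll-up of per-device scores. The device names and scores below are invented for illustration:

```python
# Invented per-device risk scores, keyed by (site, device).
device_scores = {
    ("site-a", "ws-001"): 2.5,   # workstation with a missing patch
    ("site-a", "srv-01"): 7.0,   # server with an unapproved configuration
    ("site-b", "ws-101"): 0.0,   # fully compliant device
}

def site_totals(scores):
    """Roll device-level risk up to per-site totals."""
    totals = {}
    for (site, _device), score in scores.items():
        totals[site] = totals.get(site, 0.0) + score
    return totals

sites = site_totals(device_scores)
enterprise_risk = sum(sites.values())
print(sites)            # -> {'site-a': 9.5, 'site-b': 0.0}
print(enterprise_risk)  # -> 9.5
```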
At its core, CAESARS is a decision support system. The existing implementations on which it is
based have proven to be effective means of visualizing technical details about the security
posture of networked devices in order to derive actionable information that is prioritized, timely,
and tailored.
The core of the reference architecture is the database, which contains the asset status as reported
by the sensors as well as the baseline configurations against which the asset status is compared,
the rule sets for scrubbing data for consistency, the algorithms for computing the risk score of
each asset, and the data that identifies, for each asset, the responsible organization, site, and/or
individual who will initiate the organization's remediation procedure and monitor their
completion. This assignment of responsibility is key to initiating and motivating actions that
measurably improve the security posture of the enterprise.
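The report does not prescribe a schema for this central database, so the record types below are a hypothetical sketch of the kinds of objects it holds: sensor-reported asset status, baseline entries to compare against, and the responsibility assignment for each asset.

```python
# Hypothetical record types -- the field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class AssetStatus:           # as reported by the Sensor Subsystem
    asset_id: str
    attribute: str           # e.g. "patch_level"
    observed_value: str
    reported_at: str         # ISO-8601 timestamp

@dataclass
class BaselineEntry:         # approved configuration to compare against
    attribute: str
    expected_value: str

@dataclass
class Responsibility:        # who remediates findings for this asset
    asset_id: str
    organization: str
    site: str
    poc: str                 # responsible individual

status = AssetStatus("ws-001", "patch_level", "2010-06", "2010-09-15T08:00:00Z")
baseline = BaselineEntry("patch_level", "2010-09")
deviates = status.observed_value != baseline.expected_value
print(deviates)  # -> True: this asset generates a scored finding
```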
The subsystems interact with the database and through it with each other, as depicted in Figure
ES-1.
Figure ES-1. Contextual Description of the CAESARS System

Using an implementation based on this reference architecture, risk scoring can complement and
enhance the effectiveness of security controls that are susceptible to automated monitoring and
reporting, comparing asset configurations with expected results from an approved security
baseline. It can provide direct visualization of the effect of various scored risk elements on the
overall posture of a site, system, or organization.

Risk scoring is not a substitute for other essential operational and management controls, such as
incident response, contingency planning, and personnel security. It cannot determine which IT
systems have the most impact on agency operations, nor can it determine how various kinds of
security failures – loss of confidentiality, integrity, and availability – will affect the functions and
mission of the organization. In other words, risk scoring cannot score risks about which it has no
information. However, when used in conjunction with other sources of information, such as the
FIPS-199 security categorization and automated asset data repository and configuration




management tools, risk scoring can be an important contributor to an overall risk management
strategy. Such strategies will be considered in future versions of CAESARS.
Neither is it a substitute for the underlying governance and management processes that assign
responsibility and accountability for agency processes and results; but it does help make explicit
what those responsibilities are for the security of IT systems, and by extension it helps identify
when there are overlaps or gaps in responsibility so that they can be addressed.
The reference architecture presented here abstracts the design and implementation of various
configuration benchmarking and reporting tools to a model that can be implemented with a
variety of tools and products, depending on the existing infrastructure and system management
technologies available to federal IT managers. The reference architecture also enables IT
managers to add capabilities beyond those implemented in existing implementations, to extend
risk scoring and continuous monitoring to all IT components (network, database, server,
workstations, and other elements) in a modular, interoperable, standards-based implementation
that is scalable, flexible, and tailorable to each agency's organizational and technical
environment.
In a memorandum dated April 5, 2010, the OMB Chief Information Officer restated the
commitment to continuous monitoring:
Continuous monitoring is the backbone of true security – security that moves beyond
compliance and paperwork. The threats to our nation's information security continue to
evolve and therefore our approach to cybersecurity must confront these new realities on a
real time basis. The Department of State (DOS), the Department of Justice (DOJ), and the
Department of the Treasury (Treasury) have each developed systems that allow
monitoring in real time of certain aspects of their security enterprises. To evaluate best
practices and scale them across the government, the Office of Management and Budget
(OMB) is requesting that DOS, DOJ, and Treasury coordinate with the Department of
Homeland Security (DHS) on a comprehensive assessment of their monitoring systems.
This reference architecture summarizes the conclusions of that assessment. DHS will engage
with federal stakeholders to refine this reference architecture based on the experience of other
federal agencies with similar capabilities and to establish a federal forum for sharing capabilities
and knowledge that support the goals of risk scoring and continuous monitoring.



Acknowledgements
The information published in the reference architecture is based on the compilation of work in
support of the continuous monitoring of computing and network assets at the Department of
State, the Department of Justice, and the Department of the Treasury. The Federal Network
Security Branch of the National Cyber Security Division of the Department of Homeland
Security especially acknowledges the contributions, dedication, and assistance of the following
individuals in making this reference architecture a reality:
John Streufert, Department of State
George Moore, Department of State
Ed Roback, Department of Treasury
Duncan Hays, Internal Revenue Service
Gregg Bryant, Internal Revenue Service
LaTonya Gutrick, Internal Revenue Service
Andrew Hartridge, Internal Revenue Service

Kevin Deeley, Department of Justice
Holly Ridgeway, Department of Justice
Marty Burkhouse, Department of Justice

















1. Introduction
The Federal Network Security (FNS) Branch of the National Cyber Security Division (NCSD) of
the Department of Homeland Security (DHS) is chartered with leading, directing, and supporting
the day-to-day operations for improving the effectiveness and consistency of information
systems security across government networks. FNS is also the Program Management Office for
the Information Systems Security Line of Business (ISSLOB). On April 5, 2010, the Office of
Management and Budget (OMB) tasked the DHS with assessing solutions for the continuous
monitoring of computing and network assets of the Department of State (DOS), the Department
of Justice (DOJ), and the Department of the Treasury. The results of the assessment gave rise to
a reference architecture that represents the essential architectural components of a risk scoring
system. The reference architecture is independent of specific technologies, products, or vendors.
On April 21, 2010, OMB released memorandum M-10-15, providing guidelines to the federal
departments and agencies (D/A) for FY2010 Federal
Information Security Management Act (FISMA) reporting. The OMB memorandum urges D/As
to continuously monitor security-related information from across the enterprise in a manageable
and actionable way. The reference architecture defined in this document – the Continuous Asset
Evaluation, Situational Awareness, and Risk Scoring (CAESARS) reference architecture – is
provided to the federal D/As to help develop this important capability. Continuous monitoring of
computing and network assets requires up-to-date knowledge of the security posture of every
workstation, server, and network device, including operating system, software, patches,
vulnerabilities, and antivirus signatures. Information security managers will use the summary
and detailed information to manage and report the security posture of their respective D/A.
1.1 Objective
The objective of this document is to describe a reference architecture that is an abstraction of a
security posture monitoring and risk scoring system, informed by the sources noted above, and
that can be applied by other agencies seeking to apply risk scoring principles to their information
security program. This reference architecture is to be vendor-neutral and product-neutral and will
incorporate the key elements of the DOS, Internal Revenue Service (IRS), and DOJ
implementations: targeted, timely, prioritized risk scoring based on frequent monitoring of
objective measures of IT system risk.
1.2 Intended Audience
This reference architecture is intended for use by managers and security administrators of federal
IT systems. It is intended as a guide to developing individual agency programs and processes that
support continuous monitoring of IT assets for compliance with baseline configuration standards
for patches, versions, configuration settings, and other conditions that affect the security risk
posture of a device, system, or enterprise.





1.3 References
National Institute of Standards and Technology (NIST) Special Publication (SP) 800-37,
Revision 1, Guide for Applying the Risk Management Framework to Federal Information
Systems, February 2010.
NIST SP 800-39, DRAFT Managing Risk from Information Systems: An Organizational
Perspective, April 2008.
NIST SP 800-53, Revision 3, Recommended Security Controls for Federal Information
Systems and Organizations, August 2009.
NIST SP 800-64, Revision 2, Security Considerations in the System Development Life Cycle,
October 2008.
NIST SP 800-126, Revision 1 (Second Public Draft), The Technical Specification for the
Security Content Automation Protocol (SCAP): SCAP Version 1.1, May 2010.
NIST, Frequently Asked Questions, Continuous Monitoring, June 1, 2010.

OMB Memorandum M-07-18, Ensuring New Acquisitions Include Common Security
Configuration, June 1, 2007.
OMB Memorandum M-08-22, Guidance on the Federal Desktop Core Configuration
(FDCC), August 11, 2008.
OMB Chief Information Officer Memorandum, Subject: Security Testing for Agency
Systems, April 5, 2010.
OMB Memorandum M-10-15, FY2010 Reporting Instructions for the Federal Information
Security Management Act and Agency Privacy Management, April 21, 2010.
Department of State, Enterprise Network Management, iPost: Implementing Continuous Risk
Monitoring at the Department of State, Version 1.4, November 2009.
The MITRE Corporation, Security Risk Scoring System (iPost) Architecture Study, Version

1.1, February 2009.
The MITRE Corporation, Security Compliance Posture Monitoring and Reporting
(SCPMaR) System: The Internal Revenue Service Solution Concept and Architecture for
Continuous Risk Monitoring, Version 1.0, February 1, 2009.
1.4 Review of FISMA Controls and Continuous Monitoring
Under the Federal Information Security Management Act of 2002 (FISMA), NIST is
responsible for developing information security standards and guidelines, including minimum
requirements for federal information systems. NIST SP 800-37, Revision 1, Guide for Applying
the Risk Management Framework to Federal Information Systems, establishes a Risk
Management Framework (RMF) that promotes the concept of near-real-time risk management
through the implementation of robust continuous monitoring processes. The RMF encourages the
use of automation and automated support tools to provide senior leaders with the necessary
information to make credible, risk-based decisions with regard to the organizational information
systems supporting their core missions and business functions.




Commercially available automated tools, such as those described in the NIST RMF, support
situational awareness—what NIST refers to as "maintaining awareness of the security state of
information systems on an ongoing basis through enhanced monitoring processes"—of the state
of the security of IT networks and systems. Tools are available to monitor and assess the
information security risk from numerous factors such as missing patches, known vulnerabilities,
lack of compliance with approved configurations, or violations of security control policies. Many
if not all of these tools can provide current security status to network operations centers and
security operations centers.

Generally lacking are tools and processes that provide information in a form and at a
level of detail that support prioritized remediation actions and that recognize improvements
commensurate with the timely, targeted reduction in risk. As a result, system security assessment
and authorization is usually based on infrequently conducted system vulnerability scans that test
security controls at the time of initial assessment but do not reflect the real state of system risk
between security control test cycles.
Faced with a broad range of residual risks, security managers and system administrators have no
reliable, objective way to prioritize actions to address these risks. Remedial actions are often
embodied in Plans of Action and Milestones (POA&M), but assigning resources to take action
is not based on rational assessment of the benefits of actually reducing risk, because the potential
risk reduction is not measurable in a consistent way.
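When risk reduction is measured consistently, prioritization becomes a straightforward ranking. The candidate actions, score reductions, and effort estimates below are invented examples; a POA&M-driven process could rank its open items the same way:

```python
# Invented remediation candidates with measured risk reduction and effort.
actions = [
    {"name": "apply missing OS patches", "risk_reduction": 12.0, "effort_hours": 2.0},
    {"name": "fix FDCC setting drift",   "risk_reduction": 4.0,  "effort_hours": 0.5},
    {"name": "upgrade legacy server",    "risk_reduction": 20.0, "effort_hours": 40.0},
]

def prioritize(actions):
    """Rank actions by risk reduction per hour of effort, highest first."""
    return sorted(actions,
                  key=lambda a: a["risk_reduction"] / a["effort_hours"],
                  reverse=True)

for a in prioritize(actions):
    print(a["name"])
# -> fix FDCC setting drift
#    apply missing OS patches
#    upgrade legacy server
```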
The CAESARS reference architecture and the information security governance processes that it
supports differ from those available to most federal agencies in key respects.
respects. Many agencies have automated tools to monitor and assess information security risk
from factors like missing patches, vulnerabilities, variance from approved configurations, or
violations of security control policies. Some have automated tools for remediating
vulnerabilities, either automatically or through some user action. These tools can provide current
security status to network operations centers and security operations centers, but they typically
do not support prioritized remediation actions and do not provide direct incentive for
improvements in risk posture.
What makes CAESARS different is its integrated approach and end-to-end process for:
Assessing the actual state of each information technology (IT) asset under management
Determining the gaps between the current state and accepted security baselines
Expressing in clear, quantitative measures the relative risk of each gap or deviation
Providing simple letter grades that reflect the aggregate risk of every site and system
Ensuring that the responsibility for every system and site is correctly assigned
Providing targeted information for security and system managers to take the actions to
make the most critical changes needed to reduce risk and improve their grades
Making these assessments on a continuous or nearly continuous basis is a prerequisite for
moving IT security management from isolated assessments, supporting infrequent authorization
decisions, to continuous risk management as described in current NIST guidance and OMB
mandates.
The CAESARS approach provides a means of monitoring the security controls in place and
focusing staff efforts on those most likely to enhance the agency's information security posture.




The system consolidates and scores data from multiple network and computer security
monitoring applications into a single point and presents the data in an easy-to-comprehend
manner. The system allows a highly distributed workforce with varying levels of skill and
authority to recognize security issues within their scope of control. Once recognized, the staff
can then focus their efforts on the actions that remediate the highest-risk vulnerabilities.
1.5 CAESARS Reference Architecture Concept of Operations
1.5.1 Definition
System life cycle is defined by NIST SP 800-64, Rev. 2 as Initiation, Development/Acquisition,
Implementation/Assessment, and Operations & Maintenance. (Note: Activities in the Disposal
Phase are not a part of continuous monitoring activities.)
2

Risk management framework (RMF) is defined by NIST SP 800-37, Rev. 1 as Categorize
Information System, Select Security Controls, Implement Security Controls, Assess Security
Controls, Authorize Information System, and Monitor Security Controls.
3

Figure 1 illustrates the security activities in a system life cycle using the NIST-defined RMF.
Figure 1. Continuous Monitoring of a System's Security Posture in the NIST-Defined System Life Cycle and Risk Management Framework

[Figure: maps the NIST SP 800-64, Rev. 2 security activities onto the SDLC phases (Initiation; Development/Acquisition; Implementation/Assessment; Operations & Maintenance) and the six NIST SP 800-37, Rev. 1 RMF steps (1 Categorize, 2 Select, 3 Implement, 4 Assess, 5 Authorize, 6 Monitor). Annotations include: preliminary risk assessment and FIPS 199 security categorization; selection and implementation of security configuration benchmarks; verification of designed security controls; security assessment to validate implemented controls and record residual risks; Approving Authorities review, negotiate, and approve deviations; ISSOs and the Security PMO track approved deviations and monitor risks; implemented security controls are monitored, reported, and managed to maintain security posture.]

1.5.2 Operating Principles
The Risk Scoring program is intended as an agency-wide program, directed by the Senior
Agency Information Security Officer in support of the agency's mission. It is applicable to
both centralized and decentralized agencies and IT infrastructures, and its application is
especially valuable in an organization where the control of IT assets is widely distributed and not
under the direct control of any one organization.
objectives:
Measure risk in multiple areas
Motivate administrators to reduce risk

2
NIST SP 800-64, Rev. 2, Security Considerations in the System Development Life Cycle, October 2008.
3
NIST SP 800-37, Rev. 1, Guide for Applying the Risk Management Framework to Federal Information Systems,
February 2010.




Motivate management to support risk reduction
Measure improvement
Inspire competition
Provide a single score for each host
Provide a single score for each site

Provide a single score for the enterprise
Be scalable to additional risk areas that can permit automated continuous monitoring
Score a given risk only once
Be demonstrably fair
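The objectives of one score per host, per site, and per enterprise, with each risk scored only once, amount to a deduplicated roll-up. The following sketch illustrates the idea only; the field names, point values, and data shapes are invented and do not come from this report:

```python
# Roll per-host findings up into host, site, and enterprise scores.
# Each risk is counted once per host ("score a given risk only once"),
# even if multiple sensors report it. All names and weights are illustrative.
findings = [
    {"host": "ws-001", "site": "HQ", "risk_id": "CVE-2010-0001", "points": 10},
    {"host": "ws-001", "site": "HQ", "risk_id": "CVE-2010-0001", "points": 10},  # duplicate report
    {"host": "ws-002", "site": "HQ", "risk_id": "missing-patch", "points": 3},
    {"host": "srv-01", "site": "Field", "risk_id": "weak-config", "points": 5},
]

def host_scores(findings):
    """Sum points per host, skipping duplicate (host, risk) pairs."""
    seen, scores = set(), {}
    for f in findings:
        key = (f["host"], f["risk_id"])
        if key in seen:  # deduplicate: score a given risk only once per host
            continue
        seen.add(key)
        scores[f["host"]] = scores.get(f["host"], 0) + f["points"]
    return scores

def rollup(findings, level):
    """Aggregate deduplicated host scores by site, or overall for the enterprise."""
    per_host = host_scores(findings)
    site_of = {f["host"]: f["site"] for f in findings}
    if level == "enterprise":
        return sum(per_host.values())
    sites = {}
    for host, score in per_host.items():
        sites[site_of[host]] = sites.get(site_of[host], 0) + score
    return sites

print(host_scores(findings))           # {'ws-001': 10, 'ws-002': 3, 'srv-01': 5}
print(rollup(findings, "site"))        # {'HQ': 13, 'Field': 5}
print(rollup(findings, "enterprise"))  # 18
```

Because the duplicate finding is dropped before aggregation, the same risk cannot inflate the host, site, or enterprise score, which also supports the "be demonstrably fair" objective.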
The Risk Scoring program at DOS evolved in three separate stages:
Deployment of enterprise management tools to monitor weaknesses
Delivery of operational monitoring data to the field in an integrated application
Establishment of a risk scoring program that fairly measures and assigns risk
The reference architecture was developed to fit the multiple organizational structures, network
infrastructures, geographic distribution, and existing tools available to support these goals. It will
provide local administrators with a single interface for direct access to the monitoring data for
objects within their scope of responsibility; it will also provide a single source for enterprise-
level management reporting across a variety of monitoring data.
A generic CAESARS reference architecture for other federal agencies would support
development and operation of similar systems for multiple agencies that fit their enterprise
architectures and do not constrain their choice of tools. CAESARS is intended for use by
multiple agencies to support their individual adoption of comparable processes and the tools to
support those processes. A modular architecture also supports the sharing of subsets of the
solution (such as a government-wide risk scoring algorithm) while allowing agencies to use the
algorithm in the manner that best fits their specific size, structure, and complexity.
Depending on the agency needs, resources, and governance models, CAESARS could:
Enable agencies to see how their existing tools and processes could be adapted to such a
framework
Enable agencies to identify gaps in their own set of solution tools, based on the
architectural constructs developed
Support a "build or buy" decision for acquiring continuous risk scoring tools and services
Provide standardized approaches to statements of work (SOW) or performance work
statements (PWS) to acquire the needed services and products
Establish an accepted framework in which more detailed technical and functional
requirements can be developed

1.5.3 Relationship of CAESARS to CyberScope
Current plans for summarizing federal agency progress and programs for continuous monitoring
of IT security are detailed in OMB M-10-15, FY2010 Reporting Instructions for the Federal
Information Security Management Act and Agency Privacy Management, dated April 21, 2010.
The OMB memo includes the following:
For FY 2010, FISMA reporting for agencies through CyberScope, due November 15,
2010, will follow a three-tiered approach:
1. Data feeds directly from security management tools
2. Government-wide benchmarking on security posture
3. Agency-specific interviews
Further guidance on item 1, direct data feeds, says,
Agencies should not build separate systems for reporting. Any reporting should be a by-
product of agencies' continuous monitoring programs and security management tools. …
Beginning January 1, 2011, agencies will be required to report on this new information
monthly.
And it provides additional details:
The new data feeds will include summary information, not detailed information, in the
following areas for CIOs:
Inventory
Systems and Services
Hardware
Software
External Connections
Security Training
Identity Management and Access
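A feed that is a by-product of existing tools would carry counts rather than asset-level detail. The sketch below assembles such a summary document; the structure, field names, and values are invented for illustration and are not the actual CyberScope schema:

```python
import json

# Build a monthly summary feed from hypothetical tool output.
# Counts only -- "summary information, not detailed information."
asset_inventory = [
    {"type": "hardware", "os": "Windows"},
    {"type": "hardware", "os": "Linux"},
    {"type": "software", "name": "AV agent"},
]

summary_feed = {
    "reporting_period": "2011-01",
    "inventory": {
        "hardware": sum(1 for a in asset_inventory if a["type"] == "hardware"),
        "software": sum(1 for a in asset_inventory if a["type"] == "software"),
    },
    "external_connections": 4,                       # placeholder value
    "security_training": {"staff_trained_pct": 92},  # placeholder value
}

print(json.dumps(summary_feed, indent=2))
```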

The types of information that OMB requires to be reported through CyberScope are broader in
scope than the status of individual assets, which are the focus of the CAESARS reference
architecture. Nevertheless, the CAESARS reference architecture can directly support the
achievement of some of the OMB objectives by ensuring that the inventory, configuration, and
vulnerabilities of systems, services, hardware, and software are consistent, accurate, and
complete. A fundamental underpinning of both the CAESARS reference architecture and the
OMB reporting objectives is full situational awareness of all agency IT assets.
1.5.4 Cautionary Note – What Risk Scoring Can and Cannot Do
Risk scoring can complement and enhance the effectiveness of security controls that are
susceptible to automated monitoring and reporting, comparing asset configurations with
expected results from an approved security baseline. It can provide direct visualization of the
effect of various scored risk elements on the overall posture of a site, system, or organization.
Risk scoring is not a substitute for other essential operational and management controls, such as
incident response, contingency planning, and personnel security. It cannot determine which IT
systems have the most impact on agency operations, nor can it determine how various kinds of
security failures – loss of confidentiality, integrity, and availability – will affect the functions and
mission of the organization. In other words, risk scoring cannot score risks about which it has no
information. However, when used in conjunction with other sources of information, such as the
FIPS-199 security categorization and automated asset data repository and configuration
management tools, risk scoring can be an important contributor to an overall risk management
strategy. Such strategies will be considered in future versions of CAESARS.
1.5.5 CAESARS and Risk Management
It cannot be overemphasized that, while CAESARS can serve a necessary and valuable function
within an organization's risk management function, the Reference Architecture is not, and never
can be, a full replacement for that function. CAESARS directly addresses many of the major
aspects of risk management, but there are other aspects that CAESARS does not, and never will,
consider. Three such areas merit particular mention.
First, in modern terms, risk management is an interplay between threat and criticality. Both
threat and criticality can often be qualitatively assessed as low, moderate, or high. Criticality is
normally addressed in the system's FIPS-199 Security Categorization. Threats can be similarly
assessed as to their severity, and the two must be combined to determine overall risk. A high-
severity threat on a low-criticality system and a low-severity threat on a high-criticality system
may both pose the same overall risk, which itself may then range from low to high.
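The combination of threat severity and criticality described above is often expressed as a lookup matrix. The matrix values below are illustrative only; the report does not prescribe an exact combination rule:

```python
# Hypothetical threat-severity x criticality matrix; illustrative only.
RISK_MATRIX = {
    ("low", "low"): "low",
    ("low", "moderate"): "low",
    ("low", "high"): "moderate",
    ("moderate", "low"): "low",
    ("moderate", "moderate"): "moderate",
    ("moderate", "high"): "high",
    ("high", "low"): "moderate",
    ("high", "moderate"): "high",
    ("high", "high"): "high",
}

def overall_risk(threat_severity: str, criticality: str) -> str:
    """Combine a threat severity with a FIPS-199 criticality level."""
    return RISK_MATRIX[(threat_severity.lower(), criticality.lower())]

# A high-severity threat on a low-criticality system and a low-severity
# threat on a high-criticality system can land at the same overall level:
print(overall_risk("high", "low"))  # moderate
print(overall_risk("low", "high"))  # moderate
```

The point of the example is the one CAESARS cannot make on its own: without the criticality column, only the threat-severity row is known.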
CAESARS operates primarily in two parallel processes: (i) assessing system configurations
(hardware, software, network, etc.) against pre-defined standards, and (ii) detecting the presence
of known vulnerabilities. The CAESARS architecture is limited to considerations of the threat
that is posed by deviations from standards or by the presence of vulnerabilities. CAESARS does
not have access to any information concerning the criticality of a system or its assets. Therefore,
all discussion of risk in CAESARS must be interpreted in these terms: threats detected and
reported by CAESARS can be assessed for their severity, and may pose overall risk ranging from
low up to the highest that the subject system could ever face: the failure of its mission and/or the
compromise/destruction of its most critical assets. But, lacking knowledge of specific system
criticalities, CAESARS itself can go no further.
Second, CAESARS is not, in itself, a full risk management system. CAESARS cannot, for
instance, evaluate security controls in the NIST SP 800-53 Planning or Management families,
such as those for capital planning or risk assessment. It cannot create, evaluate, or manage
security policies. CAESARS may have interaction with other automated security tools designed
to address Operational or Technical controls, such as those for configuration management,
auditing, or intrusion detection, but it does not itself replace those tools.
Finally, CAESARS cannot itself replace a risk management organization. CAESARS can report
its findings to the system's owners and security administrators and to identified management
officials. But CAESARS cannot replace the organizational functions of prioritization or tasking
that are needed for remediation. Moreover, if an organization is federated (i.e., if it consists of
enclaves that function with relatively independent asset sensitivity, ownership and security
policies), then each enclave might have its own individual version of CAESARS. Integrating the
CAESARS results from the various enclaves into a single picture that is useful to the full
enterprise would require policy decisions and procedures that are beyond CAESARS' intended
scope.
1.5.6 Risk Management Process
The operating concept for CAESARS is based on the NIST-defined RMF [4] and US-CERT Software
Assurance (SwA) guidelines [5]. As illustrated in Figure 1, the RMF is composed of a six-step process
conducted throughout a system life cycle:
Step 1: Categorize Information System
To determine the information protection needs, a project security engineer working as part of a
system engineering team shall work with the information system security officer (ISSO) to:
Perform Federal Information Processing Standard (FIPS) 199 security categorization
Identify potential information security flaws or weaknesses in an information system (i.e.,
potential vulnerabilities)
Identify the potential threat and likelihood of a threat source so that the vulnerability can
be addressed
Step 2: Select Security Controls
Based on the configurations of a designed information system, the security engineer shall work
with the system owner and the ISSO to:
Select the security configuration benchmarks based on the security category and
computing platforms in order to formulate the initial system security configuration
baseline

Step 3: Implement Security Controls
Once the security configuration baseline has been established,
The system administrator shall implement the security configuration settings according to
the established security configuration baseline
Step 4: Assess Security Controls
After the security configuration baseline has been implemented in a "system-under-development" [6]:
The CAESARS system shall perform assessments based on the approved baseline
A security control assessor or ISSO shall report the level of compliance and identify
deviations from the approved security configuration baseline
The ISSO shall review the security configuration assessment reports and determine
corrective action or recommendations for residual risks
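At its core, the Step 4 assessment is a comparison of observed settings against the approved baseline, with deviations reported for ISSO review. A minimal sketch follows; the setting names and values are hypothetical and are not drawn from any real benchmark:

```python
# Compare observed configuration settings against an approved baseline
# and report deviations. Setting names/values are illustrative placeholders.
approved_baseline = {
    "password_min_length": 12,
    "screen_lock_timeout_min": 15,
    "guest_account_enabled": False,
}

observed = {
    "password_min_length": 8,
    "screen_lock_timeout_min": 15,
    "guest_account_enabled": True,
}

def assess(baseline: dict, actual: dict) -> list:
    """Return (setting, expected, observed) for each deviation from baseline."""
    return [
        (key, expected, actual.get(key))
        for key, expected in baseline.items()
        if actual.get(key) != expected
    ]

deviations = assess(approved_baseline, observed)
compliance = 1 - len(deviations) / len(approved_baseline)
for setting, expected, actual_value in deviations:
    print(f"DEVIATION: {setting}: expected {expected!r}, found {actual_value!r}")
print(f"Compliance: {compliance:.0%}")
```

In CAESARS terms, the automated comparison is the system's job; deciding on corrective action for the reported deviations remains the ISSO's.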
Step 5: Authorize Information System

4 NIST SP 800-37, Rev. 1, Guide for Applying the Risk Management Framework to Federal Information Systems: A
System Life Cycle Approach, February 2010.
5 DHS/National Cyber Security Division (NCSD)/US-CERT SwA Processes and Practices Working Group.
6 A system-under-development is a system that has not been formally authorized by an Authorizing Official (AO).
In this step, an agency Authorizing Official (AO) reviews the security configuration assessment
report (prepared by the security analyst or ISSO) and:
Formally approves the new security configuration baseline with a risk-based decision
(RBD)
Step 6: Monitor Security Controls
Once the "system-under-development" has formally transitioned into a "system-in-operation,"
the CAESARS system shall perform automated assessments periodically to maintain the baseline
security posture. However, existing processes for doing this must still be followed:
If a software patch is required, the formally approved security configuration baseline
must be updated through a change control process.
If a software upgrade or configuration change is significant, then the ISSO must re-
baseline the new system configuration by initiating Step 2 in the RMF process.
The CAESARS reference architecture is intended to be tailored to fit within this six-step
framework, the requirements of NIST SP 800-53, the agency's information security program
plan, and the agency's enterprise security architecture. It can help implement the use of common
controls, as it functions as a common control for risk assessment and configuration management
across the scope of the enterprise that it covers. Future versions of CAESARS will address even
more comprehensive integration of the RMF and SwA with CAESARS operations.
1.6 The CAESARS Subsystems
The remainder of this document describes a modularized architecture that contains the required
components of an enterprise security risk scoring system modeled on current agency
implementations. The recommended architectural approach consists of four architectural
subsystems, functions and services within those subsystems, and expected interactions between
subsystems.
The four subsystems of CAESARS, as depicted in Figure 2 below, are:
Sensor Subsystem
Database/Repository Subsystem
Analysis/Risk Scoring Subsystem
Presentation and Reporting Subsystem
This modular division of the subsystems offers a number of advantages. Chief among these is
that the internal implementation of one subsystem is functionally independent of that of any

other subsystem. The technology of any one subsystem can be procured or developed, altered or
even replaced independently of another subsystem. The technology within a subsystem could be
replicated to provide failure backup, with the two implementations using differing technologies
to provide technical diversity. Software maintenance efforts are also independent across
subsystems.
The CAESARS Database/Repository Subsystem, for example, could include a commercial off-
the-shelf (COTS) database based on MS SQL or Oracle (or both, side by side), or it could
include an internally developed product. This independence also applies to the various steps
within the procurement process, such as requirements development (for COTS products) or
creation and execution of SOWs and PWSs (for system development).
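The subsystem independence described above is what an explicit interface boundary buys: consumers depend on the interface, never on a concrete product. A brief sketch, with invented names (CAESARS does not prescribe these interfaces):

```python
from abc import ABC, abstractmethod

class RepositorySubsystem(ABC):
    """Interface the Analysis/Risk Scoring Subsystem depends on --
    never a concrete product (MS SQL, Oracle, or internally developed)."""

    @abstractmethod
    def query_assets(self, site: str) -> list:
        ...

class SqlRepository(RepositorySubsystem):
    def query_assets(self, site):
        # Stand-in for a query against an MS SQL-backed repository.
        return [{"host": "ws-001", "site": site}]

class OracleRepository(RepositorySubsystem):
    def query_assets(self, site):
        # Stand-in for a query against an Oracle-backed repository.
        return [{"host": "ws-001", "site": site}]

def score_site(repo: RepositorySubsystem, site: str) -> int:
    # The scoring code is unchanged whichever repository implementation backs it.
    return len(repo.query_assets(site))

assert score_site(SqlRepository(), "HQ") == score_site(OracleRepository(), "HQ")
```

Either implementation (or both, side by side) can be procured, replaced, or maintained without touching the analysis code, which is the modularity property the architecture relies on.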
The modular architecture allows for multiple adjacent subsystems. A single CAESARS
Database/Repository Subsystem could interface with multiple CAESARS Analysis/Risk Scoring
Subsystems (e.g., one a COTS analysis product and another internally developed) and even offer
differing services to each. This feature would allow, for example, a single CAESARS
Database/Repository Subsystem to serve Analysis/Risk Scoring Subsystems at both the local
(site or region) level and the enterprise-wide level.
Similarly, a single CAESARS Analysis/Risk Scoring Subsystem could provide data to multiple
CAESARS Presentation and Reporting Subsystem components.
1.7 Document Structure: The Architecture of CAESARS
In the remaining sections of this paper, the four subsystems of CAESARS are described in detail.
In those cases where well-defined products, particularly COTS products, have been used, these
products are identified and placed into their appropriate subsystem and context [7]. Their interfaces
are described as fully as is practical for this paper. Future work may include identifying
alternative products that might also be used within each subsystem and context.

7 The use of trade names and references to specific products in this document do not constitute an endorsement of
those products. Omission of other products does not imply that they are either inferior or superior to products
mentioned herein for any particular purpose.

Figure 2. Contextual Description of the CAESARS System

1.7.1 CAESARS Sensor Subsystem
Section 2 of this document describes the Sensor Subsystem. The Sensor Subsystem includes the
totality of the IT assets that are the object of CAESARS' monitoring activities. It includes all
platforms upon which CAESARS is expected to report, including end-user devices, database
servers, network servers, and security appliances. The Sensor Subsystem does not include
platforms for which federal agencies have no administrative responsibility or control and which
CAESARS is not expected to monitor and report on. For example, it could include federal
contractor IT systems but it would not include the public Internet.
CAESARS may also play a role in cases where federal agencies contract with third-party
providers for data processing and/or storage services (e.g., "cloud computing" services). It may
be possible to require, in the contract's Service Level Agreement (SLA), that the provider's
compliance data be made available to real-time federal monitoring, analysis, and reporting (at
least at a summary level, if not down to the sensors themselves). This possibility will be
examined in future versions of CAESARS.
A primary design goal of CAESARS is to minimize the need for client platforms themselves to
contain or require any specific executable components of CAESARS. The data to be gathered
and reported to CAESARS is collected by systems that are already in place on the client
platforms or that will be provided by the enterprise. The platforms of the Sensor Subsystem are
assumed to have already installed the tools that will gather the configuration and vulnerability
data that will be reported to CAESARS. For example, those platforms that run the Microsoft
Windows operating system are assumed to have already in place the native Windows security
auditing system, and the server platforms are assumed to have their own similar tools already in
place. Enterprise tools such as Active Directory likewise have their own native auditing
mechanisms. Similarly, client platforms may already have installed such tools as anti-virus, anti-
spam, and anti-malware controls, either as enterprise-wide policy or through local (region- or
site-specific) selection.
CAESARS, per se, does not supply these data collection tools, nor does it require or prohibit any
specific tools. The data that they collect, however, must be transferred from the client platforms
to the CAESARS Database/Repository Subsystem on an ongoing, periodic basis. The tools for
transferring this data are specified in the CAESARS Sensor-to-Database protocol. The transfer
can follow either a "push" or "pull" process, or both, depending upon enterprise policy and local
considerations.
In the data push process, the scheduling and execution of the transfer is controlled by the local
organization, possibly by the platform itself. This allows maximum flexibility at the local level
and minimizes the possibility of interference with ongoing operations. But the push process is
also more likely to require that specific CAESARS functionalities be present on client platforms.
CAESARS components required for data push operations are part of the CAESARS Sensor-to-
Database protocol, and are described in Section 2 as part of the Sensor Subsystem.
In the data pull process, the scheduling and execution of the data transfer is controlled by the
CAESARS Database/Repository Subsystem. CAESARS interrogates the platforms on an
ongoing, periodic basis, and stores the resulting data in the CAESARS Database/Repository. The

pull paradigm minimizes the need for any CAESARS components on individual client platforms,
but also provides less scheduling flexibility at the local level and may also interfere with existing
operations. The pull paradigm may also involve directly integrating the CAESARS
Database/Repository Subsystem with numerous and disparate individual platform sensors,
negating the benefits of subsystem modularity and independence. CAESARS components
required for data pull operations are part of the CAESARS Sensor-to-Database protocol, and are
described in Section 3 as part of the CAESARS Database/Repository Subsystem.
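The push and pull patterns described above can be contrasted in a few lines. This sketch invents a trivial transfer interface for illustration; the actual CAESARS Sensor-to-Database protocol is not defined here:

```python
import time

class Repository:
    """Stand-in for the CAESARS Database/Repository Subsystem."""
    def __init__(self):
        self.records = []

    def store(self, platform_id, data):
        self.records.append((platform_id, data, time.time()))

class PushSensor:
    """Push: the platform (or local organization) decides when to send."""
    def __init__(self, platform_id, repo):
        self.platform_id, self.repo = platform_id, repo

    def collect(self):
        return {"patch_level": "2010-09"}  # placeholder sensor data

    def run_once(self):
        # Local scheduling: the client initiates the transfer.
        self.repo.store(self.platform_id, self.collect())

def pull_cycle(repo, platforms):
    """Pull: the repository interrogates platforms on its own schedule."""
    for platform_id, query in platforms.items():
        repo.store(platform_id, query())  # repository initiates the transfer

repo = Repository()
PushSensor("ws-001", repo).run_once()
pull_cycle(repo, {"srv-01": lambda: {"av_defs": "v4821"}})
print(len(repo.records))  # 2
```

Note the trade-off the text describes: the push variant puts CAESARS code (`PushSensor`) on the client, while the pull variant keeps the client passive but ties the repository's code to each platform's query mechanism.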
1.7.2 CAESARS Database/Repository Subsystem
Section 3 of this reference architecture describes the CAESARS Database/Repository
Subsystem. The CAESARS Database/Repository Subsystem includes the totality of the data
collected by the Sensor Subsystem and transferred up to CAESARS. It also includes any tools
that are required by the CAESARS Database/Repository to perform data pull operations from the
Sensor Subsystem platforms.
CAESARS does not impose any specific database design or query requirements at the
CAESARS Database/Repository Subsystem; these are the purview of the CAESARS
Analysis/Risk Scoring Subsystem. However, the CAESARS Database/Repository Subsystem

×