Secure Data Management in
Decentralized Systems
Advances in Information Security
Sushil Jajodia
Consulting Editor
Center for Secure Information Systems
George Mason University
Fairfax, VA 22030-4444
email:
The goals of the Springer International Series on ADVANCES IN INFORMATION
SECURITY are, one, to establish the state of the art of, and set the course for future research
in information security and, two, to serve as a central reference source for advanced and
timely topics in information security research and development. The scope of this series
includes all aspects of computer and network security and related areas such as fault tolerance
and software assurance.
ADVANCES IN INFORMATION SECURITY aims to publish thorough and cohesive
overviews of specific topics in information security, as well as works that are larger in scope
or that contain more detailed background information than can be accommodated in shorter
survey articles. The series also serves as a forum for topics that may not have reached a level
of maturity to warrant a comprehensive textbook treatment.
Researchers, as well as developers, are encouraged to contact Professor Sushil Jajodia with
ideas for books under this series.
Additional titles in the series:
NETWORK SECURITY POLICIES AND PROCEDURES by Douglas W. Frye; ISBN: 0-387-30937-3
DATA WAREHOUSING AND DATA MINING TECHNIQUES FOR CYBER SECURITY by Anoop Singhal; ISBN: 978-0-387-26409-7
SECURE LOCALIZATION AND TIME SYNCHRONIZATION FOR WIRELESS SENSOR AND AD HOC NETWORKS edited by Radha Poovendran, Cliff Wang, and Sumit Roy; ISBN: 0-387-32721-5
PRESERVING PRIVACY IN ON-LINE ANALYTICAL PROCESSING (OLAP) by Lingyu Wang, Sushil Jajodia and Duminda Wijesekera; ISBN: 978-0-387-46273-8
SECURITY FOR WIRELESS SENSOR NETWORKS by Donggang Liu and Peng Ning; ISBN: 978-0-387-32723-5
MALWARE DETECTION edited by Somesh Jha, Cliff Wang, Mihai Christodorescu, Dawn Song, and Douglas Maughan; ISBN: 978-0-387-32720-4
ELECTRONIC POSTAGE SYSTEMS: Technology, Security, Economics by Gerrit Bleumer; ISBN: 978-0-387-29313-2
MULTIVARIATE PUBLIC KEY CRYPTOSYSTEMS by Jintai Ding, Jason E. Gower and Dieter Schmidt; ISBN-13: 978-0-378-32229-2
UNDERSTANDING INTRUSION DETECTION THROUGH VISUALIZATION by Stefan Axelsson; ISBN-10: 0-387-27634-3
QUALITY OF PROTECTION: Security Measurements and Metrics by Dieter Gollmann, Fabio Massacci and Artsiom Yautsiukhin; ISBN-10: 0-387-29016-8
COMPUTER VIRUSES AND MALWARE by John Aycock; ISBN-10: 0-387-30236-0
Additional information about this series can be obtained from http://www.springer.com
Secure Data Management in Decentralized Systems

edited by

Ting Yu
North Carolina State University, USA

Sushil Jajodia
George Mason University, USA

Ting Yu
North Carolina State University
Dept. Computer Science
3254 EB II
Raleigh NC 27695

Sushil Jajodia
George Mason University
Center for Secure Information Systems
4400 University Drive
Fairfax VA 22030-4444

Library of Congress Control Number: 2006934665
SECURE DATA MANAGEMENT IN DECENTRALIZED SYSTEMS
edited by Ting Yu and Sushil Jajodia
ISBN-13: 978-0-387-27694-6
ISBN-10: 0-387-27694-7
e-ISBN-13: 978-0-387-27696-0
e-ISBN-10: 0-387-27696-3
Printed on acid-free paper.
© 2007 Springer Science+Business Media, LLC
All rights reserved. This work may not be translated or copied in whole or
in part without the written permission of the publisher (Springer
Science+Business Media, LLC, 233 Spring Street, New York, NY 10013,
USA), except for brief excerpts in connection with reviews or scholarly
analysis. Use in connection with any form of information storage and
retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks and
similar terms, even if they are not identified as such, is not to be taken as
an expression of opinion as to whether or not they are subject to
proprietary rights.
Printed in the United States of America.
Contents

Preface  VII

Part I  Foundation

Basic Security Concepts
Sushil Jajodia, Ting Yu  3

Access Control Policies and Languages in Open Environments
S. De Capitani di Vimercati, S. Foresti, S. Jajodia, P. Samarati  21

Trusted Recovery
Meng Yu, Peng Liu, Wanyu Zang, Sushil Jajodia  59

Part II  Access Control for Semi-structured Data

Access Control Policy Models for XML
Michiharu Kudo, Naizhen Qi  97

Optimizing Tree Pattern Queries over Secure XML Databases
Hui Wang, Divesh Srivastava, Laks V.S. Lakshmanan, SungRan Cho, Sihem Amer-Yahia  127

Part III  Distributed Trust Management

Rule-based Policy Specification
Grigoris Antoniou, Matteo Baldoni, Piero A. Bonatti, Wolfgang Nejdl, Daniel Olmedilla  169

Automated Trust Negotiation in Open Systems
Adam J. Lee, Kent E. Seamons, Marianne Winslett, Ting Yu  217

Building Trust and Security in Peer-to-Peer Systems
Terry Bearly, Vijay Kumar  259

Part IV  Privacy in Cross-Domain Information Sharing

Microdata Protection
V. Ciriani, S. De Capitani di Vimercati, S. Foresti, P. Samarati  291

k-Anonymity
V. Ciriani, S. De Capitani di Vimercati, S. Foresti, P. Samarati  323

Preserving Privacy in On-line Analytical Processing Data Cubes
Lingyu Wang, Sushil Jajodia, Duminda Wijesekera  355

Part V  Security in Emerging Data Services

Search on Encrypted Data
Hakan Hacigumus, Bijit Hore, Bala Iyer, Sharad Mehrotra  383

Rights Assessment for Relational Data
Radu Sion  427

Index  459
Preface
Database security is one of the classical topics in the research of information system
security. Ever since the early years of database management systems, a great deal of
research activity has been conducted. Fruitful results have been produced, many of
which are widely adopted in commercial and military database management systems.
In recent years, the research scope of database security has been greatly expanded due to the rapid development of the global internetworked infrastructure. Databases are no longer stand-alone systems that are only accessible to internal users of organizations. Instead, allowing selective access from different security domains has become a must for many business practices. Many of the assumptions and problems in traditional databases need to be revisited and readdressed in decentralized environments. Further, the Internet and the Web offer means for collecting and sharing data with unprecedented flexibility and convenience. New data services are emerging every day, which also bring new challenges to the protection of data security. We have witnessed many exciting research works toward identifying and addressing such new challenges. We feel it is necessary to summarize and systematically present works in these new areas to researchers.
This book presents a collection of essays covering a wide range of today's active areas closely related to database security, organized as follows. In Part I, we review classical work in database security, and report recent advances and necessary extensions. In Part II, we shift our focus to the security of the Extensible Markup Language (XML) and other new data models. The need for cross-domain resource and information sharing dramatically changes the approaches to access control. In Part III, we present active work in distributed trust management, including rule-based policies, trust negotiation, and security in peer-to-peer systems. Privacy has increasingly become a big concern to Internet users, especially when information may be collected online through all kinds of channels. In Part IV, privacy protection efforts from the database community are presented. Topics include microdata release and k-anonymity. In Part V, we include two essays on the challenges in the database-as-a-service model and on database watermarking.
The audience of this book includes graduate students and researchers in secure data management, especially in the context of the Internet. This book serves as helpful supplementary reading material for graduate-level information system security courses.
We would like to express our sincere thanks to Li Wu Chang (Naval Research Laboratory), Rohit Gupta (Iowa State University), Yingjiu Li (Singapore Management University), Gerome Miklau (University of Massachusetts, Amherst), Clifford B. Neuman (University of Southern California), Tatyana Ryutov (Information Sciences Institute), and Yuqing Wu (Indiana University) for their valuable and insightful comments on the chapters of this book.
Ting Yu
Sushil Jajodia
Part I
Foundation

Basic Security Concepts

Sushil Jajodia (Center for Secure Information Systems, George Mason University) and Ting Yu (North Carolina State University)
1 Introduction
The computer security problem is an adversary problem: there is an adversary who seeks to misuse the storage, processing, or transmittal of data to gain advantage. The misuse is classified as either unauthorized observation of data, unauthorized or improper modification of data, or denial of service. In denial of service misuse, the adversary seeks to prevent someone from using features of the computer system by monopolizing or tying up the necessary resources.
Thus, a complete solution to the computer security problem must meet the following three requirements:
- Secrecy or confidentiality: Protection of information against unauthorized disclosure
- Integrity: Prevention of unauthorized or improper modification of information
- Availability: Prevention of denial of authorized access to information or services
These three requirements arise in practically all systems. Consider a payroll
database in a corporation. It is important that salaries of individual employees are
not disclosed to arbitrary users of the database, salaries are modified by only those
individuals that are properly authorized, and paychecks are printed on time at the end
of each pay period. Similarly, in a military environment, it is important that the target
of a missile is not given to an unauthorized user, the target is not arbitrarily modified,
and the missile is launched when it is fired.
The computer security problem is solved by maintaining a separation between the users on one hand and the various data and computing resources on the other, thus frustrating misuse. This separation is achieved by decomposing the computer security problem into three subproblems: security policy, mechanism, and assurance. Thus, a complete solution consists of first defining a security policy, then choosing some mechanism to enforce the policy and, finally, assuring the soundness of both the policy and the mechanism. Not only must each subproblem be solved, it is important that each solution fit the solutions of the other two subproblems. For
example, it does not make sense to have a security policy that cannot be implemented,
nor does it make sense to have a mechanism that is easily bypassed.
2 Security Policy
The security policy elaborates on each of the three generic objectives of security (secrecy, integrity, and availability) in the context of a particular system. Thus, computer security policies are used like requirements; they are the starting point in the development of any system that has security features. The security policy of a system is the basis for the choice of its protection mechanisms and the techniques used to assure its enforcement of the security policy.
Existing security policies tend to focus only on the secrecy requirement of security. Thus, these policies deal with defining what is authorized or, more simply, arriving at a satisfactory definition of the secrecy component.
The choice of a security policy with reasonable consequences is nontrivial and a separate topic in its own right. In fact, security policies are investigated through formal mathematical models. These models have shown, among other things, that the consequences of arbitrary but relatively simple security policies are undecidable and that avoiding this undecidability is nontrivial [5, 7, 8]. To read more about the formal security models, see [3].

All security policies are stated in terms of objects and subjects. This is because, in reasoning about security policies, we must be careful about the distinction between users and the processes that act on behalf of the users. Users are human beings that are recognized by the system as users with a unique identity. This is achieved via identification and authentication mechanisms; the familiar example is a user identifier and password.
All system resources are abstractly lumped together as objects and, thus, all activities within a system can be viewed as sequences of operations on objects. In the relational database context, an object may be a relation, a tuple within a relation, or an attribute value within a tuple. More generally, anything that holds data may be an object, such as memory, directories, interprocess messages, network packets, I/O devices, or physical media.
A subject is an abstraction of the active entities that perform computation in the
system. Thus, only subjects can access or manipulate objects. In most cases, within
the system a subject is usually a process, job, or task, operating on behalf of some
user, although at a higher level of abstraction users may be viewed as subjects. A user
can have several subjects running in the system on his or her behalf at the same time,
but each subject is associated with only a single user. This requirement is important
to ensure the accountability of actions in a system.
Although the subject-object paradigm makes a clear distinction between subjects and objects (subjects are active entities, while objects are passive entities), an entity could be both a subject and an object. The only requirement is that if an entity behaves like a subject (respectively, object), it must abide by rules of the model that apply to subjects (respectively, objects).
The reason a distinction must be made between users and subjects is that while
users are trusted not to deliberately leak information (they do not require a computer
system to do so), subjects initiated by the users cannot be trusted to always abide
by the security policy. Example 1, given below in Section 2.2, illustrates just such a situation.
2.1 Identification and Authentication
Any system must be able to identify its users (identification) and confirm their identity (authentication). The system assigns each legitimate user (i.e., a user that is allowed to use the system) a unique identification (userid), which is kept in an identification (id) file. A user must present the userid at login, which is matched against the id file. The usual means of performing authentication is by associating a reusable password with each userid. (A password is reusable if the same password is used over and over for authentication.) The password files are typically protected through encryption.
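The lookup-and-verify flow described above can be sketched as follows. The text only says that password files are protected through encryption; the salted-hash storage, function names, and iteration count below are our illustrative choices, not details from the chapter.

```python
import hashlib
import hmac
import os

def make_entry(password: str) -> tuple[bytes, bytes]:
    """Create an id-file entry (salt, hash); the cleartext password is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the presented password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

# At login, the userid selects the stored (salt, hash) entry, then the
# presented password is checked against it.
salt, digest = make_entry("s3cret")
print(authenticate("s3cret", salt, digest))  # True
print(authenticate("guess", salt, digest))   # False
```

Even with hashing, the entry protects only the stored file; the next paragraphs explain why the reusable password itself remains the weak point.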
Reusable passwords are inherently risky, since users often choose weak passwords (such as their own name, their own name spelled backward, or some word that also appears in a dictionary) which can be guessed by password cracker programs. A more elaborate method of defeating reusable passwords is called spoofing. Spoofing is a term borrowed from electronic warfare, where it refers to a platform transmitting deceptive electronic signals that closely resemble those of another kind of platform. In the computer system context, spoofing is a penetration attempt made by writing a false user interface, say, for the operating system or the DBMS. The spoofing program mimics the portion of the user interface that requests user identification and password, but records the user's password for use by the writer of the spoofing program. If it is properly done, the spoof will be undetected.
One does not have to rely on password cracker or spoofing programs to capture
reusable passwords in today's networked environment. Unless special measures are
taken (which is rarely the case), computer systems exchange information over the
network in clear text (i.e., in unencrypted form). It is a trivial matter to capture the
host name, userid, and password information using either Trojan network programs
(e.g., telnet or rlogin) or network packet sniffing programs.
To overcome this vulnerability, systems have been developed that provide one-time passwords (i.e., passwords that are used only once). These systems include smart cards, randomized tokens, and challenge-response systems. Some methods of authentication are based on biometrics (e.g., fingerprint matching, retina scan, voice recognition, or keystroke patterns). However, there are problems associated with
each of these methods. For example, tamper resistance of smart cards is problem-
atic
[I]
and biometrics are not always reliable.
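The randomized-token idea can be sketched with a counter-based one-time code. This is our minimal illustration, not a scheme from the text; the truncation step follows the style of RFC 4226 (HOTP), and the shared secret is a hypothetical value.

```python
import hashlib
import hmac

def one_time_password(secret: bytes, counter: int) -> str:
    """Derive a 6-digit one-time code from a shared secret and a counter.
    Each counter value is used once, so a captured code cannot be replayed."""
    mac = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                            # dynamic truncation
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

secret = b"shared-secret"
# Token and server both advance the counter, so sniffing one code
# over the network does not help with the next login.
print(one_time_password(secret, 0))
print(one_time_password(secret, 1))
```

A network sniffer that captures one code learns nothing useful for the next login, which is exactly the property reusable passwords lack.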
One way to prevent spoofing attacks is to provide a special mechanism to counter
it. One such mechanism, called trusted path, will be discussed below. Untrusted
DBMSs and their operating systems provide no such special mechanism and are
vulnerable to spoofing and other attacks.
2.2 Access Control Policies
Access control policies define what is authorized by means of security attributes associated with each storage, processing, or communication resource of the computer system. The attributes take the form of modes of access: read, write, search, execute, own, etc., and security is defined in terms of allowable access modes.
The accesses are held by subjects. A subject has privileges, i.e., the accesses it is potentially allowed to have to the objects. The privileges of the subjects and the accesses they currently hold make up the protection or security state of the computer system.
As mentioned above, access control policies usually assume that mechanisms are
present for uniquely associating responsible human users with subjects (typically via
an identification and authentication mechanism).
Discretionary Access Control
The most familiar kind of access control is discretionary access control. With discretionary access control, users (usually the owners of the objects) at their discretion can specify to the system who can access their objects. In this type of access control, a user or any of the user's programs or processes can choose to share objects with other users.
Since discretionary access control allows users to specify the security attributes of objects, the discretionary access control attributes of an object can change during the life of the system. For example, the UNIX operating system has a discretionary access control under which a user can specify whether read, write, or execute access is to be allowed for himself or herself, for other users in the same group, and for all others. UNIX uses access mode bits to implement these controls; these bits can be set and changed for files owned by the user by means of the chmod command.
A more advanced model of discretionary access control is the access matrix model, in which access rights are conceptually represented as an access matrix (see Figure 1). The access matrix has one row for each subject and one column for each object. The (i, j)-entry of the access matrix specifies the access rights of the subject corresponding to the ith row to the object corresponding to the jth column.
Since the access matrix is often sparse, it is implemented using one of the following methods:
- Access control lists: Each object has a list of subjects that are allowed access to that object. Entries in the list also give the possible modes of access that a particular subject may have. In terms of the access matrix, each column of the access matrix is stored with the object corresponding to that column.
- Capabilities: Each subject has a list of objects that the subject is allowed to access, together with the allowed modes of access. In terms of the access matrix, each row of the access matrix is stored with the subject corresponding to that row.
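The two implementations can be sketched directly from a sparse matrix: an ACL is a column view, a capability list a row view. The subjects, objects, and modes below are hypothetical names for illustration only.

```python
# A sparse access matrix stored as a dict keyed by (subject, object).
access_matrix = {
    ("alice", "file1"): {"read", "write"},
    ("alice", "file2"): {"read"},
    ("bob",   "file2"): {"read", "write", "own"},
}

def acl(obj):
    """Column view: the access control list stored with an object."""
    return {s: modes for (s, o), modes in access_matrix.items() if o == obj}

def capabilities(subj):
    """Row view: the capability list stored with a subject."""
    return {o: modes for (s, o), modes in access_matrix.items() if s == subj}

print(acl("file2"))          # who may touch file2, and how
print(capabilities("alice")) # everything alice may touch
```

Both views carry the same information; the choice affects only where the authorization data lives and how cheaply it can be revoked or enumerated.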
Fig. 1. Example of an access matrix
The Trojan Horse Problem
The essential purpose of discretionary access control is to prevent human users from directly accessing other users' objects. It cannot prevent indirect access via malicious software acting on behalf of the user. A user can have indirect access to another user's objects via a malicious program called a Trojan horse. A Trojan horse is a malicious computer program that performs some apparently useful function, but contains additional hidden functions that surreptitiously leak information by exploiting the legitimate authorizations of the invoking process.
Inability to defend against security violations due to Trojan horses is a fundamental problem for discretionary access control. An effective Trojan horse has no obvious effect on the program's expected output and, therefore, its damage may never be detected. A simple Trojan horse in a text editor might discreetly make a copy of all files that the user asks to edit, and store the copies in a location where the penetrator, the person who wrote or implanted the program, can later access them. As long as the unsuspecting user can voluntarily and legitimately give away the file, there is no way the system is able to tell the difference between a Trojan horse and a legitimate program. Any file accessible to the user via the text editor is accessible to the Trojan horse, since a program executing on behalf of a user inherits the same unique ID, privileges, and access rights as the user. The Trojan horse, therefore, does its dirty work without violating any of the security rules of the system. A clever Trojan horse might even be programmed to delete itself if the user tries to do something that might reveal its presence. Thus, a Trojan horse can copy a file, set up access modes so that the information can be read directly from the files at a later time, or delete, modify, or damage information.
To understand how a Trojan horse can leak information to unauthorized users in spite of the discretionary access control, we give a specific example.
Example 1 [Trojan Horse] Suppose that user A creates file f1 and writes some information in it. Suppose now user B creates file f2. As the owner of the file, user B is allowed to execute any operation on the file f2. In particular, B can grant other users authorizations on the file. Suppose then that B grants A the write privilege on file f2, and converts a program P, which performs some utility function, into a Trojan horse by embedding a hidden piece of code composed of a read operation on file f1 and a write operation on file f2.
Suppose now that A invokes program P. The process which executes program P runs with the privileges of the calling user A and, therefore, all access requests made in P are checked against the authorizations of user A. Consider, in particular, the execution of the hidden code. First the read operation on file f1 is requested. Since P is executing on behalf of A, who is the owner of the file, the operation is granted. Next, the write operation on file f2 is requested. Since A has been given the write privilege on f2, the write operation is also granted. As a consequence, during execution of program P, contents have been read from file f1, on which user B does not have read authorization, and written into file f2, on which user B does have read authorization (see Figure 2). In this way, an illegal transmission of information to unauthorized user B has occurred in spite of the discretionary access control.
Fig. 2. Example of a Trojan horse
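Example 1 can be replayed with a toy reference monitor. The authorization table, file names, and function names below are our illustrative choices; the point is that every individual check succeeds, yet the information still leaks.

```python
# Discretionary authorizations, mirroring Example 1:
# A owns f1; B owns f2 and has granted A write on f2.
auth = {
    ("A", "f1"): {"read", "write", "own"},
    ("B", "f2"): {"read", "write", "own"},
    ("A", "f2"): {"write"},
}
files = {"f1": "A's secret", "f2": ""}

def read(user, f):
    assert "read" in auth.get((user, f), set()), "access denied"
    return files[f]

def write(user, f, data):
    assert "write" in auth.get((user, f), set()), "access denied"
    files[f] = data

def trojan_P(invoker):
    """P's hidden code: runs with the invoker's privileges, copies f1 into f2."""
    write(invoker, "f2", read(invoker, "f1"))

trojan_P("A")            # each access check passes against A's authorizations...
print(read("B", "f2"))   # ...and B now reads f1's contents: "A's secret"
```

Note that the monitor never misbehaves: every operation is individually authorized, which is precisely why discretionary control alone cannot stop the leak.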
This simple example illustrates how easily the restrictions stated by discretionary authorizations can be bypassed and, therefore, the lack of assurance on the satisfaction of authorizations imposed by the discretionary policy. To overcome this weakness, further restrictions, besides the simple presence of the authorization for the required operations, must be imposed on discretionary access control to provide assurance that accesses which are not authorized will not be executable indirectly (by employing a Trojan horse).
Rather than severely restricting user capabilities, as required by these solutions, another approach is to use mandatory access control rather than discretionary access control. We define mandatory access control next.
Mandatory Access Control
Mandatory access control contains security restrictions that are applied to all users. Under mandatory access control, subjects and objects have fixed security attributes that are used by the system to determine whether a subject can access an object or not. The mandatory security attributes are assigned either administratively or automatically by the operating system. These attributes cannot be modified by either the users or their programs on request.
To explain mandatory access control fully, we need to introduce lattice-based
policies since almost all models of mandatory security use lattice-based policies.
Lattice-based policies. Lattice-based policies partition all the objects in the system into a finite set SC of security classes. The set SC is partially ordered with order ≤. That is, for any security classes A, B, and C in SC, the following three properties hold:
(1) Reflexive: A ≤ A
(2) Antisymmetric: If A ≤ B and B ≤ A, then A = B
(3) Transitive: If A ≤ B and B ≤ C, then A ≤ C
The partial order ≤ defines the domination relationship between security classes. Given two security classes L1 and L2, L1 is said to be dominated by L2 (and, equivalently, L2 dominates L1) if L1 ≤ L2. We use < to denote strict domination. Thus, L1 is strictly dominated by L2 (and, equivalently, L2 strictly dominates L1), written L1 < L2, if L1 ≤ L2 but L1 ≠ L2. Since ≤ is a partial order, it is possible to have two security classes L1 and L2 such that neither L1 dominates L2 nor L2 dominates L1, in which case L1 and L2 are said to be incomparable.
Throughout this chapter, we use the terms High and Low to refer to two security classes such that they are comparable and the latter is strictly dominated by the former. Also, we sometimes use the notation L2 ≥ L1 as another way of expressing L1 ≤ L2.
SC with partial order ≤ is in fact a lattice (hence the term lattice-based policy). In a lattice, every pair of elements possesses a unique least upper bound and a unique greatest lower bound. That is, for any pair of security classes A and B in SC,
(1) There is a unique security class C in SC such that
    (a) A ≤ C and B ≤ C, and
    (b) if A ≤ D and B ≤ D, then C ≤ D for all D in SC.
    Security class C is the least upper bound of A and B.
(2) There is a unique security class E in SC such that
    (a) E ≤ A and E ≤ B, and
    (b) if D ≤ A and D ≤ B, then D ≤ E for all D in SC.
    Security class E is the greatest lower bound of A and B.
Since SC is a lattice, there is a lowest security class, denoted by system-low, and a highest security class, denoted by system-high. Thus, by definition, system-low ≤ A and A ≤ system-high for all A in SC.
Explicit and Implicit Information Flows. In a lattice-based model, the goal is to regulate the flow of information among objects. As a simple example of information flow, consider the program fragment given below:

    if (A = 1) D := B else D := C;
Obviously, there is an explicit information flow from B and C into D, since the information in B and C is transferred into D. However, there is another, implicit information flow from A into D, because the future value of D depends on the value of A. Access control only determines that the program has read access to A, B, and C, while it has write access to D. Thus, access control is only capable of detecting explicit information flows; it can easily permit implicit information flows.
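The distinction can be made concrete with a small label-tracking sketch of the fragment above. The two-point lattice Low ≤ High, the pairing of values with labels, and the function names are our illustrative encoding, not a mechanism from the text.

```python
# A two-point lattice: Low (0) is dominated by High (1).
LOW, HIGH = 0, 1

def lub(*labels):
    """Least upper bound of labels in the two-point lattice."""
    return max(labels)

def branch(A, B, C):
    """Evaluate `if (A = 1) D := B else D := C` on (value, label) pairs.
    D's label must cover the assigned value (explicit flow) AND the
    branch condition A (implicit flow), which access control misses."""
    a_val, a_lab = A
    b_val, b_lab = B
    c_val, c_lab = C
    d_val = b_val if a_val == 1 else c_val
    d_lab = lub(a_lab, b_lab if a_val == 1 else c_lab)
    return d_val, d_lab

# Even though B and C are Low, D becomes High because A is High:
print(branch((1, HIGH), (10, LOW), (20, LOW)))  # (10, 1), i.e. High
```

Observing whether D equals B or C reveals something about A, so the label on D must dominate A's label; a monitor that checks only read/write access would label D Low.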
In a lattice-based model, the partial order ≤ determines if the information in object A can flow into object B: information in A can flow into B only if A ≤ B. As a result, information can either flow upward along the security lattice or stay at the same security class.
Although there are security models based on information flow, determining information flow is much more complex than determining accesses. Thus, its use in computer security is restricted to special cases where only static analysis is needed to determine information flow, for example, in covert channel analysis, discussed later in this chapter.
Example 2 [Military Security Lattice] A security class (or level) in the military security lattice is an ordered pair consisting of a sensitivity level as the first component and a set of categories (or compartments) as the second component. The set of categories could be empty, in which case we write only the sensitivity level as the security class.
The usual sensitivity levels are Unclassified, Confidential, Secret, and Top Secret, which are hierarchically ordered as follows:
    Unclassified < Confidential < Secret < Top Secret
There is an unordered list C of categories consisting of labels such as Conventional, Nuclear, Crypto, and NATO (North Atlantic Treaty Organization). The set of categories is a subset of this list. Different sets of categories are partially ordered using set inclusion as follows: Given categories C1, C2 ⊆ C, C1 ≤ C2 if C1 ⊆ C2.
The partial orders defined on sensitivity levels and compartments are used to define a partial order on the set SC consisting of all security classes as follows: Given two security classes (A1, C1) and (A2, C2) in SC, (A1, C1) ≤ (A2, C2) if A1 ≤ A2 and C1 ≤ C2.
Fig. 3. A portion of the military security lattice
It is easy to see that military security classes form a lattice. For any pair of security classes (A1, C1) and (A2, C2) with A1 ≤ A2, (A1, C1 ∩ C2) is their greatest lower bound and (A2, C1 ∪ C2) is their least upper bound. In fact, the system-low and system-high elements are given by (Unclassified, ∅) and (Top Secret, C), respectively.
A security lattice can be naturally viewed as a directed acyclic graph where the security classes in SC are its nodes and there is a directed edge A → B whenever A ≤ B. The arrows in the graph indicate the allowable information flow. A portion of the military security lattice is shown in Figure 3.
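The lattice operations of Example 2 can be sketched directly; the sensitivity levels and category sets follow the text, while the numeric encoding and helper names are our own choices.

```python
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def dominates(c2, c1):
    """(A2, C2) dominates (A1, C1) iff A1 <= A2 and C1 is a subset of C2."""
    (a2, cats2), (a1, cats1) = c2, c1
    return LEVELS[a1] <= LEVELS[a2] and cats1 <= cats2

def lub(x, y):
    """Least upper bound: higher level, union of category sets."""
    (a1, c1), (a2, c2) = x, y
    return (max(a1, a2, key=LEVELS.get), c1 | c2)

def glb(x, y):
    """Greatest lower bound: lower level, intersection of category sets."""
    (a1, c1), (a2, c2) = x, y
    return (min(a1, a2, key=LEVELS.get), c1 & c2)

s_nuc = ("Secret", {"Nuclear"})
ts_crypto = ("Top Secret", {"Crypto"})
print(dominates(ts_crypto, s_nuc))  # False: category sets are incomparable
print(lub(s_nuc, ts_crypto))        # Top Secret with both categories
print(glb(s_nuc, ts_crypto))        # Secret with the empty category set
```

Note that a higher sensitivity level alone is not enough: (Top Secret, {Crypto}) does not dominate (Secret, {Nuclear}), so the two classes are incomparable, exactly as in the lattice of Figure 3.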
The Bell-LaPadula Security Model
The Bell-LaPadula security model is a widely used formal model of mandatory access control. It is an automata-theoretic model that describes lattice-based policies [2].
The model contains a lattice (SC, ≤) of security classes, a set S of subjects, and a set O of objects, with a function L that assigns security classes to both subjects and objects: L: S ∪ O → SC. Subjects also have a maximum security level, defined as a function Lmax: S → SC, but objects have a constant security level. By definition, L(S) ≤ Lmax(S).
There is a set R of access privileges (or rights) consisting of the following access modes:
- read: subject has read access to the object
- append: subject can append to an existing object
- execute: subject can invoke the object
- read-write: subject has both read and write access to the object
With only a read access, a subject cannot modify (i.e., append or write) the object in any way. With only an append access, a subject can neither read nor make a destructive modification (by overwriting) to the object. With only an execute access, a subject can neither read nor modify the object. Finally, with a read-write access, a subject can both read and modify the object.
There is a control attribute associated with each object. The control attribute is granted to the subject that created the object. A subject can extend to other subjects some or all of the access modes it possesses for the controlled object. The control attribute itself cannot be extended to other subjects. (Thus, a control attribute is similar to the ownership flag in discretionary access control, since it governs the right to grant other subjects discretionary access to the object.)
The privileges of the subjects can be represented by an m × n access matrix M. The entry M[i, j] is the set of privileges that subject Si has to object Oj. Note that a right is not an access. Accesses in the current state are described by an m × n current access matrix B.

The protection state of the model is the combination of M and B. The model reads in a sequence of requests to change the protection state; transitions to new states either satisfy or reject the request. Specific access control policies are modeled by a set of rules that define satisfiable requests (i.e., allowable transitions to new states) in a given security state. An important characteristic of the model is the open-ended nature of the rules. This can lead to problems, but we will simply present the two most widely used rules, which have been found to be reasonable for practical implementation, assuming objects have a constant security class. The two rules are usually presented as invariants:
THE SIMPLE SECURITY PROPERTY. The access mode read is in B[i, j] only if read is in M[i, j] and L(Oj) ≤ L(Si).

THE *-PROPERTY (read "the star property"). The access mode append is in B[i, j] only if append is in M[i, j] and L(Si) ≤ L(Oj), and the access mode read-write is in B[i, j] only if read-write is in M[i, j] and L(Si) = L(Oj).
Thus, the simple security property puts restrictions on read operations ("no read up"), while the *-property restricts append operations ("no append down") and read-write operations ("read-write at the same level").

As an example, a file with security class (Secret, {Nato, Nuclear}) can be read by a subject with security class (Top Secret, {Nato, Nuclear, Crypto}), but not by a subject with security class (Top Secret, {Nato, Crypto}). Also, a subject with (Secret, {Nato}) security class cannot gain either read or read-write access to a file at (Secret, {Crypto}) security class, since these two classes are incomparable. Note that these restrictions are consistent with the information flow shown in Figure 3.
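The two invariants can be written as predicates over the dominance relation. The following sketch uses an illustrative (level, category-set) encoding of security classes, which is an assumption of this sketch rather than the book's formal notation, and reproduces the Nato/Nuclear/Crypto example:

```python
# Sketch of the two Bell-LaPadula invariants over (level, category-set) classes.
# The class encoding and the dominates() helper are illustrative assumptions.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def dominates(c1, c2):
    """c1 dominates c2: higher-or-equal level and a superset of categories."""
    return LEVELS[c1[0]] >= LEVELS[c2[0]] and c1[1] >= c2[1]

def simple_security_ok(subject_class, object_class):
    """read allowed only if L(Oj) <= L(Si): no read up."""
    return dominates(subject_class, object_class)

def star_property_ok(mode, subject_class, object_class):
    """append only if L(Si) <= L(Oj); read-write only if L(Si) = L(Oj)."""
    if mode == "append":
        return dominates(object_class, subject_class)  # no append down
    if mode == "read-write":
        return (subject_class[0] == object_class[0]
                and subject_class[1] == object_class[1])
    return False
```

Note that `star_property_ok` permits appending upward (a Confidential subject may append to a Secret object), which is exactly the asymmetry that distinguishes it from the read rule.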
There are several different versions of the *-property. The two most popular versions of the *-property do not make any distinction between append and read-write operations. Thus, there is a single write access mode that gives a subject the right to read as well as modify (append and write) the object.

THE REVISED *-PROPERTY. The access mode write is in B[i, j] only if write is in M[i, j] and L(Si) ≤ L(Oj).
Basic Security Concepts
With the simple security property and the revised *-property, a subject is not allowed either to read up or to write down.

THE RESTRICTED *-PROPERTY. The access mode write is in B[i, j] only if write is in M[i, j] and L(Si) = L(Oj).

With the simple security property and the restricted *-property, a subject can read objects at its session level and below, but can write objects at its session level only.
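The difference between the two write rules can be stated compactly as two predicates. As before, the (level, category-set) encoding of classes below is an illustrative assumption of this sketch:

```python
# Sketch contrasting the revised and restricted *-property write rules.
# The class encoding (clearance level paired with a category set) is an
# illustrative assumption, not the book's formal notation.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def dominates(c1, c2):
    return LEVELS[c1[0]] >= LEVELS[c2[0]] and c1[1] >= c2[1]

def revised_star_ok(subject_class, object_class):
    """Revised *-property: write only if L(Si) <= L(Oj) (no write down)."""
    return dominates(object_class, subject_class)

def restricted_star_ok(subject_class, object_class):
    """Restricted *-property: write only at exactly the session level."""
    return subject_class == object_class
```

Under the revised rule a Confidential subject may write a Secret object (writing up is allowed); under the restricted rule it may not, since writes must occur at the session level itself.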
The Bell-LaPadula Model and the Military Security Policy
The Bell-LaPadula model was designed to mirror the military security policy that
existed in the paper world. To see how well it meets this goal, we review the military
security policy for handling paper documents.
Each defense document is stamped with a security label giving the security level
of the document. Any document that is Confidential or above is considered classified.
Any user that requires access to classified documents needs an appropriate clearance.
The clearance is assigned by some process which is external to the system, usually
by the system security officer.
A user is trusted to handle documents up to his clearance. Accordingly, it is
permissible for a user to read a document if the clearance level of the user dominates
the security label of the document. A user may be authorized to reclassify documents,
down or up (but not above his clearance level).
Suppose now that a user logs into a system that controls accesses in accordance
with the Bell-LaPadula model. At the time of the login, he must specify a security
level for the session. The system requires that this session level is dominated by the
clearance level of the user, and any process running on behalf of the user during
this session is assigned this session level. In terms of the Bell-LaPadula model, the function Lmax corresponds to the clearance levels of the users. When a user reads an object O, L(O) ≤ L(S) by the simple security property, and since L(S) ≤ Lmax(S), the clearance level of the user dominates the security label of the object, which is consistent with the military security policy.
When a user creates a file f, the security label for the file f dominates the session level (because of the *-property, which prevents any write downs). This means that during a Secret session, any files created by the user will have a Secret or higher security label. If this user wishes to create a file which is labeled only Confidential, he must first initiate another session at the Confidential level and then create the Confidential file.

Thus, the *-property is more restrictive than the military security policy. However, there is a very good reason why this is so, illustrated by the following example.
Example 3 [Trojan Horse Revisited]. Consider once again the Trojan horse example given earlier, with the following modification. Suppose now that the file f1 is a Top Secret file and that B is a malicious user who wants to access f1 even though he is cleared only up to the Confidential level. To accomplish this, B knows that the user A has a Top Secret clearance and often uses the program P. As before, B creates a Confidential file f2, gives A the write access to f2, and hides the malicious code inside P that copies the contents of f1 into f2.

Fig. 4. Example of a Trojan horse that fails
Without the *-property restrictions, were A to invoke P, the contents of f1 would be copied into f2. Thus, the Trojan horse in effect downgrades a file which is Top Secret to a file which is merely Confidential. (Note that the user A is not aware that this downgrading has taken place.) The purpose of the *-property is to foil just such attacks.
To see this, suppose now that the system has a mechanism, called the reference monitor, that enforces the two Bell-LaPadula restrictions faithfully. Given this, suppose the user A invokes the program P while logged in at a Top Secret session. As before, program P runs with the same privileges as those of user A. Although the read operation on file f1 will succeed (by the simple security property), the write operation on file f2 will fail (since this operation would violate the *-property). See Figure 4.
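A toy simulation of this scenario makes the denial concrete. The labels, file contents, and helper names below are illustrative assumptions; the checks model the simple security property and a write rule that forbids write downs:

```python
# Toy reference monitor foiling the Trojan horse: P runs at A's Top Secret
# session level, so its attempted write to the Confidential file f2 is denied.
# Labels, file contents, and function names are illustrative assumptions.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

files = {"f1": ("Top Secret", "launch codes"), "f2": ("Confidential", "")}

def monitored_read(session_level, name):
    label, data = files[name]
    if LEVELS[session_level] < LEVELS[label]:   # simple security: no read up
        raise PermissionError("read denied")
    return data

def monitored_write(session_level, name, data):
    label, _ = files[name]
    if LEVELS[label] < LEVELS[session_level]:   # *-property: no write down
        raise PermissionError("write denied")
    files[name] = (label, data)

def trojan_program_P(session_level):
    """B's Trojan horse: try to copy f1's contents into the Confidential f2."""
    secret = monitored_read(session_level, "f1")  # succeeds at Top Secret
    monitored_write(session_level, "f2", secret)  # raises: would write down
```

Invoking `trojan_program_P("Top Secret")` reads f1 successfully but raises `PermissionError` on the write, leaving f2 empty; the downgrade never occurs.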
Note that although the *-property plays an important role in foiling the Trojan horse attack in Example 3, there is another hidden assumption that is equally important. This hidden assumption is the mandatory access control requirement that security levels of subjects and objects are fixed; they cannot be modified by the users or the processes running on their behalf. Since the Trojan horse cannot change the security attributes of an object, it cannot make the object available to a subject that is not privileged to access it. Since the static assignment of security classes is an essential assumption, we make it explicit as a principle, as follows.
The Tranquility Principle. A key assumption that has been made so far is that the security attributes of an object are fixed. The security attributes are assigned when the object is created and are derived from the privileges of the creating subject. A simple example of this is the security class. Each object is assigned a security class at creation time which cannot change. This assumption has been formalized by the following axiom.
THE TRANQUILITY PRINCIPLE. A subject cannot change the security class of an object.

Trusted Subjects. In the model described thus far, we have assumed that the security attributes of objects and subjects are fixed. However, this assumption is too restrictive in a real situation, since there are legitimate tasks which cannot be performed under the Bell-LaPadula model (due to the "no write down" restriction of the *-property). The legitimate tasks include system maintenance tasks, such as specification of the clearance levels of users, and tasks such as reclassification, sanitization, and downgrading of objects, all of which entail lowering the security classes of these objects. The concept of a trusted subject was added to overcome these restrictions.

TRUSTED SUBJECT. A subject is said to be trusted if it is exempt from one or more security policy restrictions, yet it will not violate these restrictions.

Thus, even if a trusted subject has the power to violate our security policy, we trust it not to do so. For reclassification, sanitization, and downgrading tasks, a subject only needs exemption from the "no write down" restriction of the *-property.
2.3 Illegal Information Flow Channels
Even if we have a lattice-based mandatory access control policy implemented by a reference monitor, an adversary can still launch attacks against our system's security. As we observed earlier, access controls protect only against direct revelation of data, not against violations that produce illegal information flow through indirect means. A program that accesses some classified data on behalf of a process may leak those data to other processes, and thus to other users. A secure system must guard against implicit leakage of information occurring via these channels.
Covert Channels

In the terms we have established so far, a covert channel is any component or feature of a system that is misused to encode or represent information for unauthorized transmission, without violating our access control policy. Potential unauthorized uses of components or features can be surprising; the system clock, operating system interprocess communication primitives, error messages, the existence of particular file names, and many other features can be used to encode information for transmission contrary to our intentions.
Two types of covert channels have been identified thus far. A covert storage channel is any communication path that results when one process causes an object to be written and another process observes the effect. This channel utilizes system storage, such as temporary files or shared variables, to pass information to another process. A covert timing channel is any communication path that results when a process produces some effect on system performance that is observable by another process and is measurable with a timing base such as a real-time clock. Covert timing channels are much more difficult to identify and to prevent than covert storage channels. A system with a B2 rating or higher must be free of these channels. The most important parameter of a covert channel is its bandwidth, i.e., the rate (in bits per second) at which information can be communicated. Surprisingly, the higher the speed of the system, the larger the bandwidth of these covert channels. According to the Orange Book standards, a channel that exceeds the rate of 100 bits per second is considered as having a high bandwidth.
The covert channel problem is of special interest to us because many database system concurrency control mechanisms are potential covert channels. We will distinguish between covert channels that are implementation invariant and covert channels that are found only in some implementations. The former are called signaling channels. Thus, a signaling channel is a means of information flow inherent in the basic algorithm or protocol, and hence appears in every implementation. A covert channel that is not a signaling channel is a property of a specific implementation, and not of the general algorithm or protocol. Thus a covert channel may be present in a given implementation even if the basic algorithm is free of signaling channels. It is critical to exclude signaling channels from our algorithms as much as possible. The absence of signaling channels in an algorithm does not guarantee the absence of other implementation-specific covert channels in a particular implementation.
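As a concrete illustration of a covert storage channel, the sketch below encodes bits through the mere existence of agreed-upon names in shared storage. Every individual operation is legal under the access policy, yet information flows from sender to receiver. The names and the dict standing in for shared storage are illustrative assumptions:

```python
# Toy covert storage channel: a high process leaks bits to a low process by
# creating or omitting agreed-upon entries in shared storage (e.g., temporary
# file names). Each operation is individually permitted; the channel is the
# pattern of writes that the receiver can observe.
shared_storage = set()  # stands in for a directory of temporary file names

def high_sender(bits):
    """High process: create slot i exactly when bit i is 1."""
    for i, bit in enumerate(bits):
        if bit:
            shared_storage.add(f"tmp_{i}")  # an individually legal write

def low_receiver(n):
    """Low process: decode by observing which slot names exist."""
    return [1 if f"tmp_{i}" in shared_storage else 0 for i in range(n)]
```

Here `high_sender([1, 0, 1, 1])` followed by `low_receiver(4)` recovers `[1, 0, 1, 1]` without the receiver ever reading the contents of a high object, which is precisely why such channels evade ordinary access checks.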
3 Mechanism
Once the security policy is defined, we need to design and implement a sound protection mechanism. The efficiency and power of the available protection mechanisms limit the kinds of policies that can be enforced. Mechanisms should have the following qualities:

(1) Every user and every system operation affected by the mechanism should have the least privilege necessary.
(2) It should be simple.
(3) It should completely mediate all operations defined for it (i.e., it cannot be bypassed).
(4) It should be tamper resistant.
(5) Its effectiveness should not depend on keeping the details of its design secret.

This last requirement is used in cryptography, where the strength of an encryption algorithm does not depend on concealment of the algorithm itself.
3.1 Reference Monitor

The Bell-LaPadula model is associated with an enforcement mechanism called a reference monitor. The reference monitor is defined to be at the lowest levels of implementation; that is, any access to data or metadata will be subject to checks made by the reference monitor and will be prevented if the access violates the security policy (see Figure 5). The term "any" includes arbitrary use of individual machine instructions or of system utilities such as debuggers. By definition, the reference monitor is correct, tamper resistant, and unbypassable. Thus we assume all accesses are forced to be in accordance with our security policy unless they are made by trusted subjects.

Fig. 5. Reference monitor abstraction
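The mediation just described, including the exemption for trusted subjects, can be sketched as a single choke point through which every access request passes. The request shape, policy rules, and subject names below are illustrative assumptions of this sketch:

```python
# Sketch of reference-monitor mediation: every access is checked against the
# policy unless the requesting subject is trusted. The policy rules and names
# here are illustrative assumptions, not the book's formal model.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

TRUSTED_SUBJECTS = {"downgrader"}  # exempt from checks, but trusted not to misbehave

def reference_monitor(subject, subject_level, mode, object_level):
    """Return True iff the access is permitted; all accesses pass through here."""
    if subject in TRUSTED_SUBJECTS:
        return True  # exemption granted to trusted subjects
    if mode == "read":   # simple security property: no read up
        return LEVELS[subject_level] >= LEVELS[object_level]
    if mode == "write":  # revised *-property: no write down
        return LEVELS[subject_level] <= LEVELS[object_level]
    return False  # unbypassable: any unrecognized request is denied
```

Denying unknown modes by default reflects the complete-mediation quality listed above: nothing reaches an object except through this function, and the function fails closed.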
3.2 Trusted Software and Trusted Components

One of the key features of trusted systems is trusted software. Generally, trusted software is a program that is correct with respect to its specification. In the context of security, software is said to be trusted if it has the power to violate our security policy, but we trust it not to do so.

A trusted component is a system component containing only trusted software. By definition, trusted components cannot be exploited by a subject to violate the security policy of the system.

In practice, trusted components are generally undesirable because they must be verified and validated, at extra expense, to show that they deserve the trust. According to the DOD's Trusted Computer System Evaluation Criteria (the Orange Book) [4], trusted components must be formally verified and certified as correct by a formal evaluation process.
3.3 Trusted Computing Base

The reference monitor itself must unavoidably be trusted. Additional trusted components are needed to support the reference monitor, for example identification and authentication mechanisms such as logins and passwords. We use the term trusted computing base to refer to the reference monitor plus all of its supporting trusted components.
