
4 Grid Security
4.1 INTRODUCTION
In general, IT security is concerned with ensuring that critical information and the associated infrastructures are not compromised or put at risk by external agents. Here, the external agent might be anyone who is not authorized to access the aforementioned critical information or infrastructure. The critical infrastructure we are referring to is that which supports banking and financial institutions, information and communication systems, energy, transportation and other vital human services. The Grid is increasingly being taken up and used by all sectors of business, industry, academia and government as the middleware infrastructure of choice. This means that Grid security is a vital aspect of its overall architecture if the Grid is to be used for critical infrastructures.
A number of observations have been made on critical infrastructures [1]. It is clear that in today's world they are highly interdependent, both physically and in their reliance on national information infrastructure. Most critical infrastructures are largely owned by the private sector, where there tends to be a reluctance to invest in long-term and high-risk security-related technologies. Ongoing changes to business patterns are reducing the level of tolerance to errors in these infrastructures. However, there is insufficient awareness of critical infrastructure issues. The growth of IT and the Internet can therefore have major implications for the economic and the military security of the world.

The Grid: Core Technologies, Maozhen Li and Mark Baker. © 2005 John Wiley & Sons, Ltd
IT infrastructures are changing at a staggering rate. Their scale and complexity are becoming ever greater in scope and functional sophistication. Boundaries between computer systems are becoming indistinct; increasingly every device is networked, so the infrastructure is becoming a heterogeneous sea of components with a blurred human/device boundary. There is continuous and incremental development and deployment; systems evolve by adding new features and greater functionality at an unremitting pace. These systems are becoming capable of dynamic self-configuration and adaptation, responding to changing circumstances and conditions in their environment [2]. Increasingly there are multiple innovative types of networked architectures and strategies for sharing resources. This obviously leaves gaps for a multiplicity of fault types and openings for malicious faults, as well as attacks from internal and external parties.
The actors who may want to compromise critical information or infrastructures are many and varied. They include those that pose national security threats, such as information warriors or agents involved in national intelligence. Alternatively, the actors could be terrorists, or parties involved in industrial espionage or organized crime, who pose a shared threat to a country. Or the threats could simply be local, coming from institutional or recreational hackers intent on thrill, challenge or prestige.
4.2 A BRIEF SECURITY PRIMER
The goals of security are threefold [2]: first, prevention – preventing attackers from violating the security policy; secondly, detection – detecting attackers' violations of the security policy; finally, recovery – stopping an attack, assessing and repairing the damage, and continuing to function correctly even if the attack succeeds.
Obviously, prevention is the ideal scenario, in which there would be no successful attacks. Detection occurs only after someone violates the security policy; it is important that a violation, whether completed or underway, is reported swiftly, and the system must then respond appropriately. Recovery means that the system continues to function correctly, possibly after a period of degraded operation; such systems are said to be intrusion tolerant.
This is very difficult to do correctly. Usually, recovery means that the attack is stopped and the system fixed. This may involve shutting down the system for some time, or making it unavailable to all users except those fixing the problem, before the system resumes correct operation.
The three classic security concerns of information security deal principally with data, and are:

• Confidentiality: Data is only available to those who are authorized;
• Integrity: Data is not changed except by controlled processes;
• Availability: Data is available when required.
Confidentiality is concerned with keeping data secret from unauthorized parties. The content of a packet in transit has to be protected so that malicious users cannot read the data. To prevent unauthorized users from retrieving secret information, a common approach is for the sender to encrypt the data before sending it to the receiver. On the receiving end, the receiver extracts the original information by decrypting the cipher text. Confidentiality of data transmission is therefore closely tied to the encryption algorithms applied.
Integrity is the protection of data from modification by unauthorized users, which is not the same as confidentiality. Data integrity requires that no unauthorized user can change or modify the data concerned. For example, suppose you want to broadcast a message to the public: the message is not confidential to anyone, but you still have to protect it from modification by unauthorized people. In this instance, you may stamp or digitally sign the message to certify it.
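The stamping idea can be sketched with a keyed hash (a message authentication code). Note this is one illustrative mechanism, not the only one: a MAC requires sender and verifiers to share a secret key, whereas a public-key digital signature (Section 4.3.4) does not. The key and message below are hypothetical.

```python
import hashlib
import hmac

def stamp_message(key: bytes, message: bytes) -> bytes:
    # Compute a keyed digest (MAC) over the message.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_message(key: bytes, message: bytes, tag: bytes) -> bool:
    # Recompute the MAC and compare in constant time.
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"shared-secret"          # hypothetical shared key
msg = b"public announcement"
tag = stamp_message(key, msg)

print(verify_message(key, msg, tag))                  # True
print(verify_message(key, b"tampered message", tag))  # False
```

Anyone holding the key can detect that the broadcast message was altered, even though the message itself is public.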
The term “availability” addresses the degree to which a system,
sub-system or equipment is operable and in a usable state.
Additional concerns deal more with people and their actions:

• Authentication: Ensuring that users are who they say they are;
• Authorization: Making a decision about who may access data or a service;
• Assurance: Being confident that the security system functions correctly;
• Non-repudiation: Ensuring that a user cannot deny an action;
• Auditability: Tracking what a user did to data or a service.
Authentication means ensuring that a user really is who they claim to be, or that data genuinely originated from its purported sender. A large number of techniques may be used to authenticate a user – passwords, biometric techniques, smart cards or certificates. Before services start between a server and a client, there should be a mechanism to verify the identity of the user. A user name and password logon is the most common authentication scheme: for example, you have to enter your PIN at an ATM terminal, or a password to gain access to a Web portal. Hence, authenticity is mainly related to the identification of authorized users.
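Password authentication can be sketched as follows. A minimal, hedged example: real systems would add rate limiting and account lockout, and the iteration count is a tunable assumption. The server never stores the password itself, only a salted, slow hash of it.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    # Derive a stored verifier from the password using a per-user
    # random salt and many iterations (slows brute-force guessing).
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Re-derive with the same salt and compare with the stored verifier.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == stored

salt, stored = hash_password("correct horse")
print(check_password("correct horse", salt, stored))  # True
print(check_password("wrong guess", salt, stored))    # False
```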
Authentication is usually a first step towards providing the service of authorization. Authorization enables the decision to allow a particular operation when a request to perform that operation is received. Authorization in existing systems is usually based on information local to the server. This information may be present in Access Control Lists (ACLs) associated with files or directories. ACLs are files listing the individuals authorized to log in to an account (e.g. the UNIX .rhosts file), configuration files naming the authorized users of a node and sometimes files read over the network. When applied to distributed systems, authorization mechanisms are required to determine whether a particular task should be run on the current node when requested by a particular principal. Many applications, in particular those using distributed systems, can benefit from an authorization mechanism that supports delegation. Delegation is a means by which a user or process authorized to perform an operation can grant the authority to perform that operation to another process. Delegation can be used to implement distributed authorization where, for example, a resource manager might allocate a node to a job and delegate authority to use that node to the job's initiator.
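The ACL and delegation ideas above can be sketched in a few lines. The resource names, principals and the triple-based delegation record are all hypothetical simplifications; real systems carry delegation in signed credentials rather than a server-side table.

```python
# Access-control list: resource -> set of principals allowed to use it.
acl = {
    "/data/results": {"alice", "bob"},
    "/compute/node1": {"alice"},
}

def authorize(principal: str, resource: str) -> bool:
    # Decision based purely on information local to the server (the ACL).
    return principal in acl.get(resource, set())

# Delegation records: (granter, grantee, resource).
delegations = {("alice", "bob", "/compute/node1")}

def authorize_with_delegation(principal: str, resource: str) -> bool:
    # Allow either a direct ACL entry, or a delegation from someone
    # who is themselves authorized for that resource.
    if authorize(principal, resource):
        return True
    return any(grantee == principal and res == resource and authorize(granter, res)
               for granter, grantee, res in delegations)

print(authorize("bob", "/compute/node1"))                  # False
print(authorize_with_delegation("bob", "/compute/node1"))  # True
```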
Assurance is the counterpart to authorization. Authorization mechanisms allow the provider of a service to decide whether to perform an operation on behalf of the requester of the service. Assurance mechanisms allow the requester of a service to decide whether a candidate service provider meets the requester's requirements for security, trustworthiness, reliability or other characteristics. Assurance mechanisms can be implemented through certificates (see Section 4.3.5 for a discussion of certificates) signed by a third party trusted to endorse, license or insure a service provider; certificates are checked as a client selects the providers to contact for particular operations.
Non-repudiation is the concept of ensuring that a contract, especially one agreed to via the Internet, cannot later be denied by one of the parties involved. With regard to digital security, non-repudiation means that it can be verified that the sender and the recipient were, in fact, the parties who claimed to send or receive the message, respectively.
Auditability is about keeping track of what is happening on a system. The idea is that if there is an intrusion, then the system operator can find out exactly what has been done and in whose name.
Other security concerns relate to:

• Trust: People can justifiably rely on computer-based systems to perform critical functions securely, and on systems to process, store and communicate sensitive information securely;
• Reliability: The system does what you want, when you want it to;
• Privacy: Within certain limits, no one should know who you are or what you do.
4.3 CRYPTOGRAPHY
4.3.1 Introduction
Cryptography is the most commonly used means of providing security; it can be used to address four goals:

• Message confidentiality: Only an authorized recipient is able to extract the contents of a message from its encrypted form;
• Message integrity: The recipient should be able to determine if the message has been altered during transmission;
• Sender authentication: The recipient can identify the sender, and verify that the purported sender did send the message;
• Sender non-repudiation: The sender cannot deny sending the message.
Obviously, not all cryptographic systems (or algorithms) realize, or even intend to achieve, all of these goals.
4.3.2 Symmetric cryptosystems
Using symmetric (conventional) cryptosystems, data is transformed (encrypted) using an encryption key and scrambled in such a way that it can only be unscrambled (decrypted) by a symmetric transformation using the same key. Besides protecting the confidentiality of data, encryption also protects data integrity: knowledge of the encryption key is required to produce cipher text that will yield a predictable value when decrypted. Modification of the data by someone who does not know the key can therefore be detected by attaching a checksum before encryption and verifying it after decryption. A sketch of symmetric key cryptography is shown in Figure 4.1.
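The checksum-before-encryption idea can be sketched with a toy stream cipher: a SHA-256-derived keystream XORed with the data. This construction is purely illustrative (it is neither DES nor any standardized cipher); the point is that tampering with the cipher text breaks the checksum found after decryption.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key || counter blocks.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Attach a checksum BEFORE encryption so later tampering is detectable.
    data = plaintext + hashlib.sha256(plaintext).digest()
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    data = bytes(a ^ b for a, b in zip(ciphertext, keystream(key, len(ciphertext))))
    plaintext, checksum = data[:-32], data[-32:]
    # Verify the checksum AFTER decryption.
    if hashlib.sha256(plaintext).digest() != checksum:
        raise ValueError("integrity check failed")
    return plaintext

key = b"shared secret"
ct = encrypt(key, b"grid job description")
print(decrypt(key, ct))  # b'grid job description'
```

Decrypting with the wrong key (or with a modified cipher text) yields garbage whose checksum does not verify, so the tampering is detected.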
4.3.2.1 Example: Data Encryption Standard (DES)
DES consists of two components – an algorithm and a key. The DES algorithm involves a number of iterations of a simple transformation which uses both transposition and substitution techniques applied alternately. DES is a so-called private-key cipher: data is encrypted and decrypted with the same key, so both sender and receiver must keep the key secret from others. Since the DES algorithm itself is publicly known, learning the encryption key would allow an encrypted message to be read by anyone.
Figure 4.1 Symmetric key cryptography: plaintext is encrypted with the secret key, sent across the Internet as cipher text, and decrypted with the same secret key.
4.3.3 Asymmetric cryptosystems
In asymmetric cryptography, encryption and decryption are performed using a pair of keys such that knowledge of one key does not provide knowledge of the other key in the pair. One key, called the public key, is published, and the other, called the private key, is kept private. The main advantage of asymmetric cryptography is that secrecy is not needed for the public key: it can be published rather like a telephone number. However, only someone in possession of the private key can perform decryption. A sketch of asymmetric key cryptography is shown in Figure 4.2.
4.3.3.1 Example: RSA
An example of a public-key cryptosystem is RSA, named after its developers, Rivest, Shamir and Adleman, who invented the algorithm at MIT in 1978. RSA provides authentication as well as encryption, and uses two keys: a private key and a public key. With RSA, there is no mathematical distinction between the function of a user's public and private keys: a key can be used as either the public or the private key. The keys for the RSA algorithm are generated mathematically, in part, by combining large prime numbers. The security of the RSA algorithm, and others similar to it, depends on the difficulty of factoring very large numbers (early deployments used 256- or 512-bit keys; much longer keys are needed today).
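The mechanics of RSA can be shown with the well-known textbook example using tiny primes. This is a sketch of the mathematics only: real keys use primes hundreds of digits long, plus padding schemes that this example omits.

```python
# Textbook RSA with tiny primes (illustrative only).
p, q = 61, 53
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # Euler's totient: 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (2753)

message = 65
cipher = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(cipher, d, n)  # decrypt with the private key (d, n)

print(cipher, recovered)  # 2790 65
```

An attacker who could factor n back into p and q could recompute d; with real key sizes that factoring is infeasible, which is exactly the security assumption stated above.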
With both symmetric and asymmetric systems there is a need to secure the private key. The private key must be kept private: it should not be sent to others, and it should not be stored on a system where others may be able to find and use it. Stored keys should always be password protected.

Figure 4.2 Asymmetric key cryptography: plaintext is encrypted with the recipient's public key, sent across the Internet as cipher text, and decrypted with the recipient's private key.

Another issue with key-based systems is that the algorithms that are used are public. This means that the algorithms could be coded and used to decrypt a message via a brute-force search of all possible keys. Fortunately, such a program would need a significant amount of computational power, and with keys of sufficient length the time to decode a message becomes unreasonable. Currently, keys are typically 1024–2048 bits in length. However, the availability of cheap computational power, in the form of clusters of PCs, and the doubling of processor speed every eighteen months, mean that the brute-force approach may become viable in the future.
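The brute-force attack described above can be demonstrated on a deliberately weak toy cipher with a 16-bit key, whose entire key space can be searched in a fraction of a second. The cipher, key and known plaintext prefix are all hypothetical; the same exhaustive search against a 1024-bit key space is what becomes computationally unreasonable.

```python
import itertools

def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy cipher: XOR with a repeating 2-byte key (trivially weak).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

secret_key = bytes([0x4A, 0x9C])  # unknown to the attacker
ciphertext = xor_encrypt(secret_key, b"BEGIN grid job transfer")

def crack(ciphertext: bytes, known_prefix: bytes) -> bytes:
    # Exhaustive search over the whole 16-bit key space, using a known
    # plaintext prefix to recognize the correct key.
    for a, b in itertools.product(range(256), repeat=2):
        candidate = bytes([a, b])
        if xor_encrypt(candidate, ciphertext).startswith(known_prefix):
            return candidate
    return None

found = crack(ciphertext, b"BEGIN")
print(found == secret_key)  # True
```

Doubling the key length from 16 to 32 bits multiplies the search space by 65 536; each further bit doubles it again, which is why sufficiently long keys defeat this attack.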
4.3.4 Digital signatures
Integrity is guaranteed in public-key systems by using digital signatures, which are a method of authenticating digital information, in the same manner that an individual would sign a paper document to authenticate it. A digital signature is itself a sequence of bits conforming to one of a number of standards.

Most digital signatures rely on public-key cryptography to work. Consider a scenario where someone wants to send a message to another party and prove who originated it, but does not care whether anybody else reads it. In this case, they send a plaintext copy of the message, along with a copy of the message encrypted with their private (not public) key. A recipient can then check whether the message really came from the originator by unscrambling the encrypted copy with the sender's public key and comparing it with the plaintext version. If they match, the message was really from the originator, because the private key was needed to create the encrypted copy and no one but the originator has it. Often, a cryptographically strong hash function [4] is applied to the message, and the resulting message digest is encrypted instead of the entire message; this makes the signature significantly shorter than the message, and saves considerable time, since hashing is generally much faster, byte for byte, than public-key encryption.
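The hash-then-sign scheme just described can be sketched with textbook RSA and SHA-256. The tiny primes are purely illustrative, and reducing the digest modulo n is a simplification this toy needs; real signature schemes (e.g. RSA with PKCS#1 padding) handle the full-size digest properly.

```python
import hashlib

# Toy RSA key pair (tiny primes, for illustration only).
p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)
e = 65537                 # public exponent
d = pow(e, -1, phi)       # private exponent

def digest_int(message: bytes) -> int:
    # Hash the message, then squeeze the digest into the modulus range
    # (a simplification; real schemes use padding instead).
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # "Encrypt" the digest with the PRIVATE key: only the owner can do this.
    return pow(digest_int(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone can check with the PUBLIC key that the digests match.
    return pow(signature, e, n) == digest_int(message)

sig = sign(b"transfer 100 CPU hours to alice")
print(verify(b"transfer 100 CPU hours to alice", sig))    # True
print(verify(b"transfer 900 CPU hours to mallory", sig))  # False
```

Signing the short digest rather than the whole message is exactly the time and size saving the paragraph above describes.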
4.3.5 Public-key certificate
A public-key certificate is a file that contains a public key, together with identity information, such as a person's name, all of which is signed by a Certification Authority (CA). The CA is a guarantor who verifies that the public key belongs to the named entity.

Certificates are required for the large-scale use of public-key cryptography, since anybody can create a public–private key pair. In principle, if the originator is sending private information encrypted with the recipient's public key, a malicious user could fool the originator into using the malicious user's own public key, and so gain access to the information, since the malicious user knows the corresponding private key. But if the originator only trusts public keys that have been signed ("certified") by an authority, then this type of attack can be prevented. In large-scale deployments one user may not be familiar with another's certification authority (perhaps they each have a different company CA), so a certificate may also include a CA's public key signed by a higher-level CA which is more widely recognized. This process can lead to a hierarchy of certificates and complex graphs representing trust relations.
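The hierarchy idea can be sketched by walking issuer links upward until a trusted, self-signed root is reached. The certificate records below are hypothetical and deliberately minimal; real validation also checks each signature, validity period and revocation status at every step.

```python
# Hypothetical certificate store: subject -> issuer link and trust flag.
certs = {
    "root-ca": {"subject": "root-ca", "issuer": "root-ca", "trusted": True},
    "dept-ca": {"subject": "dept-ca", "issuer": "root-ca", "trusted": False},
    "alice":   {"subject": "alice",   "issuer": "dept-ca", "trusted": False},
}

def chain_to_root(subject: str, max_depth: int = 10) -> bool:
    # Follow issuer links until we hit a trusted self-signed root,
    # bounding the depth so a malformed loop cannot run forever.
    for _ in range(max_depth):
        cert = certs.get(subject)
        if cert is None:
            return False                       # unknown certificate
        if cert["trusted"] and cert["issuer"] == cert["subject"]:
            return True                        # reached a trusted root
        subject = cert["issuer"]               # climb one level up
    return False

print(chain_to_root("alice"))    # True
print(chain_to_root("mallory"))  # False
```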
Public Key Infrastructure (PKI) refers to the software that manages certificates in a large-scale setting. In X.509 PKI systems, the hierarchy of certificates is always a top-down tree, with a root certificate at the top, representing a CA that is so well known it does not need to be authenticated. A certificate may be revoked if it is known that the related private key has been exposed. In this circumstance, one needs to look up the Certificate Revocation List, which is often stored remotely and updated frequently.
A certificate typically includes:

• The public key being signed;
• A name, which can refer to a person, a computer or an organization;
• A validity period;
• The location (URL) of a revocation list.
The most common certificate standard is the ITU-T X.509 [5]. An X.509 certificate is generally a plaintext file that includes information in a specific syntax:

• Subject: This is the name of the user;
• Subject's public key: This includes the key itself, and other information such as the algorithm used to generate the public key;
• Issuer's subject: The CA's distinguished name (Table 4.1);
Table 4.1 Distinguished Names (DN)

Names in X.509 certificates are not encoded simply as common names, such as "Mark Baker", "Certificate Authority XYZ" or "System Administrator". Names are encoded as distinguished names, which are name–value pairs. An example of typical distinguished-name attributes is shown below:

OU = Portsmouth, L = DSG, CN = Mark Baker

A DN can have several different attributes; the most common are the following:

OU: Organizational Unit
L: Location
CN: Common Name (usually the user's name).

• Digital signature: The certificate includes a digital signature of all the information in the certificate. This digital signature is generated using the CA's private key. To verify the digital signature, we need the CA's public key, which is found in the CA's certificate.
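The name–value encoding shown in Table 4.1 can be parsed with a few lines of string handling. This is a simplified sketch: real DN syntax (RFC 4514) also allows escaped commas and multi-valued attributes, which this toy parser ignores.

```python
def parse_dn(dn: str) -> dict:
    # Split a distinguished name into attribute/value pairs.
    attrs = {}
    for part in dn.split(","):
        key, _, value = part.partition("=")
        attrs[key.strip()] = value.strip()
    return attrs

dn = "OU = Portsmouth, L = DSG, CN = Mark Baker"
print(parse_dn(dn))  # {'OU': 'Portsmouth', 'L': 'DSG', 'CN': 'Mark Baker'}
```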
4.3.6 Certification Authority (CA)
The CA exists to provide entities with a trustable digital identity which they can use to access resources in a secure way. Its role is to issue (create and sign) certificates, make the valid certificates publicly accessible, revoke certificates when necessary and regularly issue revocation lists. The CA must also keep records of all its transactions.
A CA can issue personal certificates to users; the purpose of the
certificate is to allow users to identify themselves to remote entities.
A personal certificate can also be used for digital signatures. A CA
can also issue host (server) and service certificates. Each host and
service connected to a network must be able to identify itself.
Some CAs can issue certificates which validate the identity of subordinate CAs. Some CAs are therefore subordinate CAs, while others simply appoint themselves. In either case the CA publishes a document called the Certificate Policy Statement (CPS). This document details the conditions under which the CA issues certificates, and the level of assurance which a relying party can place in certificates issued by the CA.
A CA issues public-key certificates that state that the CA trusts
the owner of the certificate, and that they are who they purport to
be. A CA should check an applicant’s identity to ensure it matches
the credentials on the certificate. A party relying on the certificate
trusts the CA to verify identity so that the relying party can trust
that the user of the certificate is not an imposter.
4.3.7 Firewalls
A firewall is a hardware or software component added to a network to prevent communication forbidden by an organization's administrative policy. Two types of firewall are generally found: traditional and personal. A traditional firewall is typically a dedicated network device or computer positioned on the boundary of two or more networks; this type of firewall filters all traffic entering or leaving the connected networks. A personal firewall, on the other hand, is a software application that filters traffic entering or leaving a single computer.
Traditional firewalls come in several categories and sub-categories. They all have the basic task of preventing intrusion into a connected network, but accomplish this in different ways, working at the network, transport and/or application layer of the protocol stack.

A network layer firewall operates at the network level of the TCP/IP protocol stack; it undertakes IP-packet filtering, not allowing packets to pass unless they meet the rules defined by the firewall administrator. A more liberal set-up could allow any packet to pass the filter as long as it does not match one or more negative or deny rules.
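Packet-filtering rules of the kind described above can be sketched as an ordered first-match-wins rule table. The addresses, ports and rule syntax are hypothetical simplifications of what a real firewall (e.g. iptables) supports; the default-deny fallback illustrates the stricter of the two set-ups mentioned.

```python
# Each rule is (action, source-address prefix, destination port);
# the first matching rule decides, and None matches any port.
RULES = [
    ("deny",  "10.0.", 23),    # block telnet from the internal range
    ("allow", "10.0.", None),  # otherwise allow internal traffic
    ("allow", "",      80),    # allow web traffic from anywhere
]

def filter_packet(src_ip: str, dst_port: int, default: str = "deny") -> str:
    # First-match-wins scan of the rule table.
    for action, prefix, port in RULES:
        if src_ip.startswith(prefix) and (port is None or port == dst_port):
            return action
    return default  # default-deny: unmatched packets are dropped

print(filter_packet("10.0.1.5", 23))  # deny
print(filter_packet("10.0.1.5", 22))  # allow
print(filter_packet("8.8.8.8", 80))   # allow
print(filter_packet("8.8.8.8", 22))   # deny
```

A "more liberal" configuration would simply change the default from "deny" to "allow", so that only packets matching an explicit deny rule are dropped.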
Application layer firewalls operate at the application level of the TCP/IP protocol stack, intercepting, for example, all Web, telnet or FTP traffic. They will intercept all packets travelling to or from an application, and will block packets, usually dropping them without acknowledgement to the sender.

These firewalls can, in principle, prevent all unwanted traffic from reaching protected machines. The inspection of all packets for improper content means that firewalls can even prevent the spread of such things as viruses or Trojans. In practice, however, this becomes complex and difficult to attempt in the light of the variety of applications and the diversity of content, so an all-inclusive firewall design will not attempt this approach.
Sometimes a proxy device, which again can be implemented in hardware or software, can act as a firewall by responding to input packets (e.g. connection requests) in the manner of an application, whilst blocking other packets. Proxies make tampering with the internal infrastructure from an external system more difficult, and misuse of one of the internal systems would not necessarily cause a security breach exploitable from outside, assuming that the proxy itself remains intact. The use of internal address spaces enhances security, although an intruder may still employ methods such as IP spoofing to attempt to pass packets to the target internal network.
Correctly configuring a firewall demands skill: it requires a good understanding of network protocols and of computer security in general. Even small mistakes in the configuration of a firewall can render it valueless as a security tool.
4.4 GRID SECURITY
4.4.1 The Grid Security Infrastructure (GSI)
Grid security is based on what is known as the Grid Security
Infrastructure (GSI) (Figure 4.3), which is now a Global Grid Forum
(GGF) standard [6, 7]. GSI is a set of tools, libraries, and protocols
used in Globus (see Chapter 5), and other grid middleware, to
allow users and applications to access resources securely. GSI is
Figure 4.3 The Grid Security Infrastructure: PKI (CAs and certificates) provides credentials; the Secure Sockets Layer (SSL/TLS) provides authentication and message protection; and proxies and delegation (GSI extensions) provide secure single sign-on.
