
ASSIGNMENT 1
Qualification

BTEC Level 5 HND Diploma in Computing

Unit number and title

Unit 16: Cloud computing

Submission date

Date Received 1st submission

Re-submission Date

Date Received 2nd submission

Student Name

Ngo Truong Duy Cong

Student ID

GCD210309

Class

GCD1102

Assessor name


Tran Thanh Truc

Student declaration
I certify that the assignment submission is entirely my own work and I fully understand the consequences of plagiarism. I understand that
making a false declaration is a form of malpractice.
Student’s signature
Grading grid
P1

P2

P3

P4

M1

M2

Duy Cong
D1


Summative Feedback:

Grade:
Assessor Signature:
Internal Verifier’s Comments:

Signature & Date:


Resubmission Feedback:

Date:


Table of Contents
Introduction
P1. Analyse the evolution and fundamental concepts of Cloud Computing.
1. Overview of cloud computing
2. Client-server
3. Peer-to-peer
4. High Performance Computing
5. Deployment Models
P2. Design an appropriate architectural Cloud Computing framework for a given scenario.
1. ATN problems
2. ATN Cloud Architecture
P3. Define an appropriate deployment model for a given scenario.
P4. Compare the service models for choosing an adequate model for a given scenario.
1. Cloud Service Models
2. Choose the service for ATN company


Table of Figures:
Figure 1: Client - Server Relationship
Figure 2: P2P model
Figure 3: P2P example
Figure 4: HPC parallel
Figure 5: HPC cluster
Figure 6: HPC distributed
Figure 7: Public Deployment
Figure 8: Private cloud
Figure 9: Community Cloud
Figure 10: Hybrid cloud
Figure 11: Cloud service consumers are sending requests to a cloud service (1). The automated scaling listener monitors the cloud service to determine whether predefined capacity thresholds are being exceeded (2)
Figure 12: IaaS
Figure 13: PaaS
Figure 14: Software as a Service


Introduction
P1. Analyse the evolution and fundamental concepts of Cloud Computing.
1. Overview of cloud computing:
Cloud computing is a paradigm that enables users to consume computer system resources on demand, such as data storage and computational power, without actively managing the underlying infrastructure. Cloud computing is defined as a collection of networked pieces that deliver services, forming an amorphous "cloud" that users can access without addressing or maintaining individual components.
This means that a computational resource or infrastructure, such as server hardware, storage, networking, or application software, is made available by a cloud provider through its website or facility. The provider is accessible via the Internet from any remote place and from any local computing equipment. Furthermore, usage and accessibility are tailored to customers' requirements and desires, commonly known as the pay-as-you-go or pay-per-use model.

2. Client-server:

Figure 1: Client - Server Relationship.

In the client/server model, all end systems are divided into clients and servers, each designed for specific purposes.

2.1 Client
The client takes an active role and initiates the communication session by sending a request to the server. At this point, the client must have knowledge of the available servers and the services they provide. However, clients can communicate only with servers; they cannot see each other.
Clients are devices/programs that request services from servers. Clients can be distinguished according to the functionality they provide and the amount of processing load they carry.


There are 2 types of client:
Fat client
Fat clients are devices/programs that are powerful enough and operate with limited dependence on
their server counterparts.
Fat clients as devices – a user workstation that is powerful and fully-featured in its own right.
 For example, a desktop PC, a laptop, a netbook
Fat clients as programs – a client that carries a relatively large proportion of the processing load.
 For example, the Lineage II gaming client (more than 2 GB in size).
Thin client
Thin clients are devices/programs that have very limited functionality and depend heavily on their
server counterparts.
Thin clients as devices – a user workstation that contains a minimal operating system and little or
no data storage
 For example, Sun Ray thin clients in Lintula, room TC215 (Moltchanov, 2013)
Thin clients as programs – a client mainly provides a user interface, while the bulk of processing
occurs in the server
 For example, the OnLive gaming client (about 10 MB in size) (Moltchanov, 2013)
2.2 Server
Servers have a passive role and respond to their clients by acting on each request and returning results
One server generally supports numerous clients.
The purpose of servers is to provide some predefined services for clients.
There are 2 types of servers:

Iterative server
Iterative design is quite simple and is most suitable for short-duration services that exhibit
relatively little variation in their execution time.
This means that if the time to handle a client can be long, the waiting time experienced by subsequent
clients may be unacceptable.
Examples of Internet services deployed as iterative servers include echo (RFC 862) and daytime (RFC 867).
Iterative servers iterate through the following steps:
- Step 1: Wait for a client request to arrive
- Step 2: Process the request and send the response back to the client
- Step 3: Go back to Step 1
So, iterative servers handle clients sequentially, finishing with one client before servicing the next.
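As a rough sketch (not part of the assignment brief), the three steps above can be expressed with Python's standard socket module; the echo behaviour, the localhost address, and the single-request limit are illustrative assumptions:

```python
import socket
import threading

def serve_iteratively(listener, max_requests):
    """Handle clients strictly one at a time, as an iterative server does."""
    for _ in range(max_requests):
        conn, _addr = listener.accept()   # Step 1: wait for a client request
        with conn:
            data = conn.recv(1024)        # Step 2: process the request...
            conn.sendall(data)            # ...and send the response back (echo)
        # Step 3: loop back and accept the next client

# Illustrative setup: an echo server on an OS-chosen local port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]

server = threading.Thread(target=serve_iteratively, args=(listener, 1))
server.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)   # the echoed request
client.close()
server.join()
print(reply)  # b'hello'
```

Because the accept-handle loop is sequential, a second client would have to wait until the first is fully serviced, which is exactly the limitation noted above.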


Concurrent server
Although concurrent design is more difficult, it produces better results. When requests arrive at the
server faster than they can be processed sequentially, handling them concurrently improves responsiveness
and decreases latency. Concurrent servers are frequently used to implement Internet
services like HTTP, Telnet, and FTP.
The following tasks are carried out by concurrent servers:
- Step 1: Wait for a client request to come in
- Step 2: Handle the request using a new process, task, or thread
- Step 3: Return to Step 1
As a result, concurrent servers respond to client requests in parallel.
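A minimal thread-per-client sketch of the same loop, again using Python's standard library; the uppercase "processing" and the two-request limit are purely illustrative assumptions:

```python
import socket
import threading

def handle(conn):
    """Service one client in its own thread so other clients are not blocked."""
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())   # trivial illustrative "processing"

def serve_concurrently(listener, max_requests):
    workers = []
    for _ in range(max_requests):
        conn, _addr = listener.accept()                    # Step 1: wait for a request
        t = threading.Thread(target=handle, args=(conn,))  # Step 2: new thread per client
        t.start()
        workers.append(t)                                  # Step 3: accept the next at once
    for t in workers:
        t.join()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]

server = threading.Thread(target=serve_concurrently, args=(listener, 2))
server.start()

replies = []
for msg in (b"ping", b"pong"):
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(msg)
    replies.append(c.recv(1024))
    c.close()
server.join()
print(replies)  # [b'PING', b'PONG']
```

The key difference from the iterative version is that the accept loop never waits for a handler to finish, so a slow client no longer delays the others.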
2.3. Relation between Client and Server
Below are some of the characteristics to distinguish between client and server.
Hardware role: The terms "client" and "server" generally refer to the principal roles performed by networked
hardware.
A "client" is usually something like a PC used by an individual, and it essentially initiates the
conversation by sending a request.
A "server" is usually a powerful machine dedicated to responding to client requests, sitting in a
server room somewhere that no one other than its administrator has ever seen.
Software roles: TCP/IP uses different pieces of software for many protocols to implement “client” and
“server” roles.
Client software is usually found on client hardware and server software on server hardware, but not
always.
Some devices may run both client and server software.
Web clients: Mozilla Firefox, Internet Explorer, Google Chrome, . . .
For example, “Web Statistics” by W3Schools
Web servers: Apache, Microsoft IIS, GWS, . . .
For example, “Web Server Survey” by Netcraft Ltd.
Transaction role: During a communication process, the client is the entity that initiates the
communication or sends a query; the server responds, often by providing information. Usually, the client
software on the client hardware initiates a transaction, but this is not always the case.
For example, when two SMTP servers communicate to exchange email, both are server programs
running on server hardware. However, during the exchange, one device acts as a client
while the other acts as a server.


3. Peer-to-peer:
Peer-to-peer (P2P) refers to a decentralized network architecture in which interconnected nodes, known as
peers, share resources directly with each other without the need for a centralized administrative system. In
a P2P network, peers have equal privileges and can both consume and supply resources to other
participants in the network. This stands in contrast to the traditional client-server model, where resource
consumption and supply are divided between clients and servers.
P2P computing has been used in a variety of application domains, but it achieved major popularity
through file-sharing systems such as Napster, first released in 1999. Thanks to systems like Napster,
millions of Internet users were able to connect directly, form groups, and collaborate on the creation
of user-created search engines, virtual supercomputers, and file systems.


Figure 2: P2P model

Advantages
- Improved scalability and reliability; no central point of failure.
- No need for a dedicated application and database server.
Disadvantages
- Lack of centralized control.
- Computers that share their resources may suffer sluggish performance.
- Low security.


P2P example:
File sharing is the exchange of media and software files between uploaders and downloaders. In addition
to peer-to-peer networking, file-sharing services can provide scanning and security for shared files. They
may also give users the option to bypass intellectual property rights anonymously, or they
may enable intellectual property enforcement.

Figure 3: P2P example


4. High Performance Computing:
4.1. Definition
High Performance Computing (HPC) is the use of supercomputers and computer clusters to address
advanced computational problems. It entails the integration of several disciplines such as digital electronics,
computer architecture, system software, programming languages, algorithms, and computational
approaches. HPC technologies include the tools and systems needed to design and implement high-performance computing systems.
4.1.1. Parallel
Parallel computing is another aspect of HPC. A group of processors collaborates to solve a computing
problem. These processor devices, often known as CPUs, are generally of the homogeneous variety. As a
result, this definition largely coincides with HPC and is broad enough to cover supercomputers with hundreds or
thousands of processors linked to other resources.

Figure 4: HPC parallel

The way applications are executed distinguishes ordinary computers from parallel computers. Because
several processor machines are used concurrently in parallel computing, the following rules apply:
 It uses several processors (many CPUs) to run.
 A problem is divided into discrete components that can be tackled at the same time.
 Each component is further divided into a set of instructions.
 Instructions from each component are executed on separate processors at the same time.
 A centralized control/coordination system is used.


4.1.2. Cluster
An HPC cluster is a collection of several distinct servers (computers), known as nodes, that are linked
together via a fast interconnect. There may be several sorts of nodes for various types of jobs.
A typical HPC cluster has:
- A head node, also known as a login node, where users log in, and a specialized data transfer node.
- Regular compute nodes (where the majority of calculations are performed).
- "Fat" compute nodes with at least 1 TB of RAM.
- GPU nodes (computations on these nodes can be executed on both CPU cores and a Graphics
Processing Unit).
- An InfiniBand switch that connects all of the nodes.

Figure 5: HPC cluster

All cluster nodes are equipped with the same components as a laptop or desktop computer, including CPU
cores, RAM, and disk space. The amount, quality, and power of these components distinguish a personal
computer from a cluster node.

Users connect from their PCs to the cluster head node using the SSH application.


4.1.3. Distributed:
Distributed computing is also a computing system, composed of several computers or processing units
linked by a network, which might be homogeneous or heterogeneous, yet operates as a single system.
The CPUs in a distributed system can be physically close together and connected to a local network, or
geographically dispersed and connected to a wide area network. In a distributed framework, any number
of possible configurations of processing devices, such as mainframes, PCs, workstations, and
minicomputers, facilitates heterogeneity. The goal of distributed computing is to make such a network
function as if it were a single computer.

Figure 6: HPC distributed

Distributed computing systems are superior to centralized systems because they support the following
characteristics:
• Scalability: The system's ability to be readily expanded by adding more machines as needed, and vice
versa, without compromising the current arrangement.
• Redundancy or replication: In this case, many computers can provide the same services, such that even if
one is unavailable (or fails), work continues since other equivalent computing resources are accessible.
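The redundancy point can be illustrated with a small failover sketch; the replica functions below are hypothetical stand-ins for equivalent networked servers, not any real API:

```python
def call_with_failover(replicas, request):
    """Redundancy sketch: try equivalent servers until one responds."""
    for serve in replicas:
        try:
            return serve(request)
        except ConnectionError:
            continue  # this replica is unavailable; work continues on the next one
    raise RuntimeError("all replicas failed")

# Hypothetical replicas: the first is down, the second works.
def down(_request):
    raise ConnectionError

def up(request):
    return f"handled:{request}"

result = call_with_failover([down, up], "job-1")
print(result)  # handled:job-1
```

This mirrors the property described above: because several computers provide the same service, the failure of one does not stop the work.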
In real life, there are several uses of HPC, although HPC is not common in daily life because of its
specialized purposes. Given its power, it is suitable for tasks that require fast, reliable, and stable
performance. One example is deep learning: with such computing power, calculations can be completed
much faster and data can be analyzed rapidly. At IBM (United States), two supercomputers that are among
the fastest in the world were built for machine learning purposes.


5. Deployment Models
A cloud deployment model denotes a certain sort of cloud environment, defined largely by ownership,
scale, and access.

The following are the four most prevalent cloud deployment models:
• The public cloud.
• The community cloud.
• The private cloud.
• The hybrid cloud.
5.1. Public Deployment
A public cloud is a cloud environment that is open to the public and is owned by a third-party cloud
provider. IT resources on public clouds are often provided using the previously stated cloud delivery
models and are typically given to cloud customers for a fee or are monetized through other means such as
advertising.
According to NIST, the public cloud is cloud infrastructure that is available for open usage by the general
public. It might be owned, controlled, and operated by a commercial, academic, or government body, or
some mix of these. It is present on the cloud provider's premises.

Figure 7: Public Deployment

There are several advantages and disadvantages of a public cloud.
Advantage:
- There is no need for maintaining the cloud.
- There is no limit for the number of users
- There is no need of establishing infrastructure for setting up a cloud
- They are comparatively less costly than other cloud models
- The public cloud is highly scalable


Disadvantage:
- Low security
- Privacy and organizational autonomy are not possible.
In practice, the public cloud is frequently used by small businesses and systems. As noted among the
advantages above, scalability and minimal maintenance requirements are the main points that make this
kind of deployment flexible. Furthermore, the low cost is also very important, making it efficient for
small businesses.
5.2. Private Deployment:
The private cloud deployment model is diametrically opposed to the public cloud deployment model. It
is a dedicated arrangement for a single user (client); there is no need to share the hardware with anyone.
The contrast between private and public clouds is in how all of the hardware is handled. It is sometimes
referred to as the "internal cloud" and refers to the capacity to access systems and services within a certain
border or business. The cloud platform is deployed in a secure cloud environment secured by robust
firewalls and overseen by an organization's IT staff. The private cloud provides greater control over cloud
resources.

Figure 8: Private cloud

The advantages of the Private Cloud Model
- Improved control: You are the property's single owner. You obtain total control over service integration,
IT operations, rules, and user behavior.
- Data Security and Privacy: It is appropriate for keeping company information that only authorized
personnel have access to. Improved access and security can be obtained by segmenting resources within
the same infrastructure.
- Compatibility with legacy systems: This method is intended for use with legacy systems that cannot
connect to the public cloud.
- Customization: Unlike a public cloud deployment, a private cloud enables a firm to adapt its solution to
match its unique requirements.
Disadvantages of the Private Cloud Model

- Less scalable: Private clouds scale only within a limited range, as there are fewer clients.
- Costly: Private clouds are more costly because they provide personalized facilities.


5.3. Community Cloud

It allows systems and services to be accessible to a group of organizations. It is a distributed system
created by integrating the services of different clouds to address the specific needs of a community,
industry, or business. The community's infrastructure can be shared between organizations
that have common concerns or tasks. It is generally managed by a third party or by one
or more organizations in the community.

Figure 9: Community Cloud

Advantages of the Community Cloud Model
- Cost Effective: Because the cloud is shared by various enterprises or communities, it is cost effective.
- Security: The community cloud is more secure than the public cloud.
- Shared resources: It enables various companies to share resources, infrastructure, and so on.
- Data sharing and cooperation: It is excellent for both data sharing and collaboration.
Disadvantages of the Community Cloud Model
- Limited Scalability: Because multiple companies share the same resources based on their collaborative
interests, the community cloud is significantly less scalable.
- Rigid in customization: Because data and resources are shared across multiple organizations based on
their common interests, if one organization wishes to make modifications based on their requirements,
they cannot because it would affect other organizations.


5.4. Hybrid Cloud
By bridging the public and private worlds with a layer of proprietary software, hybrid cloud computing
gives the best of both worlds. With a hybrid solution, you may host the app in a safe environment while
taking advantage of the public cloud’s cost savings. Organizations can move data and applications
between different clouds using a combination of two or more cloud deployment methods, depending on
their needs.

Figure 10: Hybrid cloud


Advantages
- Having power of both the private and public clouds.
- Better security than the public cloud.
- Highly scalable.
Disadvantages:
- The security features are not as good as those of the private cloud.
- Managing a hybrid cloud is complex.
- It has stringent SLAs.


6. Characteristics of Cloud computing
Cloud computing has five essential characteristics. Without any of these characteristics, it's not cloud
computing.
1. On-Demand Self-Service
With cloud computing, you can provision computing services, like server time and network storage,
automatically. There is no need for you to communicate with the service provider. Cloud clients may
access their cloud accounts using an online self-service interface to examine their cloud services, manage
their consumption, and provision and de-provision services.
2. Broad Network Access
Another essential cloud computing characteristic is broad network access. You can access cloud services
over the network and on portable devices like mobile phones, tablets, laptops, and desktop computers. A
public cloud uses the internet; a private cloud uses a local area network. Latency and bandwidth both play
a crucial role in cloud computing and broad network access, as they impact the quality of service.
3. Pooling of Resources
With resource pooling, a multi-tenant model allows multiple customers to share physical resources. This
paradigm distributes and redistributes real and virtual resources in accordance with demand. Customers
can share the same infrastructure or apps with multi-tenancy while still retaining their security and
privacy. Customers won't be able to specify the precise location of their resources, but they might be able
to do so at a higher level of abstraction, like a country, state, or data center. Customers can pool a variety
of resources, including bandwidth, processing, and memory.
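A toy resource-pool sketch can make the multi-tenant idea concrete; the capacity figures and tenant names are hypothetical, and real providers of course pool far more than storage:

```python
class ResourcePool:
    """Multi-tenant pooling sketch: tenants draw from shared physical capacity."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocations = {}          # tenant -> GB currently assigned

    def allocate(self, tenant, gb):
        if gb > self.free():
            raise RuntimeError("pool exhausted")
        self.allocations[tenant] = self.allocations.get(tenant, 0) + gb

    def release(self, tenant, gb):
        self.allocations[tenant] -= gb  # reclaimed capacity returns to the pool

    def free(self):
        return self.capacity_gb - sum(self.allocations.values())

pool = ResourcePool(100)
pool.allocate("tenant-a", 40)   # two customers share the same physical pool
pool.allocate("tenant-b", 30)
pool.release("tenant-a", 10)    # released capacity is redistributed on demand
print(pool.free())  # 40
```

Each tenant sees only its own allocation, while the provider assigns and reassigns the shared capacity in accordance with demand, as described above.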

4. Rapid Elasticity
Customers can scale quickly based on demand thanks to the elastic provisioning and releasing capabilities
of cloud services. There are almost no limits on the capabilities that can be provisioned. Customers can
use these capabilities whenever they want and in any quantity. Customers may grow cloud capacity,
pricing, and use without incurring additional contracts or charges. You won't need to buy computer
hardware thanks to rapid elasticity; instead, you can leverage the cloud provider's computing resources.
5. Measured Service
In cloud systems, a metering capability optimizes resource usage at a level of abstraction appropriate to
the type of service. For example, you can use a measured service for storage, processing, bandwidth, and
users. Payment is based on actual consumption by the customer via a pay-for-what-you-use model.
Monitoring, controlling, and reporting resource use creates a transparent experience for both consumers
and providers of the service.
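The pay-for-what-you-use idea reduces to metering each resource and multiplying by a unit rate; the usage figures and prices below are purely illustrative assumptions, not any provider's actual tariff:

```python
def monthly_bill(usage, rates):
    """Pay-for-what-you-use: multiply each metered quantity by its unit rate."""
    return sum(usage[resource] * rates[resource] for resource in usage)

# Hypothetical metered usage and unit prices (illustrative numbers only).
usage = {"storage_gb": 120, "compute_hours": 300, "bandwidth_gb": 50}
rates = {"storage_gb": 0.02, "compute_hours": 0.05, "bandwidth_gb": 0.08}

bill = monthly_bill(usage, rates)
print(round(bill, 2))  # 2.40 + 15.00 + 4.00 = 21.4
```

Because the meter records actual consumption, the same calculation is transparent to both the consumer and the provider.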


P2. Design an appropriate architectural Cloud Computing framework for
a given scenario.
1. ATN problems:
ATN is a Vietnamese company selling toys to teenagers in many provinces all over Vietnam. The
company has a revenue of over 700,000 dollars per year. Currently, each shop has its own database that
stores transactions for that shop only. Each shop has to send its sales data to the board of directors
monthly, and the board needs a lot of time to summarize the data collected from all the shops. Besides,
the board can't see stock information updated in real time.
ATN solution:
The above scenario shows that ATN has large revenue and a huge data system. Currently, each store
keeps its own transaction database and must send its data to the directors, who take a lot of time to
summarize the gathered data. Based on this scenario, ATN should use a cloud computing service for the
following reasons:
Firstly, using cloud computing, managers can manage all of the branch stores' data anywhere, as
long as they have an Internet connection. Another advantage is the flexibility and mobility of cloud
computing. The corporate chain can be active at any time, which can reduce the number of corporate
workstations. Furthermore, cloud computing allows company executives to efficiently monitor company
activities.
Secondly, to store data on a traditional server, a business has to buy and install hardware and
software for all machines in the company to be compatible with each other, and every new purchase or
upgrade repeats this process. This is too costly in human and material resources for a large-scale
company like ATN. However, if ATN uses the cloud, it only needs to pay for the services it buys. With
cloud computing, IT staff will not need to spend a lot of time installing new hardware and software or
reconfiguring devices, and no time is wasted searching for and transferring data within the company. It
provides everyone with the same technology platform, helping everyone operate on the same footing.
Furthermore, service charges are based on storage capacity, number of users, and usage time, so ATN can
easily choose a budget that saves costs. If the data outgrows the plan, more storage can be obtained and
the storage plan upgraded in minutes, instead of the traditional way of adding servers, configuring them,
and buying licenses.
Then, this cloud model makes access easy for users. It allows managers and employees in stores
to access a particular vendor's technology service in the cloud without any knowledge of or experience
with that technology, and regardless of the infrastructure that serves it. Cloud computing eliminates
redundant or repetitive tasks like data recovery. It is also a great choice for companies to improve
performance, as store teams can easily share data anywhere and at any time.
Last but not least, in ATN's business activities, as in any industry, information security is always a
very important, indispensable issue for businesses. The security of a virtual cloud server is maintained at
a high level and upgraded far more frequently than that of conventional servers. In addition, the company
does not have to worry about data backup plans: data will always be available as long as the company has
Internet access. Using cloud computing will address data loss problems and disaster recovery planning.


2. ATN Cloud Architecture

The dynamic scalability architecture is an architectural model based on a set of
predetermined scaling conditions that trigger the dynamic allocation of IT resources from resource pools.
Since unused IT resources are reclaimed without the need for manual intervention, dynamic
allocation permits utilization to vary as determined by changes in consumption demand.
The automated scaling listener is configured with workload thresholds that determine when additional
IT resources must be added to handle the workload. Based on the terms of a given cloud consumer's
provisioning contract, this mechanism can be equipped with logic that establishes the quantity of extra IT
resources that can be dynamically provisioned.

Figure 11: Cloud service consumers are sending requests to a cloud service (1). The automated scaling listener monitors the cloud service to
determine whether predefined capacity thresholds are being exceeded (2)

We may provide a Cloud architecture example based on the image above.
(1) To begin with, users of cloud services are submitting requests to the cloud service.
(2) The automated scaling listener checks the cloud service in the following step to see if the
predetermined capacity threshold has been reached. There are two potential outcomes in this situation.


Case 1: The workload in this process does not exceed the capacity threshold. In other words, if the
workload is below or equal to the capacity threshold, the cloud service processes the request as normal.
For instance, if the workload is 3 and the capacity threshold is 4, the request is passed on to the cloud
service to continue running.
Case 2: More service requests keep arriving from cloud service consumers, and the workload rises
above the acceptable performance level. Based on a predefined scaling policy, the scaling listener
automatically chooses the subsequent course of action.
- If the cloud service implementation is deemed eligible for further scaling, the scaling listener
starts the scaling process right away.
- The scaling listener signals the resource replication mechanism, which automatically creates
multiple copies of the cloud service to satisfy the requests of all the consumers.
- The automated scaling listener continues to monitor, evaluate, and add IT resources as needed
while the workload escalates. However, if the cloud service implementation does not qualify for
additional scaling, the user's request is returned or canceled.
Example: The workload increases to 6 while the current threshold is only 4. Based on the predefined
scaling policy, the scaling listener automatically chooses the subsequent course of action. The scaling
process starts if there are enough resources, such as enough RAM, to scale the deployment.
The scaling listener then automatically signals the resource replication mechanism, creating additional
instances of the cloud service in response to the quantity of user requests. Incoming requests are then
routed to the replicated cloud services until the workload again fits within the current threshold.
However, requests from the user to the cloud service are canceled if the available capacity does not
allow the deployment to scale.
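The two cases above can be condensed into a small decision sketch; the threshold-times-instances capacity model and the function shape are illustrative assumptions, not part of the architecture described:

```python
def scaling_decision(workload, threshold, instances, can_scale):
    """Mimic the automated scaling listener: process, scale out, or reject."""
    if workload <= threshold * instances:
        return instances, "process normally"      # Case 1: within capacity
    if not can_scale:
        return instances, "reject request"        # Case 2b: not eligible for scaling
    # Case 2a: replicate the cloud service until capacity covers the workload.
    while workload > threshold * instances:
        instances += 1                            # resource replication adds a copy
    return instances, "scaled"

# Workload 3 against threshold 4: handled as normal (Case 1).
print(scaling_decision(3, 4, 1, True))    # (1, 'process normally')
# Workload 6 against threshold 4: one extra instance is provisioned (Case 2a).
print(scaling_decision(6, 4, 1, True))    # (2, 'scaled')
# Workload 6 but the deployment cannot scale: the request is rejected (Case 2b).
print(scaling_decision(6, 4, 1, False))   # (1, 'reject request')
```

The three calls correspond directly to the normal-processing case, the replication path, and the cancellation path described in the text.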


