
Solving Quality and Maintenance
Problems with AI
Combining Machine Learning, Deep Learning,
and Associative Memory Reasoning
to Improve Operations

Alice LaPlante and Maliha Balala

Beijing • Boston • Farnham • Sebastopol • Tokyo


Solving Quality and Maintenance Problems with AI
by Alice LaPlante and Maliha Balala
Copyright © 2018 O’Reilly Media, Inc. All rights reserved.
Printed in the United States of America.
Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.
O’Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles. For more information, contact our corporate/institutional sales department at 800-998-9938.

Editor: Nicole Tache
Production Editor: Melanie Yarbrough
Copyeditor: Octal Publishing, Inc.
Proofreader: Matthew Burgoyne

Interior Designer: David Futato
Cover Designer: Karen Montgomery
Illustrator: Rebecca Demarest

April 2018: First Edition

Revision History for the First Edition
2018-04-27: First Release

The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Solving Quality and Maintenance
Problems with AI, the cover image, and related trade dress are trademarks of O’Reilly Media, Inc.
While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
This work is part of a collaboration between O’Reilly and Intel. See our statement of editorial independence.

978-1-491-99953-0
[LSI]


Table of Contents

Executive Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  v

1. Introduction and Primer on Predictive Quality and Maintenance . . . . . . . . . .  1
   Overview  1
   Artificial Intelligence: Clarifying the Terminology  5
   More Companies Looking Toward Cognitive Computing  10

2. Complementary Learning and Intel Saffron AI . . . . . . . . . . . . . . . . . . . . .  13
   Complementary Learning as the Future of Predictive Quality and Maintenance Solutions  13
   Intel Saffron AI: Associative-Memory Learning and Reasoning and Complementary Learning in Action  15

3. Using AI-Based PQM Solutions to Solve Issues in Manufacturing, Aerospace, and Software . . .  19
   PQM Issues in the Manufacturing, Aerospace, and Software Industries  19
   AI-Based PQM Solving Real-World Issues: Two Use Cases  21
   Getting Started with AI-Based PQM Solutions  25



Executive Summary

As artificial intelligence (AI) enters the business mainstream, one of its most promising applications is anticipating quality and maintenance problems before they cause real damage. Called predictive quality and maintenance (PQM), these solutions are being deployed at an accelerating rate, especially in the manufacturing, aerospace, and software industries.
But not all PQM solutions are created equal. Those based on a combination of
machine learning, deep learning, and—in particular—cognitive computing create
a truly unique out-of-the-box AI-based PQM solution.
This report is organized into three chapters. In Chapter 1, we introduce AI-based PQM and show how today’s market for quality and maintenance applications is evolving. In Chapter 2, we show that because none of the various types of AI can solve all PQM problems alone, applying them simultaneously is the key to success. This has led to cognitive computing as a basis for what is called complementary learning. We also introduce Intel Saffron AI as the only solution applying complementary learning principles to today’s PQM challenges. Finally, in Chapter 3, we discuss using AI-based PQM solutions to solve issues in the manufacturing, software, and aerospace industries.




CHAPTER 1

Introduction and Primer on Predictive
Quality and Maintenance

Overview
Following years of being dismissed as largely “hype,” we’re seeing a growing number of positive headlines about artificial intelligence (AI): “The artificial intelligence race heats up” (The Japan Times); “Healthcare’s Artificial Intelligence Market May Hit $6 Billion” (Forbes); and “Most Americans Already Using Artificial Intelligence Products” (Gallup). Even the Wall Street Journal is reporting on recent market advances. “After decades of promise and hype, artificial intelligence has finally reached a tipping point of market acceptance,” wrote Irving Wladawsky-Berger in early 2018.
Indeed, the artificial intelligence market is expected to grow to $190.61 billion by 2025 from $21.46 billion in 2018, at a compound annual growth rate (CAGR) of 36.62%, according to IDC. To put that in perspective, in 2018 the average technology budget for US businesses is expected to grow just under 6%, according to Forrester.
AI is transforming virtually all industries—from retail to healthcare to manufacturing, aerospace, and banking. Why? Because AI can deliver results in the form of insights. A Forrester report forecasts that companies that use insights to drive their businesses will grow at a 27% annual rate at a time when global gross domestic product (GDP) will rise only 3.5% annually (see Figure 1-1).



Figure 1-1. Revenue forecasts for insight-driven businesses (source: Predictions 2017: Artificial Intelligence Will Drive The Insights Revolution, November 2016, Forrester)
One segment—and a growing one—of the overall AI applications market is AI-based predictive quality and maintenance (PQM). PQM is a relatively new technology area designed to help companies predict when issues or defects might occur in a product, advise on how to identify and fix them, and—the ultimate goal—prevent problems before they cause serious damage. AI is significantly adding value to PQM solutions on the market today.

PQM: A Primer
PQM solutions focus on detecting quality issues and improving operational processes to address them by accessing and analyzing data, sometimes in real time. PQM is a relatively new merger of predictive quality and predictive maintenance solutions. These separate technology areas previously addressed the two issues—ensuring product quality and anticipating maintenance needs—as discrete, distinct technologies. With PQM solutions, both quality and maintenance activities are addressed together rather than as separate issues.
The idea behind a PQM solution is that if companies want to gain a competitive edge, they must prioritize how to allocate their resources, cost, and time when it comes to both improving product quality and maintaining equipment in a more timely and efficient manner.
Here are some examples of questions that PQM solutions are helping to answer:
• How can we capture experts’ knowledge and skills and streamline them within workflows and processes so that they can be shared and accessed by everyone?
• How can we detect anomalies and failure patterns to determine which equipment and operational processes are likely to fail?


• How can we efficiently triage issues and conduct comprehensive root-cause analyses?
• How can we optimize spare-parts inventory to reduce inventory costs while remaining proactively responsive?
• How can we catch and address product quality issues more quickly and cost-effectively and anticipate where they will occur next?
• How can we identify areas to create efficient preventative maintenance programs to ensure maximum uptime and safety while still maintaining efficiency?
AI-based PQM solutions differ from traditional predictive quality and maintenance ones because they analyze the actual condition of a product rather than just using average or expected statistics to predict when quality corrections or maintenance will be required.
The latest PQM solutions harness data gathered by both the Internet of Things (IoT) and traditional legacy systems. Recent research suggests that the market for PQM applications will grow from $2.2 billion in 2017 to $10.9 billion by 2022, a 39% annual growth rate (see Figure 1-2). Of the top 10 uses predicted for AI in 2021, PQM comes in fifth place, according to IDC.

Figure 1-2. PQM market growth to 2022 (source: IoT Analytics)
PQM solutions can be said to have two separate but equal concerns: quality and
maintenance.
The longer that companies put off fixing quality issues in products—whether in the design or manufacturing phase—the more costs accrue. Indeed, the most expensive time to fix a problem is after the product has shipped, when brand reputation exposure is added to the costs of recalling the product and fixing the issue.
First, consider quality—the “Q” in PQM. AI-based PQM solutions allow companies to work through defects and other quality issues much faster. For example, when Intel ships a new chip, there are inevitably bugs reported by OEMs and customers. Such products are, after all, very complex, with many integrated parts, and Intel must act quickly to resolve any issues that arise.
We can use AI-based PQM solutions to solve quality problems faster, which lowers the time-to-market or time-to-resolution while increasing customer satisfaction. Not incidentally, PQM solutions don’t just identify the root cause of a single, isolated quality problem, but provide insights into more general issues within design or manufacturing, which allows businesses to build better products and, ultimately, increase customer satisfaction and revenues.
Next, consider maintenance—the “M” in PQM. Intel believes that the top three drivers of predictive maintenance are the need to increase uptime, reduce risk, and stop over-maintaining assets.
Increase uptime
Unplanned downtime is a major cost driver in any industry that must maintain large inventories of capital assets. For an airline, for example, delaying flights due to unplanned maintenance can cost thousands of dollars each minute. Unplanned shutdowns of oil platforms can run into the millions of dollars. And in manufacturing plants, the costs of disruptions go directly to the bottom line. It is the goal of every organization to eliminate unplanned downtime in favor of planned maintenance. PQM solutions can also help with planned maintenance by shortening maintenance operations windows.
Reduce risk
Businesses strive to comply with safety regulations. They also perform preventive maintenance and take common-sense precautions. Because of this, the potential for catastrophic accidents is minimized. But the risk is always there. The Deepwater Horizon disaster was caused in large part by equipment failure. Recently, the engine of a United Airlines flight fell apart in mid-flight. Although the aircraft was able to make a safe emergency landing, this incident occurred despite United’s compliance with Federal Aviation Administration (FAA) regulations that are defined to mitigate such risks. When something of this magnitude happens, the repercussions go well beyond the financial.
Stop over-maintaining assets
Both the fear of unplanned downtime and the risk of catastrophe lead many businesses to over-maintain most of their capital assets. Indeed, many businesses feel that regulatory bodies, such as the FAA for airlines and the Food and Drug Administration (FDA) for medical devices, actually require companies within their jurisdiction to maintain assets significantly more frequently than they need to.
Some predictive maintenance studies report that PQM solutions can reduce
downtime by as much as 50%, while reducing maintenance costs between 10%
and 40%. Manufacturers, for example, can move from a reactive maintenance
model to a proactive one, giving them insight into when and where machine
breakdowns might occur so that they can keep the manufacturing line going.
According to McKinsey, in the manufacturing industry alone, these savings will
have a potential economic impact of nearly $630 billion per year by 2025.

Harnessing Dark Data with PQM Solutions
Data is continuously increasing, and businesses are challenged to make sense of it all. The vast majority is “dark data”: untapped data in the form of human interactions, intelligence, printed content, photos, video, voice, and social media interactions that come in unstructured forms. Notably, IDC estimates that only slightly more than 20% of data is being utilized today, meaning that 80% is “dark.”
To use this dark data, businesses need to convert this information into a form
that they can understand and use.

The AI-based PQM solutions championed by Intel, IBM, and GE harness dark data from multiple sources to predict potential quality and maintenance issues before they affect customers—and the bottom line. In particular, Intel Saffron AI uses several key AI technologies—machine learning, deep learning, and, especially, cognitive computing—together in what is called complementary learning to offer a truly unique out-of-the-box PQM solution.
For this report, we interviewed companies from the manufacturing, aerospace, and software industries about the key business challenges they face, how AI-based PQM solutions are helping them address these challenges, and how they see Intel Saffron AI helping them make better decisions, solve problems, and gain lucrative returns.

Artificial Intelligence: Clarifying the Terminology
It can be difficult to decipher all the talk about AI because so many different terms are used—some of them interchangeably—and AI’s capabilities seem to span so many possible scenarios.
The best way to think about AI is as a large umbrella of technologies, methodologies, and algorithms that help humans perform tasks more easily, quickly, and efficiently. Under this umbrella resides a large—and growing—collection of techniques such as machine learning, image recognition, neural networks, speech recognition, deep learning, natural-language processing, handwriting recognition, and cognitive computing, among others, many of which overlap or complement one another to help enterprises resolve challenges.

For example, machine learning focuses on real-world problems by processing—and learning from—large amounts of data. Deep learning, which many consider a subset of machine learning, uses neural networks to sort through nearly unimaginable volumes of structured and unstructured data and come to conclusions. Cognitive computing is a subset of AI that attempts to mimic the way humans think in order to address more complex scenarios for decision making.
John Launchbury from the US Defense Advanced Research Projects Agency
(DARPA) gives an interesting overview of the evolution of AI in his talk “A
DARPA Perspective on Artificial Intelligence.”
At its heart, Launchbury says, AI takes different kinds of mathematically based
formulas (algorithms) to make sense of data and come to a decision on what to
do with it, and in this way creates “intelligent” systems and “smart” things.
We’re in what Launchbury calls the “third wave” of AI. Today, AI systems have moved beyond data-crunching algorithms to human-like cognitive ones that can explain the reasoning behind their decisions by making associations based on context—forming associations autonomously by connecting concepts, observations, knowledge, and senses. Discovering associated patterns for reasoning and inference is fundamental to both human intelligence and cognitive computing.
In the PQM solutions space, the relevant AI technologies are machine learning,
deep learning, and cognitive computing.

Machine Learning
Under the larger umbrella of AI, machine learning refers to a broad range of algorithms and methodologies that can process large amounts of data so as to identify issues or trends. For example, a machine learning system can learn to distinguish malfunction scenarios of a network router by learning from training examples of previous episodes of malfunctions and normal operations of the router.

In other words, it learns from example. There’s no need to manually code in
“rules” that it must follow. The more data it consumes, the more accurate it will
be.
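
To make the idea concrete, here is a minimal sketch in Python (scikit-learn is assumed to be available; the router features, readings, and labels are invented for illustration and are not from the report): a classifier learns to separate normal operation from malfunction episodes purely from labeled historical examples.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical router telemetry: [cpu_load, packet_loss_rate, temperature_c].
# Label 1 marks a previously recorded malfunction episode, 0 normal operation.
X = np.array([
    [0.30, 0.001, 45.0],
    [0.35, 0.002, 47.0],
    [0.40, 0.003, 50.0],
    [0.92, 0.150, 78.0],
    [0.88, 0.200, 81.0],
    [0.95, 0.180, 85.0],
])
y = np.array([0, 0, 0, 1, 1, 1])

# The model infers the "rules" separating normal from faulty behavior from the
# examples themselves; nothing is hand-coded, and it generally improves as more
# labeled episodes are added.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

print(model.predict([[0.90, 0.170, 80.0]]))  # -> [1], flagged as a likely malfunction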



Commonly used machine learning techniques include support vector machines,
decision trees, Bayesian belief networks, case-based reasoning, instance-based
learning, and regression.
Machine learning is experiencing a renaissance within the growth of the AI market. The “Machine Learning Market - Global Forecast to 2022” report from Research and Markets shows that the global machine learning market is expected to grow from $1.41 billion in 2017 to $8.81 billion by 2022, at a CAGR of 44.1%. McKinsey estimates that 60% of all current AI spending is on machine learning.
According to one survey, 65% of organizations are already using or planning to
use machine learning to help them make better business decisions, whereas 74%
of all respondents called the technology “a game changer” that had the potential
to transform their jobs and industries. A full 61% said it was their company’s
most significant data initiative for the next 12 months. (See Figure 1-3.)

Figure 1-3. Machine learning initiatives are number one for today’s enterprises
(source: MemSQL)
Deloitte anticipates that the number of enterprise machine learning deployments will double between 2017 and 2018, and double again by 2020. However, one drawback of machine learning systems is that they are “data hungry” and need to process large volumes of data—sometimes over a long period of time—before they can detect patterns. More on this later.

Deep Learning
Deep learning is a subset of machine learning that relies on building neural networks, which are loosely modeled on how neurons work in the human brain. In this type of AI, the system extracts digital value from every piece of data it ingests by asking a series of binary (true/false) questions. For example, when trying to process an image to determine whether it shows the face of a smartphone’s owner, it will ask such things as: “Is the hair brown?” “Are the eyes blue?” It then classifies and weights each piece of data. Nodes are arranged in several layers, including an input layer where the data is fed into the system, an output layer where the answer is given, and one or more hidden layers, which is where the learning occurs—by adjusting the interconnection weights between the layers to minimize discrepancies between the predictions and the answers.
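
As an illustrative sketch of such a layered network (assuming TensorFlow/Keras, which the report does not prescribe, and using made-up toy data), the snippet below wires an input layer, one hidden layer, and an output layer, and lets training adjust the interconnection weights to minimize the loss.

import numpy as np
import tensorflow as tf

# Toy data: 8 input features per example, with an invented binary target.
X = np.random.rand(256, 8).astype("float32")
y = (X.sum(axis=1) > 4.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),               # input layer: data fed in
    tf.keras.layers.Dense(16, activation="relu"),    # hidden layer: learned weights
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output layer: the yes/no answer
])

# Training repeatedly adjusts the interconnection weights to minimize the
# discrepancy (loss) between the network's predictions and the known answers.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
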
Deep learning works for large, complex datasets on the scale of Google’s image library or Twitter’s tweets. It is not new, but it is rapidly gaining popularity because the volume of available data is growing so quickly, and faster, more powerful processors can return results in a timely manner.
You can apply deep learning to any kind of data, even unstructured data such as audio, video, speech, and the written word. It is being used for a number of real-world issues. For example, self-driving cars are using deep learning on sensor data to identify when they come to an obstacle and how to react appropriately. British and American researchers recently demonstrated a deep learning system capable of correctly predicting a court’s decision when given the facts of the case.

Cognitive Computing
Cognitive computing systems process unstructured as well as structured data and can learn from experiences much like humans do.
Because they use computational neuroscience, cognitive computing systems imitate the way humans learn and reason—and the “learning” here refers to the fact that humans can learn from significantly fewer examples than typical deep learning solutions require. They work especially well in dynamic and complex environments such as the manufacturing, engineering, and energy industries. This form of AI combines elements from cognitive psychology, neuroscience, and computer science.
A new update to the Worldwide Semiannual Cognitive Artificial Intelligence Systems Spending Guide from International Data Corporation (IDC) forecasts that worldwide revenues for cognitive computing systems reached $12.5 billion in 2017, an increase of 59.3% over 2016. Global spending on cognitive computing solutions will continue to see significant corporate investment over the next several years, achieving a CAGR of 54.4% through 2020, when revenues will be more than $46 billion.
Following are the cognitive computing use cases that will see the greatest levels of
investment in the near future:
• Quality management investigation and recommendation systems
• Diagnosis and treatment systems


• Automated customer service agents
• Automated threat intelligence and prevention systems
• Fraud analysis and investigation
Combined, these five use cases delivered nearly half of all cognitive computing
systems spending in 2017, according to IDC.
A subset of cognitive computing called associative-memory learning and reasoning is also very much based on how humans think. First, people create memories. Those memories involve entities, where an entity is a person, a place, a thing, or an event. People learn about these entities and then associate the resulting memories with one another. When do they see them together? In what context? How often?
This is how associative-memory learning and reasoning systems work, too. As entities change and new data is added, such a system incrementally adds the new data into memory and builds out more nodes and connections. This process of enabling a system to learn on the fly and passively develop assumptions about what’s important is called lazy learning or latent learning.
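
Conceptually, an associative-memory store of this kind can be pictured as an incrementally updated tally of how often entities are observed together. The sketch below is purely illustrative and is not Intel Saffron’s implementation; the entities and observations are hypothetical.

from collections import defaultdict
from itertools import combinations

# memory[(a, b)] counts how often entities a and b have been seen together.
memory = defaultdict(int)

def observe(entities):
    """Incrementally fold one observation into memory; no retraining step."""
    for a, b in combinations(sorted(entities), 2):
        memory[(a, b)] += 1

# Each observation links entities (people, parts, events) seen in the same context.
observe({"pump-17", "bearing-fault", "technician-ana"})
observe({"pump-17", "bearing-fault", "high-vibration"})
observe({"pump-23", "seal-leak"})

# Query: which entities are most strongly associated with "pump-17"?
related = {pair: count for pair, count in memory.items() if "pump-17" in pair}
print(sorted(related.items(), key=lambda item: -item[1]))
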
Cognitive computing systems that use associative-memory learning and reasoning unify data at the entity level. They create correlations of related data (similar bugs, similar parts, and more) and associate a weight to the similarity. The advantages of this approach include the following:
• Less data needed
• Less data science involved (model-free)
• Faster and more agile
• More transparent (auditable data)
• Great for individual use cases because data is unified around similar entities
(360 views of customers, precision medicine, etc.)

All of these things add up to deliver significant benefits for companies applying
cognitive computing to PQM.
According to Keystone Strategy, a Boston-based strategic consulting firm, if 5%
of heavy maintenance costs were prevented via changes to maintenance plans,
that would result in $20 million to $40 million of savings for a medium-sized US
commercial passenger airline annually. If just 2% of carrier-caused delays were
prevented via changes to maintenance plans, it would yield $5 million in savings
for that carrier. If just 5% of cancellations were prevented due to changes to
maintenance plans, this would yield $23 million in savings for that carrier.



More Companies Looking Toward Cognitive Computing
A number of firms have entered the cognitive computing market, but according
to Keystone, the offerings vary widely depending on the particular approach the
vendors take.
One approach has been to offer DIY commercial tools focused on data science
using a traditional fee-based licensing approach and then selling enabling tools to
internal analytics teams to build solutions. This sector is largely dominated by
open source.
Another approach has been to offer broad, multipurpose AI platforms that are primarily monetized through consulting services. In many cases, there’s not a lot in the box beyond wrappers around open source components. Keystone assembled an expert panel made up of Fortune 100 companies, which expressed consistent skepticism around the solution value of these platforms, “describing the deployment costs as largely services driven build scenarios,” says Dan Donahue, a partner at Keystone Strategy.
Then there are actual packaged solutions for specific industries and use cases.
Such vendors tie their solutions to a measurable ROI—whether that’s increased
revenue or increased labor efficiency—and price it accordingly. Customers can
deploy these solutions rapidly, without complex and costly services efforts. “We
see these packaged vertical solutions, if they’re done right, as one of the most successful ways to commercialize the technology,” says Donahue.
This is the category that Intel Saffron AI fits into (more on this later).
And finally, there’s the whole AI-as-a-Service category, which to date has focused largely on providing standard algorithms around text, speech, and image recognition offered via consumption-based business models (see Figure 1-4).



Figure 1-4. Different types of players in the cognitive computing space (source: Keystone Strategy)
Keystone’s research indicated that the market could evolve in one of two directions: either commercial platforms coupled with professional services offerings would dominate, or the industry-specific/vertical applications would win out. The company asked its expert panel and found that most of them leaned toward the industry-specific/vertical solutions, indicating that we would soon see a rich ecosystem of packaged vertical solutions in the marketplace.
“We’ve definitely seen some scars from long-running, ambiguously successful AI platform deployments—and negative reactions to how much the services cost to actually get something running,” says Donahue. “A packaged solution to solve a specific problem, priced in a way that’s tied to a measurable value creation metric, would be much more attractive given these are still emerging, often unproven technologies.”




CHAPTER 2

Complementary Learning
and Intel Saffron AI

Complementary Learning as the Future of Predictive
Quality and Maintenance Solutions
Because none of the types of artificial intelligence (AI) can solve all problems,
applying them simultaneously is the key to success. This need for a combined
approach is giving rise to cognitive computing as a basis for complementary
learning. This is what DARPA’s John Launchbury refers to as the “contextual adaptation” systems of the third wave of AI.
Strengths and weaknesses of different AI approaches are giving rise to complementary learning because solving a challenging problem often requires solving underlying subproblems effectively, which calls for different models or approaches.
To understand how machine learning, deep learning, and cognitive computing-based AI can work together in a predictive quality and maintenance (PQM) solution, it’s important to understand that a comprehensive AI-based PQM solution needs to solve two types of problems: surveillance and prescriptive.
Surveillance use cases involve scenarios in which businesses need to recognize problems by observation. By detecting patterns and alerting businesses, the surveillance approach to AI allows companies to act quickly when something out of the ordinary is detected in their equipment or other assets. For example, manufacturers want to understand what the sensor data coming in from the factory floor via the Internet of Things (IoT) is telling them. In the past, they would have needed to build rules into the sensor network to send alerts when certain thresholds were passed or anomalies sensed.



But the problem was identifying all those rules. Although it’s possible to define
the parameters in which, for example, a network router should be operating,
when a large number of assets exists—such as a fleet of airplanes—it’s next to
impossible.
That’s where machine learning and deep learning come in. These two types of AI can process the data, access the knowledge, and specify what those parameters are in a much more adaptable and scalable way. The systems learn—or rather, construct—the rules themselves by learning from the data.
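
As a small hypothetical sketch of that idea (the report does not name an algorithm), an unsupervised detector such as scikit-learn’s IsolationForest can learn what “normal” sensor readings look like from historical data and then flag departures, with no hand-written threshold rules; the sensor values below are invented.

import numpy as np
from sklearn.ensemble import IsolationForest

# Invented history of normal sensor readings: [temperature_c, vibration_g].
rng = np.random.default_rng(0)
normal_readings = rng.normal(loc=[60.0, 0.2], scale=[2.0, 0.05], size=(500, 2))

# The detector learns the boundaries of "normal" from the data itself.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_readings)

new_readings = np.array([
    [61.0, 0.22],   # ordinary reading
    [85.0, 0.90],   # far outside the learned behavior
])
print(detector.predict(new_readings))  # [ 1 -1]: -1 flags the anomalous reading
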
But to do this, an enormous amount of data is needed—perhaps tens of thousands of examples of an issue before a system is fully trained. And if the system does not perform as expected in some circumstances, humans need to provide additional feedback—although that feedback might not be in the form of rules, but in the form of new data illustrating the desired outcomes or instructing the system with exception cases. The goal here is to help the machine learn quickly from as few examples as possible.
After the issues have been identified using machine learning and deep learning, the natural next step for businesses is to solve those issues.
This is where prescriptive use cases come in—and where cognitive computing capabilities are required. After all, for a system to do those things, it would require the ability to reason. It would need to extract and consolidate relevant information from heterogeneous unstructured data sources such as audio, video, and emails to help businesses find the root causes of issues.
Another way to think about it is that machine learning and deep learning are
good for knowledge extraction. Cognitive computing is good with knowledge
representation—finding connections and insights from data.
Let’s walk through a basic example. The first step toward solving a problem with
a piece of equipment or product is that data—which can be structured or
unstructured—needs to be processed and identified. If it’s text, natural-language
processing (NLP) will be used to parse the meaning. If it’s an object, computer
vision will identify whether it is an airplane, an engine, or a network router.
Computer vision and NLP are part of the knowledge extraction. Those are the
patterns detected by machine learning and deep learning. In effect, the system
has answered the question, “What is it?”
When the “what” question has been understood, cognitive computing can then come in to ask questions such as: Have I ever seen this before? What type of problem is it? Who knows how to fix this? What do I do next? What caused this problem? And will it happen again? Cognitive computing systems then answer those questions.
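
Here is a toy sketch of that division of labor—purely illustrative, with hypothetical data and a crude string-similarity stand-in for the real extraction and memory components: a first step classifies what the incoming report is about (“What is it?”), and a second, memory-based step recalls the most similar past cases and who resolved them.

from difflib import SequenceMatcher

# Hypothetical memory of resolved cases.
past_cases = [
    {"text": "router dropping packets under heavy load", "component": "router", "fixed_by": "team-net"},
    {"text": "router overheating in rack 4", "component": "router", "fixed_by": "team-net"},
    {"text": "engine vibration above tolerance at takeoff", "component": "engine", "fixed_by": "team-aero"},
]

def extract_component(text):
    """Stand-in for the ML/DL extraction stage (NLP, computer vision): what is it?"""
    for component in ("router", "engine"):
        if component in text:
            return component
    return "unknown"

def similar_past_cases(text, component, top_n=2):
    """Stand-in for the memory-based reasoning stage: have I seen this before?"""
    candidates = [case for case in past_cases if case["component"] == component]
    scored = [(SequenceMatcher(None, text, case["text"]).ratio(), case) for case in candidates]
    return sorted(scored, key=lambda pair: -pair[0])[:top_n]

new_issue = "router in rack 7 overheating and dropping packets"
component = extract_component(new_issue)
for score, case in similar_past_cases(new_issue, component):
    print(f"{score:.2f}  {case['text']}  (resolved by {case['fixed_by']})")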




When we talk about complementary learning with respect to PQM applications,
we’re talking about combining surveillance, or knowledge extraction, with the
second, more prescriptive, knowledge representation application that uses
memory-based reasoning.

Intel Saffron AI: Associative-Memory Learning and
Reasoning and Complementary Learning in Action
Intel Saffron AI is based on cognitive computing that utilizes associative-memory learning and reasoning, along with patterns detected from machine learning and deep learning, in the complementary way previously discussed. By using human-like reasoning to find hidden patterns in data, Intel Saffron AI enables decisions that can deliver rapid return on investment (ROI).
The core of Intel Saffron AI is the Intel Saffron Memory Base, a long-term persistent knowledge store built on an associative-memory matrix. It stores unified data about entities in an associative-memory store. That memory store correlates similar information together and makes it faster to query and easier to retrieve for analysis. This means that Intel Saffron AI mimics how a human naturally observes, perceives, and remembers by creating memory-based associations.
Intel Saffron AI uses data from a mix of machine learning and deep learning AI subsystems, like NLP for entity extraction, sentiment analysis to establish links, and topic mapping for content mapping. The platform is both semantic and statistical in nature.
Intel Saffron AI ingests all types of data—structured and unstructured text, nonschematic and on-schema. This data then resides in a hyperdimensional matrix that connects one node (data or entities like people, places, things, or events) to another node using edges (which are statistical connections). Although most graph stores work as key–value pairs, Intel Saffron AI acts like a multidimensional graph store that allows for N connections between nodes and functions like a hyper matrix. The connections make associations based on context, frequency, and time.
When a new node (data) comes in, the platform applies memory-based cognitive techniques and creates weighted associations between people, places, things, and events. In this way, Intel Saffron AI acts like a massive correlation engine that calculates statistical probabilities using Kolmogorov complexity (K complexity). It then derives a universal distance measure that shows how closely two objects are related and finds regular patterns in the data. This cognition by similarity enables anticipatory decision making, which involves making decisions by estimating the current situation, using diagnoses, prescribing possible actions, and predicting likely outcomes.
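
Kolmogorov complexity itself is not computable, but it is commonly approximated with a real compressor; the normalized compression distance below is an illustrative sketch of a universal distance measure of this flavor, not Intel Saffron’s actual calculation, and the example strings are hypothetical.

import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: str, y: str) -> float:
    """Normalized compression distance: smaller means the two objects are more similar."""
    cx, cy = compressed_size(x.encode()), compressed_size(y.encode())
    cxy = compressed_size((x + y).encode())
    return (cxy - min(cx, cy)) / max(cx, cy)

a = "bearing failure detected on pump 17 after vibration spike"
b = "vibration spike on pump 17 preceded bearing failure"
c = "quarterly marketing report approved by finance"

# The related pair should score lower (more similar) than the unrelated pair.
print(round(ncd(a, b), 3), round(ncd(a, c), 3))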



Customers can implement Intel Saffron AI across industries. Its bedrock technology is the patented Intel Saffron Memory Base, which provides a layer of REST APIs that customers can develop and customize for their own needs. Intel Saffron AI now offers industry-specific applications that will harness the power of the platform to solve specific quality and maintenance problems for manufacturing, software, and aerospace.

What Makes Intel Saffron AI Different?
A complementary learning solution like Intel Saffron AI enables powerful machine and human interactions. It aims to help humans make decisions better and faster. It does this by relieving human workers of having to perform repetitive, time-consuming tasks so that they can focus on what humans do best: build relationships and apply judgment and creativity to more complex issues. In addition, Intel Saffron AI keeps advancing, learning from human feedback and interactions.
It excels in three ways: transparency, which makes its results and recommendations easy to understand; the fact that no statistical models are required; and its ability to bring together both structured and unstructured data from multiple sources.

Transparency
Intel Saffron AI works by identifying similarities. But unlike a traditional
machine learning or deep learning application, which makes its decisions by
algorithms and “black box” methodologies—that is, businesses have no insight
into why they got a particular result—Intel Saffron AI is completely transparent.
Because it works by knowledge representation, it stores all the attributes that led
it to a particular decision or conclusion and makes them readily available to
users. It’s easy to get explanations.
Intel Saffron AI in effect takes an entity and creates a “neighborhood” around it, showing the issues most similar to this particular one that it has ever seen, and why it thinks they’re similar. Businesses have full access to all of this information, giving them a chance to tell Intel Saffron AI when it’s wrong, so it can learn for next time.
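
As a purely illustrative sketch of that kind of transparent “neighborhood”—not Saffron’s API, and with hypothetical issue records—the snippet below ranks past issues by the attributes they share with a new one and reports exactly which shared attributes explain each match.

# Hypothetical past issues, each described by a set of attributes.
past_issues = {
    "ISSUE-101": {"component:router", "symptom:overheating", "site:plant-a"},
    "ISSUE-207": {"component:router", "symptom:packet-loss", "site:plant-b"},
    "ISSUE-309": {"component:engine", "symptom:vibration", "site:plant-a"},
}

new_issue = {"component:router", "symptom:overheating", "site:plant-b"}

def neighborhood(query, issues, top_n=2):
    """Rank past issues by overlap with the query, keeping the shared attributes as evidence."""
    scored = []
    for issue_id, attrs in issues.items():
        shared = query & attrs  # the exact attributes that explain the match
        scored.append((len(shared) / len(query | attrs), issue_id, shared))
    return sorted(scored, reverse=True)[:top_n]

for score, issue_id, shared in neighborhood(new_issue, past_issues):
    print(f"{issue_id}: similarity {score:.2f}, because of {sorted(shared)}")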

One-shot learning: No statistical models required
The key benefit of not needing to model data is flexibility, especially when data is
sparse, dynamic, or incomplete. This is what Intel calls “one-shot learning”: see
something once, and Intel Saffron AI learns.
Here’s an example: if a child is burned by a hot stove, hopefully she learns from that experience and avoids the stove in the future. If the child were learning statistically, however, she would have to experience pain multiple times before she had enough data to build a statistically relevant model—and not get burned.
After all, the real world isn’t a closed system. Unlike checkers, chess, or even the more complex game of Go, it doesn’t have a fixed number of possible moves. In an open and ever-changing place like real life—and markets—there is no way to monitor for every possible contingency. A good PQM system needs to be able to adjust to evolving scenarios.
Unlike machine learning and deep learning, Intel Saffron AI learns through association rather than by modeling possible outcomes. It builds signatures of entities that it gradually learns more about. Then it compares those signatures to identify hidden connections, patterns, and trends—surfacing insights that are otherwise invisible.

Unifies both structured and unstructured data across multiple sources
A lot of insights in the real world come from unstructured data—maintenance logs, manuals, handwritten notes, audio and video recordings, and emails. The ability to analyze both structured and unstructured data is one of the strengths of Intel Saffron AI. When you couple this with insights from machine learning and deep learning, you can surface many more.
In other words, deep and machine learning analyze structured data to identify
symptoms, whereas associative-memory learning and reasoning analyzes
unstructured data to provide a diagnosis.


