Stone J.V. Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning

  • File format: PDF
  • Size: 5.29 MB
New York: Sebtel Press, 2020. — 216 p. — ISBN 9780956372819, 0956372813.
The brain has always had a fundamental advantage over conventional computers: it can learn. However, a new generation of Artificial Intelligence (AI) algorithms, in the form of deep neural networks, is rapidly eliminating that advantage. Deep neural networks (DNNs) rely on adaptive algorithms to master a wide variety of tasks, including cancer diagnosis, object recognition, speech recognition, robotic control, chess, poker, backgammon and Go, at super-human levels of performance.
Unlike most books on deep learning, this is not a 'user manual' for any particular software package. Such books often place high demands on the novice, who has to learn the conceptual infrastructure of neural network algorithms, whilst simultaneously learning the minutiae of how to operate an unfamiliar software package. Instead, this book concentrates on key concepts and algorithms.
Having said that, readers familiar with programming can benefit from running working examples of neural networks, which are available at the website associated with this book. Simple examples were written by the author and are intended to be easy to understand rather than efficient to run. More complex examples have been borrowed (with permission) from around the internet, mainly from the PyTorch repository. The Python and MATLAB code examples that accompany the book can be obtained from its online GitHub repository. The code has been collated from various sources and is intended to provide small-scale, transparent demonstrations rather than an exercise in how to program artificial neural networks; most of the examples are written in Python.
In this richly illustrated book, key neural network learning algorithms are explained informally first, followed by detailed mathematical analyses. Topics include both historically important neural networks (e.g. perceptrons), and modern deep neural networks (e.g. generative adversarial networks). Online computer programs, collated from open source repositories, give hands-on experience of neural networks, and PowerPoint slides provide support for teaching. Written in an informal style, with a comprehensive glossary, tutorial appendices (e.g. Bayes' theorem), and a list of further readings, this is an ideal introduction to the algorithmic engines of modern Artificial Intelligence.
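As a taste of the kind of algorithm the book explains (e.g. the perceptron learning rule covered in the Perceptrons chapter), here is a minimal illustrative sketch in Python. It is not taken from the book's repository; the function name and the choice of the OR task are this listing's own, purely for demonstration.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Classic perceptron learning rule for two binary inputs.

    data: list of ((x1, x2), target) pairs with targets 0 or 1.
    Returns learned weights and bias.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            # Step activation: fire (output 1) if the weighted sum exceeds zero.
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            # Perceptron rule: nudge weights in proportion to the error.
            err = target - y
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Logical OR is linearly separable, so the perceptron is guaranteed to converge.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in or_data]
print(preds)  # -> [0, 1, 1, 1]
```

The same rule fails on the exclusive-OR task, which is exactly the limitation the book's "The Exclusive OR Problem" section discusses and multilayer networks overcome.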
Who Should Read This Book?
The material in this book should be accessible to anyone with an understanding of basic calculus. The tutorial style adopted ensures that any reader prepared to put in the effort will be amply rewarded with a solid grasp of the fundamentals of deep learning networks.
“This text provides an engaging introduction to the mathematics underlying neural networks. It is meant to be read from start to finish, as it carefully builds up, chapter by chapter, the essentials of neural network theory. After first describing classic linear networks and nonlinear multilayer perceptrons, Stone gradually introduces a comprehensive range of cutting edge technologies in use today. Written in an accessible and insightful manner, this book is a pleasure to read, and I will certainly be recommending it to my students.” - Dr Stephen Eglen, Cambridge University, UK.
List of Pseudocode Examples.
Online Code Examples.
Preface.
Artificial Neural Networks.
  Introduction.
  What is an Artificial Neural Network?
  The Origins of Neural Networks.
  From Backprop to Deep Learning.
  An Overview of Chapters.
Linear Associative Networks.
  Introduction.
  Setting One Connection Weight.
  Learning One Association.
  Gradient Descent.
  Learning Two Associations.
  Learning Many Associations.
  Learning Photographs.
  Summary.
Perceptrons.
  Introduction.
  The Perceptron Learning Algorithm.
  The Exclusive OR Problem.
  Why Exclusive OR Matters.
  Summary.
The Backpropagation Algorithm.
  Introduction.
  The Backpropagation Algorithm.
  Why Use Sigmoidal Hidden Units?
  Generalisation and Over-fitting.
  Vanishing Gradients.
  Speeding Up Backprop.
  Local and Global Minima.
  Temporal Backprop.
  Early Backprop Achievements.
  Summary.
Hopfield Nets.
  Introduction.
  The Hopfield Net.
  Learning One Network State.
  Content Addressable Memory.
  Tolerance to Damage.
  The Energy Function.
  Summary.
Boltzmann Machines.
  Introduction.
  Learning in Generative Models.
  The Boltzmann Machine Energy Function.
  Simulated Annealing.
  Learning by Sculpting Distributions.
  Learning in Boltzmann Machines.
  Learning by Maximising Likelihood.
  Autoencoder Networks.
  Summary.
Deep RBMs.
  Introduction.
  Restricted Boltzmann Machines.
  Training Restricted Boltzmann Machines.
  Deep Autoencoder Networks.
  Summary.
Variational Autoencoders.
  Introduction.
  Why Favour Independent Features?
  Overview of Variational Autoencoders.
  Latent Variables and Manifolds.
  Key Quantities.
  How Variational Autoencoders Work.
  The Evidence Lower Bound.
  An Alternative Derivation.
  Maximising the Lower Bound.
  Conditional Variational Autoencoders.
  Applications.
  Summary.
Deep Backprop Networks.
  Introduction.
  Convolutional Neural Networks.
  LeNet1.
  LeNet5.
  AlexNet.
  GoogLeNet.
  ResNet.
  Ladder Autoencoder Networks.
  Denoising Autoencoders.
  Fooling Neural Networks.
  Generative Adversarial Networks.
  Temporal Deep Neural Networks.
  Capsule Networks.
  Summary.
Reinforcement Learning.
  Introduction.
  What’s the Problem?
  Key Quantities.
  Markov Decision Processes.
  Formalising the Problem.
  The Bellman Equation.
  Learning State-Value Functions.
  Eligibility Traces.
  Learning Action-Value Functions.
  Balancing a Pole.
  Applications.
  Summary.
The Emperor’s New AI?
  Artificial Intelligence.
  Yet Another Revolution?
Further Reading.
Appendices.
  Glossary.
  Mathematical Symbols.
  A Vector and Matrix Tutorial.
  Maximum Likelihood Estimation.
  Bayes’ Theorem.
References.
Index.