title | date | categories | tags
---|---|---|---
Best Machine Learning & Artificial Intelligence Books Available in 2021 | 2020-06-08 | |
Quite recently, one of my colleagues asked me for some tips on books about machine learning. In his case, he wanted a book about machine learning for beginners, so that he could understand what I'm doing... which helps him think about how machine learning can create value for the company I work for during the day.
Relatively quickly, I was able to find the book he needed - a perfect balance between technological rigor and readability. He quite liked it. And that's when I thought: there must be more people who are looking for machine learning books that suit their needs! That's why this post is dedicated to books about machine learning. More specifically, it is organized into categories: for example, you'll find beginner machine learning books and books about frameworks like PyTorch. I also cover books about Keras/TensorFlow and scikit-learn, as well as books about the maths behind machine learning. We even look at academic textbooks and books that discuss the societal and business impacts of machine learning (and artificial intelligence in general).
This will therefore be a long post. Using the Table of Contents below, you can first select a group of books that you're interested in (or click one of the highlighted links above). Then, you'll be able to read my ideas about the books. I will cover a couple of things: the author, the publishing date (which illustrates whether it's a true classic or contains state-of-the-art knowledge), what it covers and how it does that, and my impression about the book. Additionally, I'll try to provide an overview of other reviews made available online.
Disclaimer: creating this post - and a website like MachineCurve - involves a large time investment. MachineCurve participates in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising commissions by linking to Amazon.
I will therefore earn a small affiliate commission when you buy any product on Amazon linked to from this website. This does not create any additional cost for you. Neither does this mean that my ideas are biased towards commerce - on the contrary, they're my honest opinions. Through affiliate commissions, I simply have more time for generating Machine Learning content! 💡
Last Updated: December 10, 2020
This is a work in progress! I'm working on adding more and more books on a daily basis.
In this table of contents, you can see all categories of Machine Learning books that we're reviewing on this page, as well as the individual books that are part of the categories. Click on one of the categories or books to go there directly.
[toc]
1. Grokking Deep Learning, by Andrew Trask
If you want to learn to build Deep Learning models from scratch using Python.
Author: Andrew Trask, Senior Research Scientist at DeepMind
Publishing date: January 25, 2019
Price/quality: 🟡🟢 Acceptable to good
What it covers:
- Grokking Deep Learning teaches deep learning from a conceptual and a programming perspective. It teaches building deep learning models from scratch.
- You don't use any framework yet - rather, you'll use plain Python and NumPy (see the short sketch after this list).
- It covers fundamental concepts, like supervised vs. unsupervised learning, forward propagation, gradient descent and backpropagation, so that you understand things from a high-level perspective.
- It then proceeds with more detailed stuff: regularization, batches, activation functions.
- After the conceptual deep dive, it broadens your view as it covers multiple types of neural networks - such as Convolutional Neural Networks, neural networks for Natural Language Processing, and so on.
- Finally, it provides a guide as to what steps you could take next.
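To give you a feel for what "from scratch with Python and NumPy" means in practice, here is a minimal sketch of my own (not code from the book, and deliberately simplified): a single layer trained with plain gradient descent on a toy regression problem.

```python
# A minimal "from scratch" sketch: one dense layer trained with plain NumPy
# and gradient descent on a toy regression problem.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))                 # 100 samples, 3 input features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)   # noisy linear target

w = np.zeros(3)                               # weights we want to learn
lr = 0.1                                      # learning rate

for epoch in range(200):
    pred = X @ w                              # forward propagation
    error = pred - y
    grad = X.T @ error / len(X)               # gradient of the mean squared error
    w -= lr * grad                            # gradient descent step

print(w)  # should end up close to true_w
```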
My impression:
Grokking Deep Learning (affiliate link) is a great book for those who wish to understand neural networks - especially if they have become thrilled by the deep learning hype. But don't get me wrong: it's not a hype-oriented book. Rather, it helps you take your first baby steps.
As with all good things, it starts with why. Why study deep learning; what could you possibly gain from it? You'll soon discover that the world is changing, and that it is becoming increasingly automated. Deep learning is a major catalyst of this movement. What's more, the book helps you understand what happens within deep learning frameworks - and, it claims, has a uniquely low barrier to entry.
Let's take a look at this from my perspective. When I first started with deep learning, I used François Chollet's Deep Learning with Python (affiliate link) to get a head start. I've always been a big fan of that book because it makes deep learning concepts very accessible, but it does so through the lens of Keras. Grokking Deep Learning takes the true conceptual path - you won't be able to create blazingly cool TensorFlow models or GANs with PyTorch, but you will understand what happens within the neural nets.
And it indeed does so in a brilliantly easy way. The only prerequisites are knowledge of Python and some basic mathematics - related to calculus and vector theory. And if you don't have that knowledge yet, you'll pick it up from the book. It contains a large number of visualizations that help you understand intuitively what is going on. Definitely recommended if you want to get the basics. However, it seems that towards the end of the book the chapters become denser and less easy to comprehend, so it's especially the first chapters that provide a good introduction. Still, if you don't mind searching around a bit besides reading, it could be a good choice. The Amazon reviews (affiliate link) are mostly very positive.
2. Machine Learning For Dummies, by John Paul Mueller and Luca Massaron
If you have some interest in technology and want to understand how Machine Learning models work.
Author: John Paul Mueller (freelance author and technical editor, 100+ books) & Luca Massaron (data scientist)
Publishing date: May 10, 2016
Price/quality: 🟢 Good
What it covers:
- Why machine learning is playing such a prominent role in today's list of technologies promising change.
- Introducing data science related languages, such as Python and R, which can be used for machine learning too.
- Introducing basic steps for coding in R with R Studio and in Python with Anaconda.
My impression:
Machine Learning For Dummies (affiliate link) is a good introductory book to machine learning, although it's getting a bit older (it was released in 2016). It first introduces artificial intelligence and covers what I think is an important aspect - art and engineering - as machine learning forces you to follow your intuition every now and then. This is followed by an introduction to Big Data, which is the other side of the coin.
In my point of view, the book forces you to choose a language for coding relatively quickly, as it proceeds with you preparing your learning tools: you either use R, or Python (or both, but often you'd just choose one). In doing so, it gives you a crash course in programming in both languages, in case you haven't programmed before. And if you're not satisfied with either, it'll point you to other machine learning tools as well - such as SAS, SPSS, Weka, RapidMiner and even Spark, for distributed training. However, it doesn't cover them in depth.
Then, it proceeds with the basics of machine learning - and shows you how supervised ML essentially boils down to error computation and subsequent optimization. It also covers data preprocessing, and then introduces a wide array of machine learning techniques: clustering, support vector machines, neural networks and linear models. Finally, it allows you to create models for image classification, text/sentiment classification and product recommendation.
I do appreciate the effort put into the book by the authors. However, I think that it would be best if you already have some background experience with programming - despite the crash course. In my point of view, it's also important to have a clear view about the differences between say, supervised and unsupervised machine learning, as it covers them all relatively quickly - and the field is wide. Nevertheless, if you are into machine learning programming, this can be a very good book for starters (affiliate link) - especially considering its price.
3. Artificial Intelligence For Dummies, by John Paul Mueller and Luca Massaron
If you have a background in business, are non-technical, but want to understand what happens technologically.
Author: John Paul Mueller (freelance author and technical editor, 100+ books) & Luca Massaron (data scientist)
Publishing date: March 16, 2018
Price/quality: 🟢 Really good
What it covers:
- Some history about Artificial Intelligence
- How AI is used in modern computing
- The limits of AI, common misconceptions and application areas.
My impression:
If you're looking for an AI book that is written for business oriented people who are interested in the technology side of AI without diving deep into technology, this could be the book you're looking for.
When I give guest lectures about the impact of AI and Machine Learning, I always make sure to include a slide which asks my audience a particular question: "What is Artificial Intelligence?"
Funnily enough, as they will find out, they give the precise answer to that question by remaining silent... because nobody really knows.
That's why I think this book is such a good introduction for people who want to understand Artificial Intelligence in more detail - beyond the notion that it has a great impact on your business - without getting lost in programming code.
First of all, the book does precisely that: it introduces Artificial Intelligence, questions what intelligence is, and takes a look at its history, including the first AI winter (the connectionist versus expert systems debate) and the second (the demise of the latter and the revival of the former).
It then proceeds by looking at the fuel of AI - being data. It covers why data is so useful, but also why it cannot be trusted all the time, and the limits of getting data in order. Once complete, the discussion gets a bit more technical - looking at the concept of an algorithm, introducing machine learning as well as specialized hardware for creating AI applications and running them (i.e., GPUs).
Following the conceptual part is a part that considers the uses of AI in society. First, a wide range of applications is covered - such as AI for corrections and AI for suggestions. This includes a chapter on automating industrial processes and even the application of AI in healthcare, which is a controversial topic - privacy related issues are just around the corner, not to mention the ethical implications of letting algorithms weigh in on health-related decisions.
Subsequently, it provides a lot of information about applying artificial intelligence in software applications - introducing machine learning and deep learning for doing so, as well as in hardware applications, i.e. robotics, unmanned vehicles and self-driving cars. This is concluded by a chapter about the future of AI - especially from the lens of the hype that we've seen emerging in the past few years. But it also looks at the potential of AI to disrupt today's jobs, how it can be applied in space and how it can contribute to society in general.
I really like the book. I do. It helps bridge the gap between business and technology, and is in fact the book that I recommended to my colleague when he wanted to understand the technology side of AI in more detail. As he's a business oriented person, he doesn't code and doesn't want to learn how to, either. This book provides all the broad technology oriented details, links them to application areas, and is appreciative of the history of AI, the nonsense of the current AI hype, and what the future may hold. I definitely recommend it.
4. Make Your Own Neural Network, by Tariq Rashid
If you want to understand what happens inside a neural network in great detail - as a learning-by-doing experience.
Author: Tariq Rashid
Publishing date: March 31, 2016
Price/quality: 🟢 Really good
What it covers:
- The mathematics of neural networks, but in a comprehensible way - secondary school mathematics will suffice.
- Creating your own neural network with pure Python.
- Iterative improvement of your code by showing what works and what doesn't.
My impression:
The book Make Your Own Neural Network (affiliate link) starts with a prologue in which the author covers the history of the AI field in a nutshell. Very briefly, he covers the rise of AI in the 1950s, the first AI winter, and all progress up to now. That is far too little detail to fully grasp what has happened in the field over the years, but that's not the point - rather, it sets the stage, which was the goal.
The book contains three parts:
- In the first part, How they work, the author covers mathematical ideas related to neural networks.
- In the second part, DIY with Python, you're going to get to work. More specifically, you will build a neural network that can classify handwritten digits.
- Finally, in the third part, Even More Fun, you're going to expand your neural network in order to find whether you can boost its performance. You're even going to try and look inside the neural network you've created.
In my point of view, the author really does his best to make neural networks comprehensible for absolute beginners. That's why it's likely not a book for you if you already have some experience: you likely won't learn many new things. However, if you have absolutely no experience, I think it's absolutely one of the best books to start with. Kudos to Tariq Rashid, who has done a terrific job at making neural network theory very accessible.
5. Machine Learning Pocket Reference, by Matt Harrison
If you want a quick-lookup reference manual for when you're undecided about what to do during Machine Learning engineering.
Author: Matt Harrison
Publishing date: August 27, 2019
Price/quality: 🟢 Really good
What it covers:
- Data preprocessing: cleaning your dataset and what to do when data goes missing.
- Feature selection: which features are useful to your model? How to find out?
- Model selection: what configuration of my ML model works best?
- Supervised learning: classification and regression.
- Unsupervised learning: clustering and dimensionality reduction.
- Evaluating your machine learning model.
My impression:
You sometimes don't want a book filled with details, but rather a reference guide that you can use when you're troubled by some kind of machine learning related problem. Machine Learning Pocket Reference (affiliate link) is then a great book for you. As you can see above, it covers many things related to the model-building part of the machine learning lifecycle. From data preprocessing to evaluating your machine learning model, this guide will help you proceed.
Contrary to many books on the topic, the author avoids state-of-the-art neural network frameworks like TensorFlow, Keras and PyTorch. In doing so, he wants to focus on the concepts at hand - i.e., performing all the work with just Python and Scikit-learn, which provides many interesting helper functions. A few of the things that are covered:
- Pandas Profiling, which generates reports about your Pandas DataFrame, helping you inspect your dataset easily.
- Validation curves, which help the evaluation process, as well as Confusion Matrices.
- Performing exploratory data analysis, which includes box plots and violin plots.
And much more - including code for doing so! (For a taste of two of these tools, see the small sketch below.)
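As a quick illustration of the kind of Scikit-learn tooling the book leans on, here is a small sketch of my own (not code from the book): a confusion matrix and a validation curve, run on one of the toy datasets that ships with Scikit-learn.

```python
# Two common evaluation helpers from Scikit-learn: a confusion matrix and a
# validation curve over a hyperparameter range.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split, validation_curve
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Confusion matrix: how often does the classifier confuse the two classes?
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
print(confusion_matrix(y_test, clf.predict(X_test)))

# Validation curve: training/validation scores as a hyperparameter (here C) varies.
param_range = np.logspace(-3, 2, 6)
train_scores, valid_scores = validation_curve(
    SVC(), X, y, param_name="C", param_range=param_range, cv=5
)
print(train_scores.mean(axis=1), valid_scores.mean(axis=1))
```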
When solving a problem with machine learning, time is often your greatest ally and your greatest enemy. Training a machine learning model can be time-intensive, and by consequence you want to do many things right. But then, exploring the data, cleaning the data - those are time-consuming tasks. Machine Learning Pocket Reference (affiliate link) is sprinkled with useful tools and techniques that help make the life of machine learning engineers easier. Once again: if you want a quick pocket guide to fall back to when you're facing a problem, you could try Google... or this book!
6. Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell
If you want a holistic perspective on AI: where it comes from, what it is now and where it is heading.
Author: Melanie Mitchell
Publishing date: October 15, 2019
Price/quality: 🟢 Really good
What it covers:
- Understanding the intelligence of AI programs and how they work
- Understanding how they fail
- Understanding the differences between AI and humans and what Artificial General Intelligence looks like
My impression:
The book Artificial Intelligence: A Guide for Thinking Humans (affiliate link) is not your ordinary AI book: it doesn't cover the practical perspective. Rather, it's a book that provides a deep dive into the AI field - allowing you to understand where things have come from, how things work, and what AI could possibly achieve.
It is written in five parts: Background, Looking and Seeing, Learning to Play, Artificial Intelligence Meets Natural Language and The Barrier of Meaning.
In the first part, Mitchell covers the roots of Artificial Intelligence. She traces current AI developments back into the past, which in my opinion is very important for people who wish to learn a thing or two about AI. You simply need to know the past. And she covers it in great detail, as you will see - she covers the major theoretical developments, including neural networks and the pattern of AI summers (hypes) and winters (the exact opposite).
Following the Background part, the book continues with Looking and Seeing. Here, among other things, the reader is introduced to Convolutional Neural Networks - the models that triggered the AI hype back in 2012. It covers machine learning, the branch of AI that you hear a lot about, in depth - and does not shy away from discussing AI and Ethics, an important theme in the deployment of 'smart' algorithms.
In the other parts, Melanie Mitchell covers games & AI (which leans towards reinforcement learning) and Natural Language Processing, two important themes in AI research and practice! It's actually a stepping stone towards the final, and equally important part: The Barrier of Meaning. Here, the author takes a look at Artificial General Intelligence - or what happens if AIs become as intelligent as human beings. What does it mean to 'understand'? What is knowledge, and how can it be represented in AI? As you will see, it's not very simple to replicate human intelligence. But efforts are underway.
In my opinion, Artificial Intelligence: A Guide for Thinking Humans (affiliate link) is a great book for those who wish to understand AI from a holistic perspective. Where does it come from? What is it now? And where is it going? Melanie Mitchell answers those questions without making the book boring. And the reviews are in her favor: she's got a 5-star rating on Amazon. Definitely recommended - especially given the price.
7. AI Crash Course: A fun and hands-on introduction to machine learning, reinforcement learning, deep learning, and artificial intelligence with Python, by Hadelin de Ponteves
If you want to get a conceptual and hands-on introduction to Reinforcement Learning.
Author: Hadelin de Ponteves
Publishing date: November 29, 2019
Price/quality: 🟢 Really good
What it covers:
- Learning about the basics of Reinforcement Learning
- Getting practical experience by building fun projects, solving real-world business problems, and learning how to code Reinforcement Learning models with Python
- Discovering Reinforcement Learning and Deep Reinforcement Learning, i.e. the state-of-the-art in AI research
My impression:
AI Crash Course (affiliate link) is not your ordinary machine learning book. Why is that the case? Very simple - although the title suggests that it covers "deep learning", in my point of view it seems to cover the reinforcement part of deep learning only. That is: it skips the more traditional supervised deep learning approaches (e.g. supervised deep neural networks) and unsupervised learning, which are still important areas of active research and practice today.
The fact that it does so does not make it a bad book. On the contrary: for what it does, it's good - the crash course is really relevant, is perceived to be really good by many readers, and is definitely worth the money. Let's take a look at what it does from my perspective.
In chapter 1, Hadelin de Ponteves introduces you to the topic of Artificial Intelligence. It's called "Welcome to the Robot World" and not for nothing: taking the analogy of robotic systems, the author introduces you to the main concepts of reinforcement learning (e.g. Q-learning and Deep Q-learning) and gives examples of the deployment of Artificial Intelligence across a wide range of industries. Chapter 2 introduces you to GitHub and Colab, while Chapter 3 subsequently provides you with a crash course in Python - relevant for those who haven't worked with the language before.
Now that you have been introduced to AI, some tools and Python, it's time to get to work. Chapter 4 kickstarts the AI Crash Course with "AI Foundation Techniques", or better: Reinforcement Learning Foundation Techniques. It introduces how AI models convert inputs to outputs, how a reward can be attached to outputs, how the environment impacts the way your AI works and one of the core topics in Reinforcement Learning - the Markov decision process. Finally, the book covers how you can train your Reinforcement Learning model.
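To make that reinforcement learning vocabulary a bit more concrete, here is a minimal tabular Q-learning sketch of my own (not the book's code, and much simpler than its projects): an agent that learns, by trial and error, to walk to the right end of a tiny corridor.

```python
# Tabular Q-learning on a 1-D corridor: states 0..4, goal at state 4.
import random

n_states, n_actions = 5, 2              # actions: 0 = step left, 1 = step right
q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:                  # episode ends at the goal state
        if random.random() < epsilon:             # explore...
            action = random.randrange(n_actions)
        else:                                     # ...or exploit what we know so far
            action = max(range(n_actions), key=lambda a: q[state][a])
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: nudge Q(s, a) towards reward + discounted best future value.
        best_next = max(q[next_state])
        q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
        state = next_state

print(q)  # "step right" should have the higher value in every state
```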
After the introduction, the book covers a lot of applications. Using Thompson Sampling, Q-learning, Deep Q-learning and other techniques, you will create models for sales/advertising, logistics, autonomous vehicles, business in general and gaming. After those applications, where you'll create real code, the book recaps and finally suggests additional reading materials.
The book is good. You'll definitely feel as if you achieved something after completing every chapter. It even provides a lot of examples. However, I do think that the author could have better named it Reinforcement Learning Crash Course - because readers may be confused when they discover the areas of supervised and unsupervised learning once they dive deeper into Machine Learning after reading the book. And what to think of the other approaches in AI, which have nothing to do with Machine Learning? Despite the name, AI Crash Course (affiliate link) is definitely a book I recommend to those who wish to get an introduction to reinforcement learning.
8. Machine Learning For Absolute Beginners: A Plain English Introduction (Machine Learning From Scratch), by Oliver Theobald
If you have no experience with Machine Learning yet and want to understand the basic concepts.
Author: Oliver Theobald
Publishing date: 2017
Price/quality: 🟢 Really good
What it covers:
- Teaching you the basic concepts of machine learning, which means that it is suitable for absolute beginners
- Teaching you how to build a model in Python, although it's not the focus
- Preparing you for more advanced machine learning books
My impression:
Well, what's in a name? The book Machine Learning for Absolute Beginners (affiliate link) is, in my opinion, a really great book to start with machine learning if you know absolutely nothing about it. It doesn't contain a lot of code and it covers the basic concepts - but given the price, it's a great purchase if you want to find out whether machine learning is something for you.
The book starts by introducing Machine Learning by telling a story about IBM and a machine that plays checkers better than its programmer. Following the introduction, categories of ML - being supervised learning, unsupervised learning and reinforcement learning - are introduced, as well as what can be found in the toolbox of a machine learning engineer.
Then, the book proceeds with data: how to clean it, and make it ready for actual machine learning projects. Those are then highlighted: the book covers regression analysis with machine learning, clustering, bias & variance, and a lot of machine learning techniques such as neural networks, decision trees and model ensembles.
Once you know about the concepts, it teaches you how to build a model in Python with the Scikit-learn framework, as well as how to optimize it. This prepares you for other, more advanced books - e.g. the ones introducing Scikit-learn in more detail, or TensorFlow/Keras.
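For context on what "build a model with Scikit-learn and optimize it" typically looks like in practice, here is a hedged sketch of my own (not code from this book): a small classifier whose main hyperparameter is tuned with a cross-validated grid search.

```python
# Build a simple classifier, then optimize its main hyperparameter with GridSearchCV.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Search over the number of neighbours using 5-fold cross-validation.
search = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [1, 3, 5, 7, 9]}, cv=5)
search.fit(X_train, y_train)

print(search.best_params_, search.score(X_test, y_test))
```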
I think the book Machine Learning for Absolute Beginners (affiliate link) is priced appropriately and does what it suggests: teach you about the absolute basic concepts in Machine Learning. Don't expect the book, which is just short of 130 pages, to make you an expert. But if you have no clue about ML and what it is, this book will help you understand things quickly. Definitely recommended in that case!
9. Neural Network Projects with Python: The ultimate guide to using Python to explore the true power of neural networks through six projects, by James Loy
If you want to start gaining hands-on experience with neural networks, without losing track of the concepts.
Author: James Loy
Publishing date: February 28, 2019
Price/quality: 🟢 Really good
What it covers:
- Architectures of neural networks: Convolutional Neural Networks (CNNs/ConvNets) and Recurrent Neural Networks (LSTMs)
- Using popular frameworks like Keras to build neural networks
- Diving deep into application areas of machine learning like the identification of faces, of other objects, and sentiment analysis
My impression:
Machine learning has come a long way - but today, neural networks are the primary driver of the machine learning hype. The book Neural Network Projects with Python (affiliate link) understands this and introduces you to the topic. It's a good read for those who have some machine learning experience and want to be introduced to neural networks.
The book starts with a Machine Learning and Neural Networks 101. The chapter covers what machine learning is, how your computer can be set up to run ML, and introduces you to frameworks for data science and ML: pandas, TensorFlow/Keras, and other libraries.
This is followed by a variety of neural network types such as Multilayer Perceptrons (which we know from the past as traditional ML models), Deep Feedforward Networks (the deep learning trend), and their specific variants - ConvNets, autoencoders, and recurrent ones such as LSTMs. Finally, the book lets you implement an object detection system for faces with contemporary frameworks. Each chapter covers the particular sensitivities of the problem at hand: for example, it covers scaling of data, other preprocessing and feature selection, but also a review of neural network history and configuration. This allows you to understand the ML concepts in more depth.
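To give you a sense of the Keras-based models the book works towards, here is a minimal ConvNet sketch of my own (not one of the book's six projects), set up for a simple image classification task such as handwritten digits.

```python
# A tiny Keras ConvNet for 28x28 grayscale image classification (e.g. digits).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),   # e.g. 10 digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then be a single call, given image data scaled to [0, 1]:
# model.fit(x_train, y_train, epochs=5, validation_split=0.2)
```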
As mentioned, I do think that this book is a good introduction to neural networks for those who have no experience, or limited ML experience at least.
10. Machine Learning with Python for Everyone, by Mark Fenner
If you're a beginner to ML and want to start building models rather than being bothered with maths.
Author: Mark Fenner
Publishing date: July 30, 2019
Price/quality: 🟢 Good
What it covers:
- Understanding machine learning algorithms and concepts
- Introducing yourself to machine learning pipelines: feature engineering, model creation, and evaluation.
- Applying machine learning to images and text in classification and regression settings
- Studying neural networks and building your own models with Scikit-learn
My impression:
Machine learning is a daunting topic to many, especially given the fact that many books on the topic are filled with maths. In Machine Learning with Python for Everyone (affiliate link), Mark Fenner aims to teach the absolute ML beginner how things work, without requiring them to have substantial experience with mathematics. Although some equations will be inevitable, Fenner mostly explains things through stories and visualizations - as well as code.
The book contains 4 parts. In Part 1, Fenner establishes the foundation that you need to understand the absolute basics of ML. In chapter 1, the book covers what ML, and especially supervised ML, involves (i.e. features and targets), shows what classifiers and regressors are, and introduces evaluating machine learning systems. This is followed by chapter 2, which covers a bit of technology background - and some maths. This is followed by two more detailed chapters on creating classifiers and regressors. You'll be introduced to what they are, and some initial examples with e.g. a Nearest Neighbors or Naive Bayes Classifier, or Linear Regression.
Moving to Part 2, Fenner starts discussing model evaluation. Chapter 5 introduces the reader to why models must be evaluated, and how model error can be represented by a cost function. Beyond simple error measurement, the chapter also covers how sampling strategies such as Cross Validation can be used to make better evaluation estimates. Chapters 6 and 7 deepen your knowledge about evaluation by specifically looking at evaluation methods for classifiers and regressors (with e.g. ROC curves, Precision-Recall curves, and basic regression metrics).
Part 3 deepens the reader's knowledge about classifiers, regressors and machine learning pipelines. In chapter 8, you'll read about a variety of classifiers (Decision Trees, Support Vector Machines, Logistic Regression) which you haven't read about earlier in the book. You'll also learn how to compare them in order to select the right one for the job. The same happens with regressors: you'll learn about Linear Regression, Support Vector Regression, Regression Trees and others.
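As a small illustration of the evaluation ideas in Part 2 - this is my own sketch, not Fenner's code - here is cross-validation for an honest performance estimate, followed by an ROC curve for a binary classifier.

```python
# Cross-validation for a performance estimate, plus an ROC curve for a classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression())

# Cross-validation: average score over several train/validation splits.
print(cross_val_score(model, X, y, cv=5).mean())

# ROC curve: trade-off between true and false positive rates at every threshold.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, scores)
print(roc_auc_score(y_test, scores))
```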
Chapter 10 introduces you to feature engineering. How to select features for your model while being confident that they actually contribute to generating the prediction? How to prepare data so that it can be fed to the model training process properly? Those and other questions will be answered in chapter 10. This is followed by tuning of the model (and specifically, its hyperparameters) in chapter 11.
Part 4 introduces the reader to more complex topics. Chapter 12 introduces you to ensemble methods, which is a difficult way of describing the combination of models to generate better performance. This is followed by chapters about automatic feature engineering (such as Principal Components Analysis, or PCA), feature engineering for specific domains (such as text, clustering, and images), and advanced feature engineering (such as neural networks).
That's it! In fewer than 600 pages, Mark Fenner covers the width and depth of the machine learning field - without a lot of maths. This way, readers who have a background in programming (preferably Python) and don't want to be bothered by the math have a chance of getting started with their machine learning endeavor. Reviewers love the book: "This is a wonderful book", "best approach I have seen" and so on. However, it does not guide you through writing code fully… one review suggests that the reader must be familiar with Jupyter Notebooks (or get familiar with them) before they can get started. Perhaps that's a suggestion for improvement. But for the rest, Machine Learning with Python for Everyone (affiliate link) is definitely recommended to those who want to just start with ML! :)
11. Machine Learning: The Absolute Complete Beginner's Guide, by Steven Samelson
If you want to understand Machine Learning at a high level.
Author: Steven Samelson
Publishing date: 2019
Price/quality: 🟠🟡 Moderate to acceptable
What it covers:
- A high-level coverage of the field of machine learning.
My impression:
The book Machine Learning: The Absolute Complete Beginner's Guide by Steven Samelson (affiliate link) was published in 2019, has 122 pages and aims to guide beginners in Machine Learning along their learning path.
As such, it's important to realize that this book does not provide an in-depth introduction to Machine Learning. Nor is it a technical handbook. Rather, it can be considered a high-level perspective on what ML is, what it is not, and how you can start.
For this reason, some people feel a bit disappointed upon reading the book, as they had expected this book to be that in-depth introduction. It's not. What's more, people on Amazon complain about the grammar - that it's not good. Therefore, I'd say: be careful checking out this book - despite the low price of approximately 5 dollars.
12. Building Machine Learning Powered Applications, by Emmanuel Ameisen
If you want to grasp the ML model lifecycle: from training a model to deploying it while measuring its success - focused on product development rather than theory.
Author: Emmanuel Ameisen
Publishing date: January 2020
Price/quality: 🟢 Really good
What it covers:
- Developing machine learning driven products rather than models only
- Covering the machine learning lifecycle for doing so
- Teaching you how to define a ML problem that solves your product goals
- Explaining how to build a ML pipeline with a dataset, train and evaluate your ML models, and finally deploy and monitor them
My impression:
The book Building Machine Learning Powered Applications (affiliate link) has four parts: (1) Finding the correct Machine Learning approach; (2) Building a Working Pipeline; (3) Iterating on Models, and (4) Deploying and Monitoring them.
Being a software engineer, this list really excites me, because I'm always focused on getting things out there - moving away from theory and deploying my machine learning models and other software in the field. I haven't encountered many books yet which suggest that they cover the entire path from having a problem to model deployment from a practical point of view!
In the preface, the author Emmanuel Ameisen - who now works as a Machine Learning engineer at payment platform Stripe - introduces the foundation for this book: his goal is to teach you how to use ML for building practical applications. It then moves to chapter 1, which covers "framing" your product's goal as a Machine Learning problem.
That's an important aspect, because many problems that can be solved by ML can be solved even better by traditional, sometimes even human-only, methods! Machine Learning, despite the hype, should not be used for everything. In chapter 2, the author continues along this line by diving deeper into the framing exercise - teaching you how to create a plan for building a machine learning model. It covers the first steps in model monitoring, scope estimation, and planning how to get to work.
In chapter 3, the author helps you create an end-to-end pipeline for your machine learning model. You're taught what it consists of, from data preprocessing to the skeleton of your model. This is followed by chapters 4 and 5, which cover exploring and processing your first dataset as well as training and evaluating your first attempt at a machine learning model.
Despite your good efforts, it's likely that your model will not work out of the box - that is, it will not work very well after your first training run. Chapter 6 covers this by teaching you how to debug your ML model in order to make it better. It covers best practices in software and machine learning development, visualizing the model, and ensuring predictive power (meaning that your model works on your data) as well as generalization (meaning that it also works on data it has never seen before). Chapter 7 expands on this by introducing classifiers.
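The predictive-power-versus-generalization point is easy to make concrete. As a hedged illustration of my own (not the author's code), comparing training and held-out scores is a quick first check for overfitting:

```python
# Compare training and held-out test scores: a large gap signals overfitting.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

train_score = model.score(X_train, y_train)   # how well it fits data it has seen
test_score = model.score(X_test, y_test)      # how well it generalizes to unseen data
print(train_score, test_score)
```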
Part 4 covers model deployment and does so with chapters 8 to 11. First - and I think this is a good thing - the book starts with deployment considerations in chapter 8. In doing so, the author stresses that you should think first and deploy later. It covers aspects related to data ownership, bias and its consequences, as well as abuse of your ML model.
Chapters 9, 10 and 11 subsequently cover the actual aspects of model deployment. In chapter 9, you're taught whether to deploy your model server-side or directly on the client. Chapter 10 covers building safeguards for models, to ensure that your model keeps running - despite the variety of edge cases that it could potentially face. Chapter 11, finally, covers how your machine learning model can be monitored.
If you're responsible for training machine learning models as well as their deployment, I think that this is a great book - but only when you're a beginner in doing so (let me explain this a bit later). I think that this book is unique in the sense that it covers model training as well as deployment. It's also very young (early 2020) and therefore really relevant for today's practice. Now, with respect to your level in doing so - if you are a beginner in training and deployment, this book is great. You will learn a lot when reading it; especially because it also greatly covers the 'why' of deployment. Now, if you aren't a beginner, I think that it's likely that you already know a lot about the training aspects of machine learning models. This makes the first few parts of the book less relevant for you. Even then, still, I think that the book's continuation into model deployment will teach even the more advanced ML engineers a few things - because ML deployment is still a developing area in today's ML practice.
In sum: definitely a unique book that I would love to have on my book shelf :)
13. Deep Learning Illustrated: A Visual, Interactive Guide to Artificial Intelligence, by Jon Krohn, Grant Beyleveld and Aglaé Bassens
If you want to get a more in-depth introduction with a great balance between theory and practice.
Author: Jon Krohn, Grant Beyleveld and Aglaé Bassens
Publishing date: August 5, 2019
Price/quality: 🟢 Really good
What it covers:
- Finding out why deep learning is different and how it can benefit practice
- Mastering the theory: finding out about neurons, training, optimization, ConvNets, recurrent nets, GANs, reinforcement learning, and so on
- Building interactive deep learning applications, to help move forward your own AI projects
My impression:
We hear a lot about Deep Learning today. The field is a subset of Machine Learning, which in turn is a subset of the broader field of Artificial Intelligence. The book Deep Learning Illustrated: A Visual, Interactive Guide to Artificial Intelligence (affiliate link) aims to teach you the introductory principles of Deep Learning - with the goal of helping you get started on your own AI projects. For this, it has four parts:
- Introducing Deep Learning
- Essential Theory Illustrated
- Interactive Applications of Deep Learning
- You and AI
The chapters seem to be densely packed with information. Personally, I never think this is a bad thing - because you'll get your money's worth of information. However, if you're not so much into reading, perhaps different books are better choices, providing you a coarser overview of the Deep Learning field. Now, with respect to the chapters, in part 1, deep learning is introduced. Chapter 1 covers the differences between biological vision (i.e., how organisms see) and computer or machine vision. It provides a bit of history as to how computer vision has evolved over the years, and what applications are possible today. We therefore find that the book starts with a bit of history as well as applications - in my point of view those are good methods for approaching the field holistically.
Chapter 2 does the same, but then for human language and machine language. It introduces how deep learning is used for Natural Language Processing, and how things have evolved in the past. Briefly, it shows you how language is represented in machines, and how techniques that can handle those representations are used in today's applications. Chapters 3 and 4 repeat this, but then talk about machine art (chapter 3) and game-playing machines (chapter 4). Part 1 therefore gives you a true overview of the Deep Learning field - as well as its history and applications. A great introduction to the context and the why of what's happening.
Now that the reader is introduced to the context, the book moves forward with essential theory. It introduces neural networks by means of a practical implementation using Keras, followed by theory about artificial neurons. That is, chapters 5 and 6 cover both a practical introduction to neural networks and a theoretical one. Following the path from the Perceptron to modern neurons, the reader can appreciate how things have evolved while directly putting things into practice. Once again, I appreciate this approach by the authors.
Chapters 7, 8 and 9 study deep learning theory in more detail. Now that the reader has been introduced to neurons in chapter 6, chapter 7 moves forward by structuring them into networks. What does the input layer mean? What are densely-connected (or fully-connected) layers? And how can we stack them together to build a neural network? Those are the questions that will be covered in the chapter. Subsequently, chapter 8 moves even more in-depth by studying cost functions (or: loss functions), optimization (or: how models are made better), and backpropagation. Here, once again, things are made practical by means of Keras implementations, which is good. Chapter 9 subsequently moves forward by introducing weight initialization (and things like Xavier init), vanishing and exploding gradients, L1/L2 regularization, Dropout and modern optimizers. It really throws you in at the deep end in terms of complexity, but you'll learn a lot.
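To show how compactly several of these chapter 9 ingredients come together in Keras, here is a small sketch of my own (not from the book) that combines explicit weight initialization, L2 regularization, Dropout and a modern optimizer in one tiny model.

```python
# Weight initialization, L2 regularization, Dropout and the Adam optimizer in Keras.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        128, activation="relu",
        kernel_initializer="he_normal",                      # weight initialization
        kernel_regularizer=tf.keras.regularizers.l2(1e-4),   # L2 regularization
        input_shape=(784,)),
    tf.keras.layers.Dropout(0.5),                            # Dropout
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # modern optimizer
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```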
Part 3, covering chapters 10 to 13, moves forward to practical applications of Deep Learning. Now that you understand better how those models work, it's time to study Machine or Computer Vision, Natural Language Processing, Generative Networks and Reinforcement Learning. Those application areas have their own specifics when it comes to deep learning. For example, computer vision widely uses ConvNets, which are covered in the book. GANs work differently compared to Computer Vision and NLP models, because they work with two models - at once - for generating new data. Reinforcement Learning is even more different, and the book teaches you how agents can be used in situations where insufficient training data is available.
Finally, part 4 - chapter 14 - covers how YOU can move forward with AI, and specifically your own deep learning projects. It covers a set of ideas, points you towards additional resources, and gives you pointers to frameworks/libraries for Deep Learning such as Keras, TensorFlow and PyTorch. It also briefly covers how Artificial General Intelligence (AGI) can change the world, and where we are on the path towards there.
In my point of view, this book is a great introduction to the field of Deep Learning. This is especially so for people who are highly specialized in one area of Deep Learning, while they know not so much about the others. For example, this would be the case if you are a Computer Vision engineer wanting to know more about Reinforcement Learning. The book nicely combines theory with practical implementations, meaning that you're not drowned in theory but gain enough theoretical understanding to grasp what's happening. Once again, as with many books reviewed here, definitely recommended. So much so that I'm actually considering buying it myself :)
14. Programming Machine Learning: From Coding to Deep Learning, by Paolo Perrotta
If you want to start with machine learning as a software engineer.
Author: Paolo Perrotta
Publishing date: March 31, 2020
Price/quality: 🟢 Good
What it covers:
- Creating machine learning models from a developer perspective
- Intuitively understanding what happens in supervised machine learning without being buried in maths
- Working through the history of machine learning techniques (linear regression, perceptrons) before creating actual deep learning models, to give you a bit of context as well.
My impression:
The field of machine learning presents some really big barriers to entry, especially if you don't come from a mathematics background. In fact, websites like MachineCurve exist because people - me included - think that it's possible to learn machine learning without having to dive through all the maths. I even think that it's perfectly possible to build intuition about what happens without having to write down the math. If it works, and if it works well - and if you can explain why - why would that be a bad thing?
The book Programming Machine Learning: From Coding to Deep Learning (affiliate link) takes a similar perspective. It is a book for developers who want to learn how to build machine learning models from scratch. It covers three widely used terms within the field: supervised learning, neural networks, and deep learning.
Supervised learning, which involves training machine learning models on a training dataset, is introduced in Part 1 of the book. You'll learn how to create a program that can learn within a short time, and turn it into a perceptron (indeed, that single-neuron network that was very popular in the 1950s). The author achieves this by first explaining how machine learning works. That is, supervised learning is introduced, and it is explained why it is different from regular programming. Then, the book teaches you how to make your system ready for machine learning. Part 1 also covers these elements:
- Understanding the problem that you're trying to solve with machine learning
- Understanding gradient descent (see the small sketch after this list)
- Understanding hyperspaces and how they are used in classifiers
- Understanding more advanced methods for classification
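To illustrate the kind of single-neuron learner Part 1 builds up to, here is a minimal perceptron sketch of my own (not Perrotta's code): one "neuron" that learns the logical AND function through simple error-driven weight updates.

```python
# A single perceptron learning the logical AND function.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                       # AND truth table

w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(20):
    for xi, target in zip(X, y):
        prediction = 1 if xi @ w + b > 0 else 0  # step activation
        error = target - prediction
        w += lr * error * xi                     # nudge weights towards the target
        b += lr * error

print([1 if xi @ w + b > 0 else 0 for xi in X])  # should print [0, 0, 0, 1]
```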
Subsequently, the book moves to larger neural networks: in Part 2, you'll learn how to create code for Multilayer Perceptrons. Finally, in Part 3, you'll be using modern machine learning libraries for creating actual deep learning models. It also introduces more advanced deep learning techniques.
The book Programming Machine Learning (affiliate link) is true to its title. That is, it explains machine learning in plain English and with a lot of pictures to support the content. While it can be a bit high-level for people who already have a background in machine learning, and while it can be a bit non-technical for those who really want to be fully introduced to a library like Scikit or TensorFlow, it's a good book if you are a (Python) developer who wants to start learning ML. It is like a stepping stone to a more advanced, in-depth machine learning book. Good buy.
15. Machine Learning with Python: An introduction to Data Science with useful concepts and examples, step by step, learning to use Python, by William Gray
If you want to grasp the basics of machine learning, and start with Python based ML development.
Author: William Gray
Publishing date: July 26, 2019
Price/quality: 🟡 Acceptable
What it covers:
- The fundamental concepts and applications of machine learning
- What kind of machine learning algorithms there are, as well as other branches of Artificial Intelligence
- Python basics
- Machine Learning in Python, including case studies
- Key frameworks, open source libraries, including the creation of a machine learning model and learning how to deploy it in a web application.
My impression:
The book Machine Learning with Python (affiliate link) is in fact two books in one - one about the concepts of Machine Learning, and another that focuses on key frameworks and real Python applications. Let's briefly look at the contents of the two books first, and then proceed with my impression.
The first book starts with an introduction to machine learning. Why use AI? What is its purpose, and what are its research goals? Then: what is machine learning, and how does it differ from AI? Those are the questions with which this book starts. Subsequently, fundamental concepts of Machine Learning are discussed - that is, methods, applications, deep learning and deep neural networks. It also attempts to cover some applications such as Siri, Cortana, use at Paypal and Uber, and Google Translate. Chapter 3 then covers the use of AI all around us.
Chapter 4 moves on to explaining the various categories of ML algorithms. It covers supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning, including use cases for the application of the latter category. Chapters 5 to 7 work towards an introduction to Python. They attempt to brush up your math skills so you can start with Python maths libraries, teach you Python syntax, and introduce you to Numpy, Pandas and Matplotlib - the main data analysis libraries. Chapters 8, 9 and 10 cover machine learning applications and how Python has evolved. Chapters 11, 12 and finally 13 cover the challenges of ML in big data, general AI, and whether third world countries can learn AI.
The second book starts again with introducing Python for machine learning. Chapter 2 then moves forward with understanding key ML frameworks, starting with asking the right questions, and deciding whether you are best served by deep learning techniques or more classic techniques. It also looks forward to model deployment and model optimization.
Chapter 3 introduces most of the modern frameworks used in Machine Learning, such as TensorFlow, Keras, PyTorch and so on. Once again, it introduces ML approaches, i.e. supervised and unsupervised learning, in Chapter 4. Chapter 5 introduces Scikit-learn based classification, while chapter 6 teaches you how to implement neural networks from scratch and also covers activation functions, overfitting, hyperparameter tuning and types of classification algorithms. Chapter 7 introduces TensorFlow, chapter 8 teaches you how to deploy ML models into a web application, and chapter 9 moves on to the future of ML and how to attain a competitive advantage with Machine Learning.
Now that we have seen the contents of the book - or the books-in-a-book, in fact - I'm a bit puzzled. If you have some experience with machine learning, you know that there is a natural way to climb the learning curve. If you then look at the table of contents of this book, it seems that the topics flow back and forth a bit - and sometimes, a logical structure seems to be missing.
This is also reflected in the reviews on Amazon (affiliate link): "a lot of repetition to explain again what ML is", and "it looks like a series of blog articles". References are missing and I also spotted textual mistakes here and there. This is what I saw as well: topics are reintroduced time and again. Therefore, I'd like to stress that this is an acceptable book for beginners. It will definitely teach you something. However, if you expect absolute quality, this might not be the book for you.
16. Machine Learning Pocket Reference: Working with Structured Data in Python, by Matt Harrison
If you want to start with structured machine learning.
Author: Matt Harrison
Publishing date: August 27, 2019
Price/quality: 🟢 Really good
What it covers:
- Exploring your dataset, performing analyses for finding how to proceed;
- Cleaning data and handling missing data
- Selecting features useful to your machine learning model
- Selecting a model class
- Building a classifier and a regression model
- Evaluating the model with classifier- and regression-specific metrics
- Unsupervised learning techniques such as Clustering and Dimensionality reduction
- Pipelines in Scikit-learn
My impression:
The book Machine Learning Pocket Reference: Working with Structured Data in Python (affiliate link) by Matt Harrison has 19 chapters and claims to help you navigate the basic waters of machine learning. That is, if you are a beginner and have no idea - not even about some basic concepts - this book should be for you. Let's take a look.
As we mentioned before, the book covers the process of training a machine learning model: exploring your dataset, to find starting points; cleaning and preparing your data; selecting features and a model; model training, and finally, model evaluation. It structures the chapters in that way: rather sequentially. I think this is good, because it allows you to gain understanding of the concepts as well as link them together in a natural flow.
Now, with respect to the chapters: chapter 1 covers the technical basics. That is, Harrison discusses the libraries that are used throughout the book. The list is quite extensive, and includes `autosklearn`, `sklearn`, `fastai`, `xgboost`, the basic stack of `numpy`, `scipy` and `matplotlib`, and many others. It also covers how to install them with Pip or Conda, creating an environment specifically for the book so that your work does not interfere with other projects on your host machine.
From my point of view, Chapter 2 then comes at precisely the right time. If you want to begin studying a topic, I think it's very important to perform what I call castle building. The castle represents your knowledge, and by studying a variety of topics you can both build your castle and make it uniquely targeted at a particular job or skillset. This helps you study for understanding rather than recollection. Elon Musk attempts to connect the dots when he reads, and I too think that this is very important. But where to start? Often, a holistic standpoint is good in those cases - a starting point where you look at the entire field from a high level. It allows you to see the dots, which you can then start connecting. Chapter 2 of this book provides you with precisely that holistic point of view: it presents - at a high level - the process of creating structured machine learning models. It starts with "asking a question" and eventually moves to "deploy model". From my perspective, and the way I learn, I think that's a great way of teaching.
Chapter 3 then introduces you to a variety of topics, such as imports, what happens when asking a question, importing data, creating the model, and optimizing it. It introduces you to all the different topics a bit more in-depth. This is expanded in the subsequent chapters. Chapter 4 covers missing data - what to do when data is missing, whether to drop it, and so on. Chapter 5 covers cleaning your dataset to make it more suitable for machine learning. Chapter 6 subsequently covers data exploration: through statistics such as size, summary stats, histograms, scatter plots, box plots, and so on, it's possible to draw conclusions about the dataset. For example, they allow you to discover whether your dataset is large enough for a particular model.
This is further expanded upon in chapters 7 and 8, which specifically target data preprocessing and feature selection. Chapter 9 subsequently covers what to do when you have an imbalanced dataset: this happens in scenarios where a large group of samples belong to one class, while the other class (or classes) have significantly fewer samples. As models would naturally favor this large class, this could lead to a biased model. The book covers methods for handling those cases. This is where you move from data preparation to model building.
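To give a feeling for what handling an imbalanced dataset can look like in practice, here is a minimal Scikit-learn sketch of my own - with made-up toy data, not code from the book - showing two common remedies: class weighting and upsampling the minority class.

```python
# A toy, imbalanced classification problem: 950 negatives vs. 50 positives.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.RandomState(42)
X = rng.rand(1000, 5)
y = np.array([0] * 950 + [1] * 50)

# Remedy 1: weigh errors on the rare class more heavily during training.
clf_weighted = LogisticRegression(class_weight="balanced").fit(X, y)

# Remedy 2: upsample the minority class before training.
X_min_up, y_min_up = resample(X[y == 1], y[y == 1],
                              n_samples=950, random_state=42)
X_bal = np.vstack([X[y == 0], X_min_up])
y_bal = np.concatenate([y[y == 0], y_min_up])
clf_upsampled = LogisticRegression().fit(X_bal, y_bal)
```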
Chapter 10 introduces you to a variety of methods and techniques for classification. For example, it covers logistic regression, Naive Bayes, SVMs, decision trees - the older methods - and the relatively newer ones, such as XGBoost and other boosting techniques. It does not cover deep neural networks in depth. Chapter 11 moves forward by answering the question: given this large set of possible models, how do I select the right one? That's also one of the important questions to answer. Chapter 12 subsequently moves to metrics and evaluation, which you'll have to perform after training, and introduces a variety of metrics that can be used for this. Chapter 13 then explains how to add model explainability - or, to answer the question why your model predicted what it predicted. Chapters 14, 15 and 16 do the same as chapters 10, 11 and 12, but for regression models. The book then moves to unsupervised learning techniques in chapters 17 and 18.
Finally, in chapter 19, it covers the creation of pipelines using Scikit-learn - for classification, regression and PCA alike.
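For those unfamiliar with Scikit-learn pipelines, a minimal sketch of the idea looks as follows - this is my own toy example using the built-in wine dataset, not code taken from the book.

```python
# A scaler, PCA and a classifier chained into one estimator, so that all
# preprocessing is fitted on the training data only.
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=5)),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)
print("Test accuracy:", pipe.score(X_test, y_test))
```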
In sum, I think that this is a very useful book that is packed with useful code. It should help you get that a-ha moment back when you're confused during an ML project. What it's not - and the book's introduction also says so - is an extensive guide. Rather, it's a "companion" - a nice analogy found in one of the reviews on Amazon (affiliate link). If you're a beginner, wishing to grasp the concepts in more detail but in a relatively "pocket" version, this could be the book for you. Definitely recommended.
<iframe style="width:120px;height:240px;" marginwidth="0" marginheight="0" scrolling="no" frameborder="0" src="//ws-na.amazon-adsystem.com/widgets/q?ServiceVersion=20070822&OneJS=1&Operation=GetAdHtml&MarketPlace=US&source=ac&ref=qf_sp_asin_til&ad_type=product_link&tracking_id=webn3rd02-20&marketplace=amazon®ion=US&placement=1617294438&asins=1617294438&linkId=531821f14c94745281c3d59eb8a7b34a&show_border=true&link_opens_in_new_window=true&price_color=333333&title_color=0066c0&bg_color=ffffff"></iframe>
If you have a background in Python and want to get started with Deep Learning.
Author: François Chollet
Publishing date: 2017
Price/quality: 🟡🟢 Acceptable to good
What it covers:
- An introduction to deep learning. What is it? How does it differ from previous approaches?
- Mathematical building blocks of neural networks, in an accessible way.
- Your first neural network with Keras (densely-connected neural network).
- Convolutional neural networks for image classification.
- Recurrent neural networks for text classification.
- Best practices for deep learning.
- An introduction to generative deep learning.
My impression:
This is the book which I used to start learning about deep learning. I come from a software background and have always loathed the maths-intensive books... it felt as if I couldn't start properly when reading those books. François Chollet's Deep Learning with Python (affiliate link) provided the precise balance between rigor and accessibility. Rather than explaining things through a lens of mathematics, Chollet - who is one of the key players in the field of Deep Learning - utilizes a programming lens instead.
In the book, Chollet first introduces deep learning. What is it? How is it different from previous machine learning techniques like Support Vector Machines? He then proceeds with some mathematics, as at least a basic understanding and appreciation of maths is necessary in order to get started with neural networks. Together, these chapters cover the fundamentals of deep learning. The first part also includes basic neural networks with Python code, utilizing the Keras framework - of which he is the creator.
In the second part of the book, Chollet introduces a variety of practical applications of deep learning. This part builds on top of the first and can be considered to be the "advanced" part of the book. He introduces ConvNets for computer vision, covers text based classifiers, provides best practices for deep learning and covers generative deep learning models. This wraps up the Deep Learning with Python book.
Back in 2018, when I started with deep learning, this was a great book. And to be honest, it still is. Especially for beginners, it can be a great book to understand neural networks conceptually. However, the deep learning landscape has changed significantly over the past few years. The biggest change impacting this book: the deep integration of Keras with TensorFlow since TensorFlow 2.0. Where Chollet utilizes the standalone Keras version `keras` throughout the book, it's best practice to use `tensorflow.keras` these days. While this is often not problematic (a simple replacement does wonders), some parts of the framework have moved to different locations, which poses the risk that parts of your code might no longer work properly. This means that you might need to do some Googling.
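To illustrate what that replacement typically boils down to, here is a small sketch of my own (not from the book): the model code stays the same, only the import path changes.

```python
# Standalone Keras, as used in the book:
#   from keras.models import Sequential
#   from keras.layers import Dense
# TensorFlow 2.x equivalent - the rest of the code stays the same:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(16, activation="relu", input_shape=(10,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```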
If you're a fan of Chollet and his style, go for the book (affiliate link). If not, or if you want to ensure that you buy a book that is more up to date, this might not be the book for you. Nevertheless, it's one of my all-time deep learning favorites... it's the book I started things with :)
<iframe style="width:120px;height:240px;" marginwidth="0" marginheight="0" scrolling="no" frameborder="0" src="//ws-na.amazon-adsystem.com/widgets/q?ServiceVersion=20070822&OneJS=1&Operation=GetAdHtml&MarketPlace=US&source=ac&ref=qf_sp_asin_til&ad_type=product_link&tracking_id=webn3rd02-20&marketplace=amazon®ion=US&placement=161729554X&asins=161729554X&linkId=ab061813f38b0f0e120e4ef2eb9085e0&show_border=true&link_opens_in_new_window=true&price_color=333333&title_color=0066c0&bg_color=ffffff"></iframe>

If you have a background in R and want to start with Deep Learning.
Author: François Chollet and J.J. Allaire
Publishing date: February 9, 2018
Price/quality: 🟡🟢 Acceptable to good
What it covers:
- An introduction to deep learning. What is it? How does it differ from previous approaches?
- Mathematical building blocks of neural networks, in an accessible way.
- Your first neural network with Keras (densely-connected neural network).
- Convolutional neural networks for image classification.
- Recurrent neural networks for text classification.
- Best practices for deep learning.
- An introduction to generative deep learning.
My impression:
The original Keras framework was targeted at many backends - TensorFlow, Theano and CNTK, to name a few. Beyond Python, it can also be used from R. The book Deep Learning with R (affiliate link) by François Chollet and J.J. Allaire allows you to take your first steps in developing neural networks if you have a background in R rather than Python.
It was published a few months after the Python version, Deep Learning with Python. The table of contents is the same - which means that it is practically the same book, but filled with examples in R rather than Python. Like the Python variant, it covers multiple things. Firstly, it introduces deep learning. What is it and how is it different from previous approaches? Then, it continues by introducing some mathematical building blocks of neural networks. Don't worry, the maths aren't heavy.
This is followed by your first baby steps in building a neural network. Using a few Dense layers of the Keras framework, you'll build your first classifier. This is followed by more practical and relatively state-of-the-art types of layers, such as Convolutional layers and hence ConvNets for image classification and Recurrent layers and hence Recurrent Neural Networks for text classification. The authors then finalize with deep learning best practices and an introduction to generative deep learning.
As with the Python book, I think it's a great book for those who wish to understand the concepts behind deep learning but have a background in R. However, here too, you must be careful about the fact that the book is a bit older... and the Keras landscape has changed significantly over the last few years. In fact, TensorFlow 2.x is now the lingua franca in terms of backends used with Keras. This means that 'old' Keras still supports R, but that it's no longer the main focus. That's why I'd suggest switching to the Python version instead, or even trying a few different Python-related machine learning books which are more up to date. But conceptually, this is still a great book (affiliate link).
3. Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow 2, by Sebastian Raschka
If you want to get a broad introduction to Machine Learning and Deep Learning, followed by Python examples with the Scikit-learn and TensorFlow 2.0 frameworks.
Author: Sebastian Raschka
Publishing date: December 9, 2019
Price/quality: 🟢 Really good
What it covers:
- Learn to use the Scikit-learn and TensorFlow frameworks for machine learning and deep learning.
- Study across a wide range of applications, such as image classification, sentiment analysis and more.
- Select and build a wide range of model types (neural networks and classic models) including best practices for evaluating and tuning them.
My impression:
The name Sebastian Raschka already makes me think positively about this book. When working with his Mlxtend toolkit for visualizing classifier decision boundaries (affiliate link), Raschka was quick to respond to issues filed at his GitHub repository, and responded positively to my request for using his toolkit in my blog posts. Still, that does not provide an objective review of his book. Let's take a look at what's inside.
The book starts with an introduction about machine learning, looking at what it is. It introduces the three main branches of ML - being supervised learning, unsupervised learning and reinforcement learning, and follows up with basic terminology and an introduction to the basic machine learning workflow. He also makes a case for using Python in the book, being one of the important languages for data science and machine learning.
He then proceeds by explaining the most salient machine learning models out there, and how they work. He starts with the Rosenblatt Perceptron, a very old type of artificial neural network, and explains how it is optimized, i.e. by means of the Perceptron Learning Rule. Next, he covers a wide range of traditional ML approaches: logistic regression, Support Vector Machines (for linear learning), kernel SVMs (for nonlinear learning), decision trees and kNN. This way, Raschka makes sure that you both appreciate and understand the machine learning algorithms that were salient before deep learning became hot.
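For the curious: the Perceptron Learning Rule boils down to a few lines of NumPy. Here is a minimal sketch of my own (with made-up, linearly separable toy data - not code from the book), where the weights are nudged whenever a sample is misclassified.

```python
# Toy, linearly separable data with labels in {-1, +1}.
import numpy as np

X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])

w, b, eta = np.zeros(2), 0.0, 0.1          # weights, bias, learning rate
for _ in range(10):                        # a few passes over the data suffice here
    for xi, target in zip(X, y):
        prediction = 1 if np.dot(w, xi) + b >= 0 else -1
        update = eta * (target - prediction)   # zero when the sample is classified correctly
        w += update * xi
        b += update

print(w, b)
```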
Part of training a machine learning model is data preprocessing, i.e. handling missing data, selecting features for your model, and so on. This is especially true for classic ML models. That's why Raschka, before introducing neural networks, proceeds with a wide range of important topics. He covers preprocessing, dimensionality reduction (especially important with traditional ML algorithms), as well as best practices for model evaluation and hyperparameter tuning. Ensemble learning is also covered, i.e. how multiple models can be combined to generate one prediction, possibly improving predictive power along the way.
Once you finish this part of the book, you'll work on various applications. Throughout a wide range of application areas (sentiment analysis, computer vision, agents in complex environments, image synthesis) as well as ML types (neural networks, unsupervised approaches, ConvNets, GANs and recurrent neural networks), Raschka covers recent developments thoroughly. This includes building models from scratch, to understand them conceptually, as well as recreating what you did using modern machine learning frameworks such as Scikit-learn and TensorFlow (including Keras). Reinforcement learning is covered separately, and includes its theoretical foundations, important algorithms, and a first implementation using the OpenAI Gym Toolkit.
If you have some experience writing Python code and want to start with machine learning, Python Machine Learning (affiliate link) is a really good book. It's up to date (as it was released relatively recently), it covers state-of-the-art frameworks and toolkits, but it also doesn't fail to explain the concepts, best practices and the history of machine learning. Thus, rather than ending up with a good understanding of one particular type of ML, Raschka's book introduces you to the full breadth of ML and invites you to specialize further. Recommended!
4. TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers, by Pete Warden & Daniel Situnayake
If you want to understand pragmatically what needs to be done to make Edge AI possible with TensorFlow.
Author: Pete Warden and Daniel Situnayake
Publishing date: December 16, 2019
Price/quality: 🟢 Really good
What it covers:
- Various applications: speech recognition, image recognition, gesture responder
- Real embedded hardware: deploy ML models on Arduino and low-power microcontrollers
- ML background: understand basic concepts of machine learning and training models based on various data sources
- TensorFlow Lite for Microcontrollers, which helps you make tiny Machine Learning models
- Ensure privacy and security by design
- Optimize for latency or energy usage and understand how this impacts your model
My impression:
If you take a close look at trends in machine learning, you will find that models are required to become smaller and smaller. The reason is simple: they must run in an embedded way, at the edge, because people no longer want to run them in cloud environments but directly in the field.
This is pretty problematic, as machine learning models - and especially today's deep neural networks - have a large number of parameters and are thus often way too large to run on microcontrollers such as Arduinos and other low-power environments. How to deal with this is covered in the book TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers (affiliate link). If you're looking to learn about Edge AI and really learn to apply it, this could be a very interesting book for you.
As with any book about machine learning, it covers the basic concepts first. You'll learn what ML is as well as the workflow used in Deep Learning projects. This is followed by an introduction to the microcontrollers used by the authors. However, don't expect a big introduction to ML and microcontrollers in this book: it's about the synthesis of both, and the authors therefore expect that you have gained basic knowledge elsewhere, which makes sense to me.
The book then proceeds by showing how you can build and deploy models for word detection, person detection and gesture detection. This includes exporting the model to TensorFlow Lite, which allows you to convert your model into one that can run in low-power environments; converting it into C for running on Arduinos is covered as well.
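To give an impression of that export step, here is a minimal sketch of my own of converting a (toy) Keras model with the TensorFlow Lite converter - the model and file name are made up for illustration, and this is not the book's exact code.

```python
# `model` stands in for a tf.keras model you trained earlier.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # e.g. post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting file is then typically turned into a C array so it can be compiled into the microcontroller firmware - that is the Arduino step the book walks you through.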
Following the applications, TensorFlow Lite for Microcontrollers is introduced. This starts with the hierarchy between TensorFlow, TensorFlow Lite and the Microcontrollers edition, and covers many of the microcontroller-related aspects of deploying machine learning models. Finally, the authors cover designing your own applications with best practices, and show you what optimization of your ML model looks like.
This is a pretty recent book about a topic within Machine Learning that in my opinion will gain a lot in popularity in the years to come. Deploying AI models in the field will be increasingly important, and you'll be able to set yourself apart if you know a lot about this niche. Book reviews (affiliate link) are very positive. Recommended!
5. Deep Learning with TensorFlow 2 and Keras: Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API, by Antonio Gulli, Amita Kapoor & Sujit Pal
If you want to learn more about the TensorFlow 2.0 and Keras APIs, and get some real coding experience.
Author: Antonio Gulli, Amita Kapoor and Sujit Pal
Publishing date: December 20, 2019
Price/quality: 🟢 Good
What it covers:
- Learn creating deep neural networks with TensorFlow 2.0 and Keras
- A wide range of application areas for TF 2.0 and Keras deep neural networks
- Many code examples
My impression:
If you already know what deep learning is all about, and want to get some practical experience, this book could be for you. The second edition of Deep Learning with TensorFlow 2 and Keras (affiliate link) entirely focuses on TensorFlow 2.0 and the Keras API, gives a lot of code examples, and touches all the important concepts within deep learning from a developer perspective.
However, it seems that it does require that you already know what things are about at a conceptual level. For example, with the Perceptron algorithm, it simply introduces the way it activates, what it does (construct a hyperplane between samples) and how it is optimized. That's it. For other details, it assumes that you know how to find your way to Google. This is the same for all other topics covered in the book.
However, as I wrote, if you're looking for a book filled with code examples - this one is it. It introduces TensorFlow, Keras and what things have changed in TensorFlow 2.0, followed by the Perceptron, Multilayer Perceptron, and examples with Keras (including all the detailed tuning options such as adding Batch Normalization to your architecture, adding other forms of regularization, choosing an optimizer, and so on).
After the introduction, and a detailed comparison between TensorFlow 1.0 and 2.0, it proceeds with application areas. First, you'll find a chapter about regression, followed by classification using Convolutional Neural Networks - i.e. computer vision. In both chapters, the typical problems (such as linear regression or image classification) are discussed, as well as how this impacts architectural choices in your quest to find a well-performing machine learning model.
Subsequently, more advanced applications of ConvNets are covered, as well as GANs for generative deep learning. This is followed by word embeddings, which can be used in natural language processing, by recurrent neural networks, autoencoders, unsupervised learning and reinforcement learning. That's pretty much it in terms of what can be done with deep learning these days.
Once those details are covered, the book moves forward by looking at how TensorFlow can be deployed in cloud instances so that you can perform learning with e.g. a powerful GPU. Then, running TensorFlow models on mobile/IoT and in the web browser are covered, followed by AutoML - which includes automated hyperparameter tuning. Towards the end, the book covers the maths behind deep learning, and TPUs - pieces of hardware that can accelerate your training process.
Coming back to the original statement about this book: if you already have some experience with machine learning, this can be a great book. However, if you're an absolute beginner, it may be wise to look at one of the beginner books above first - for the simple reason that if you understand what is going on, you'll flow through this book more easily. For people who already have some experience under their belt, Deep Learning with TensorFlow 2 and Keras (affiliate link) is definitely recommended.
6. Practical Deep Learning for Cloud, Mobile, and Edge: Real-World AI & Computer-Vision Projects Using Python, Keras & TensorFlow, by Anirudh Koul, Siddha Ganju and Meher Kasam
If you want to learn using TensorFlow, Keras and TensorFlow lite focused on Edge-based Computer Vision.
Author: Anirudh Koul, Siddha Ganju, and Meher Kasam
Publishing date: October 14, 2019
Price/quality: 🟢 Really good
What it covers:
- Using Keras, TensorFlow, Core ML, and TensorFlow Lite
- Putting the focus on 'lite' models, running them on Raspberry Pi, Jetson Nano and Google Coral, as well as in the web browser
- A bit of reinforcement learning and transfer learning
- Case studies and practical tips
My impression:
The book Practical Deep Learning for Cloud, Mobile, and Edge: Real-World AI & Computer-Vision Projects Using Python, Keras & TensorFlow (affiliate link) by Anirudh Koul, Siddha Ganju and Meher Kasam is an interesting book for the Machine Learning practitioner. Contrary to many books about TensorFlow, Keras and other libraries, it does not aim to provide a general introduction to the libraries.
Rather, it aims to capture a trend which I believe will be immensely important in the years to come with respect to generating new model predictions, or inference: moving Deep Learning models away from cloud-based environments and into the field, where they run on embedded devices. As we shall see, the authors aim to capture this trend by looking at a variety of use cases and deployment scenarios, and providing the tools from the TensorFlow arsenal that could make this work for you.
The book starts by exploring the landscape of Artificial Intelligence in chapter 1. This always makes me happy, because I am a fan of books that provide the reader with the necessary context in order to understand what is going on. In the chapter, the authors discuss what AI is (and why precisely that is a difficult question), give a brief history of AI through the hypes and AI winters, and introduce Deep Learning as one of the most recent trends within the field of AI. It also gives critical success factors for a well-performing Deep Learning model, and hints towards responsible utilization of AI - also an increasingly important topic for the years to come.
As I said, the book has a focus on Edge AI cases. Computer vision problems are very prominent in this area - as models rely on progress in this branch of Deep Learning if they want to see what is going on in the field. In chapter 2, the authors introduce Image Classification with Keras, by means of the ImageNet competition and Model Zoos. This is followed by Transfer Learning in chapter 3, where building a classifier is extended in multiple ways: using a pretrained model for getting better results; organizing the data; data augmentation; and finally training and testing the model. By focusing on computer vision problems, you'll be introduced to the Keras APIs that you need.
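As an impression of what such a transfer learning setup looks like in tf.keras, here is a minimal sketch of my own - the choice of MobileNetV2 as backbone and the two-class head are assumptions for illustration, not the book's exact example.

```python
import tensorflow as tf

# A pretrained ImageNet backbone with its classification head removed...
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                     # ...frozen, so only the new head is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(2, activation="softmax"),   # a hypothetical two-class problem
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_dataset, validation_data=val_dataset, epochs=5)
```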
Chapter 4 will teach you to build a Reverse Image Search Engine - by means of Feature Embeddings. Through t-SNE and PCA, as well as some other techniques, you'll learn to build a tool for image similarity. This is followed by Chapter 5, which focuses on Convolutional Neural Networks and all their related components. It introduces TensorBoard for showing realtime training progress, breaking up your data into training, testing and validation data, early stopping and other stuff. What I'm most happy about are the two final components of this chapter: how hyperparameters affect accuracy (with a discussion on things like batch size, optimizers, learning rate, and so on), and tools for automating ML: Keras Tuner, AutoAugment and AutoKeras. Really great - and this makes the book especially future-proof!
If you've been familiar with Deep Learning for some time, you know that it's often necessary to have big GPUs if you want to train your model. Chapter 6 helps you manage the GPUs you're using by teaching how to maximize the speed and performance of TensorFlow, i.e. how to squeeze every bit of power out of your graphics card that you possibly can. Chapter 7 extends this by providing practical tips, and Chapter 8 teaches you how to use Cloud APIs for Computer Vision problems.
When you want to deploy your Machine Learning model, it's important that you do so professionally. In chapter 9, the authors introduce how to scale inference and how to deploy your model in a good way by means of TensorFlow Serving and KubeFlow. Doing so, the authors describe a set of desirable qualities for production machine learning scenarios (think availability, scalability, latency, failure handling, monitoring, and so on), and teach you how to deploy models by means of Google Cloud Platform, TensorFlow Serving and KubeFlow. Great stuff!
The next chapters start zooming in on specific usage scenarios for your Deep Learning model. If you want to run your model in a web browser, to give just one example, that is entirely possible with TensorFlow.js. Chapter 10 focuses entirely on this matter. This is followed by Chapter 11, which shows how to convert your TensorFlow and Keras models into Core ML models, which allows you to run them on an iOS device. Chapter 12 extends this topic, Chapter 13 teaches you how to run TF models on Android, and Chapter 14 expands on this with the TensorFlow Object Detection API.
If you truly want to run your model in the field, it's likely that you'll be using a piece of embedded hardware to do so, like a Raspberry Pi, an FPGA board, an Arduino or an NVIDIA Jetson Nano. Chapter 15 compares those devices and gives you a hands-on example of running your model on an embedded device. The last two chapters, Chapters 16 and 17, move towards building a self-driving car, eventually providing a brief introduction to Reinforcement Learning.
Having studied this book (affiliate link) for a while, I can only argue that this is one of the best books that your money can buy at this point in time. It's good, because it introduces today's state-of-the-art Deep Learning libraries, and I think it's also future proof, because it covers three topics (automation, scaling & cloud based training and Edge AI) which in my point of view will be the most important ones in the years to come. Definitely a buy for me!
1. Programming PyTorch for Deep Learning: Creating and Deploying Deep Learning Applications, by Ian Pointer
If you want to get to know the PyTorch API in a learning by doing fashion.
Author: Ian Pointer
Publishing date: September 20, 2019
Price/quality: 🟢 Really good
What it covers:
- Deploying deep learning models into production
- Find out how PyTorch is used in various companies
- Learning how to create deep learning models with PyTorch
My impression:
The book starts with an introduction to deep learning and PyTorch. What machine do you need? What do you do when you want to train your model in the cloud? How to install PyTorch? Those are the questions that are covered before the actual work starts.
One of the main fields that was accelerated by deep learning is computer vision. That's why it's not surprising that this book starts off with computer vision, and specifically image classification, for you to start writing neural networks with PyTorch. It covers the dataset that you will be using, how to load it with PyTorch, how to build your training set and what the point is of testing and validation datasets. Subsequently, you'll create the neural network - train it - make predictions - and save the model. The book thus offers you a full iteration through the machine learning training workflow with PyTorch.
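To give a flavour of that workflow, here is a minimal PyTorch training-loop sketch of my own (with a random toy batch standing in for a real dataset - it is not the book's code): build a network, train it, make predictions, and save the model.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A random toy batch standing in for a real DataLoader.
inputs, targets = torch.randn(32, 784), torch.randint(0, 10, (32,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

predictions = model(inputs).argmax(dim=1)    # make predictions
torch.save(model.state_dict(), "model.pt")   # save the trained weights
```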
Once you have taken your first steps, the book continues with more advanced topics related to ConvNets (which are used for computer vision problems). You'll cover the conceptual stuff, a history of ConvNets, and pretrained models. This is followed by an entire chapter about Transfer Learning, which allows you to reuse models and train them for your problem at hand.
The other field that was massively accelerated by deep learning is natural language processing. It's not surprising, then, that after ConvNets, recurrent neural networks are introduced for text classification. The book introduces Torchtext for doing so, and covers augmenting your dataset. The chapter about text classification is followed by sound classification, debugging PyTorch models, using models in production (a Python Flask web service, Kubernetes deployment, TorchScript and libTorch) and more advanced topics (such as GANs).
Overall, this is a good book and it is like François Chollet's Deep Learning with Python: it introduces you to deep learning with the focus on one specific framework; in this case, PyTorch. Amazon reviews suggest that mainly the first chapters are really good, but that the latter ones are a bit more difficult to get through. That makes sense, but still, it's a good book if you specifically want to learn PyTorch.
2. PyTorch Computer Vision Cookbook: Over 70 recipes to master the art of computer vision with deep learning and PyTorch 1.x, by Michael Avendi
If you're looking for a book that teaches you PyTorch for computer vision.
Author: Michael Avendi
Publishing date: March 20, 2020
Price/quality: 🟢 Really good
What it covers:
- Developing, training and finetuning deep learning models with PyTorch.
- Specifically focus on computer vision tasks such as classification, object detection and object segmentation.
- Learning advanced applications of computer vision such as neural style transfer, image generation and video classification.
- Discovering best practices for applying deep learning to computer vision problems.
My impression:
This is a new book about using PyTorch for computer vision problems. As you most likely know, the field of computer vision is one of the ones most accelerated by the advent of deep learning since 2012. Today's deep learning frameworks therefore contain a lot of functionality specifically tailored to such models. In PyTorch Computer Vision Cookbook (affiliate link), Michael Avendi covers the breadth of applications of PyTorch to computer vision.
The first chapter covers getting started with PyTorch for deep learning. It provides some technical requirements for running PyTorch on your system, provides information and instructions about how to install the tools you need, and introduces you to some general PyTorch concepts - such as the nn.Sequential and nn.Module APIs, running the model on your GPU with CUDA, and saving/loading models.
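For reference, those general concepts look roughly like this - a small sketch of my own with a made-up network and file name, not taken from the book.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))

    def forward(self, x):
        return self.layers(x)

# Run on the GPU via CUDA when one is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = SmallNet().to(device)

torch.save(net.state_dict(), "net.pt")                            # saving
net.load_state_dict(torch.load("net.pt", map_location=device))    # loading
```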
Subsequently, Avendi proceeds by writing about binary image classification. That is, an image is assigned one of two classes. This is followed by a multiclass classification problem, where the image is assigned to one of multiple classes instead. In both chapters, the book follows a fairly standard workflow: exploring the dataset, splitting it into training/testing data, transforming it, building the classifier, performing hyperparameter tuning, training and evaluation, and deployment and inference. This covers almost the entire deep learning model lifecycle.
After classification, the book covers object detection. It covers both single-object detection and multi-object detection. The chapters about object detection follow a similar structure to the ones about image classification. That's unsurprising, given how similar those approaches are. And once you're up to speed about classification, it covers a more detailed approach to object detection - object segmentation. Rather than detecting the object and putting a bounding box around it, it classifies each pixel into a class, allowing you to create models that really segment objects.
More advanced topics follow next. First, Avendi covers Neural Style Transfer - an application area of deep learning where two images are blended to create a new one. It is unsurprising that Generative Adversarial Networks are covered next, which are used for generating new image data. Finally, the book covers video processing with PyTorch. Here too, all the chapters are focused on getting things done.
The PyTorch Computer Vision Cookbook (affiliate link) is therefore a highly practical book for those who wish to learn programming in PyTorch for computer vision. It provides many code examples, and covers the full breadth of computer vision application areas. Interesting book.
3. PyTorch 1.x Reinforcement Learning Cookbook: Over 60 recipes to design, develop, and deploy self-learning AI models using Python, by Yuxi (Hayden) Liu
If you're looking for a book that teaches PyTorch for reinforcement learning.
Author: Yuxi (Hayden) Liu
Publishing date: October 31, 2019
Price/quality: 🟢 Good
What it covers:
- Learning about Reinforcement Learning algorithms
- Applying Reinforcement Learning tools to simulate environments your agents operate in
- Using PyTorch to build the models
My impression:
In the news, you hear a lot about machine learning - but often you'll hear about one branch of ML, being supervised learning. Image classification, regression, object detection - those applications all require that a model is trained on a dataset before it can be used in practice.
Sometimes, however, this simply cannot be done. This could either be the case because the environment is too complex or too unstable, or because you don't have enough data to make a supervised approach worthwhile. Reinforcement Learning could then provide a viable path. With RL, which is an emergent theme in machine learning research, you effectively have an intelligent agent which you train by rewarding good behavior and punishing poor behavior.
The book PyTorch 1.x Reinforcement Learning (affiliate link) allows machine learning engineers to find quick solutions to a variety of Reinforcement Learning scenarios. It begins with the tools that you'll need for RL: your working environment, OpenAI Gym, Atari environments, CartPole, and developing algorithms with PyTorch (a minimal Gym sketch follows the list below). It then covers a variety of Reinforcement Learning techniques in subsequent chapters:
- Markov Decision Processes and Dynamic Programming
- Monte Carlo methods
- Temporal Difference and Q-Learning, including SARSA
- Multi-armed Bandit Problems
- Scaling up your RL approach
- Deep Q-Networks
- Policy Gradients
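As promised above, here is a minimal sketch of the kind of environment you'll be working with: the CartPole environment in OpenAI Gym, driven by a random policy. This is my own toy example using the classic pre-0.26 Gym API, not code from the book.

```python
import gym

env = gym.make("CartPole-v1")
observation = env.reset()
total_reward, done = 0.0, False

while not done:
    action = env.action_space.sample()     # a random policy - no learning yet
    observation, reward, done, info = env.step(action)
    total_reward += reward

print("Episode reward:", total_reward)
env.close()
```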
It's not a book for beginners - in the sense that if you have no prior experience with machine learning, you will find it really difficult. If, however, you are aware of what RL is, and want to gain both very detailed insights into the breadth of RL approaches and real practical experience, PyTorch 1.x Reinforcement Learning (affiliate link) could be a really good extension of your current knowledge.
4. Deep Learning with PyTorch 1.x: Implement deep learning techniques and neural network architecture variants using Python, by Laura Mitchell, Sri. Yogesh K. & Vishnu Subramanian
If you're looking for a book that introduces you to Deep Learning concepts and allows you to write code in the process.
Author: Laura Mitchell, Sri. Yogesh K. and Vishnu Subramanian
Publishing date: November 29, 2019
Price/quality: 🟢 Really good
What it covers:
- Learning to work with the PyTorch framework
- Understanding how to deploy training your PyTorch models on GPUs
- Using a wide range of model types - CNNs, RNNs, LSTMs, ResNet, DenseNet and Inception
- Applying your knowledge to application areas such as computer vision and Natural Language Processing
- Working with advanced neural networks such as GANs and Autoencoders, as well as Transfer Learning and Reinforcement Learning
My impression:
The book starts with an introduction to Deep Learning using PyTorch. It does so in a way that I often appreciate, namely by starting off with a dive into the history and origins of Artificial Intelligence and Machine Learning. I think that it is important for people to understand where things have come from… and how Deep Learning fits into this process. Subsequently, it covers application areas of Deep Learning, frameworks (specifically tailored to PyTorch) and setting up your work environment.
Subsequently, it dives into neural network building blocks, by explaining what they are, how they can be built with the PyTorch framework (nn.Sequential and nn.Module), and how tensor operations work. Those two chapters prepare you for more advanced topics, which follow next.
Neural networks are highly configurable - we all know that. That's why it's important to get a feeling for those internals too, and the book covers this by studying things like activation functions, which architecture to choose for which problem, loss functions, and how neural networks are optimized. In essence, this is the high-level supervised learning process that we also cover on this website.
Next, it proceeds with the two default application areas for Deep Learning - Computer Vision and Natural Language Processing. Those application areas are the ones where advances have had the greatest impact, and the book will teach you to create real deep learning models for those application areas with PyTorch. For computer vision, this includes studying CV-specific DL aspects (such as convolutions), transfer learning, and visualizing the black box. For Natural Language Processing, this includes working with text data (tokenization, n-gram representation and vectorization), using word embeddings and why ConvNets (CV networks!) can also be applied here.
Once you're up to speed about these two application areas, the book proceeds with Autoencoders and Generative Adversarial Networks - which spawn a wide range of new applications such as generative machine learning (yes, generating new data indeed). The book then covers Transfer Learning in more detail, introduces you to Deep Reinforcement Learning (including Q-learning and Policy methods) and finally covers what's next - i.e., areas that might gain popular traction in the years to come.
Like François Chollet's Deep Learning with Python does for Keras, I think this is a good book to get started with Deep Learning. It introduces you to the concepts and does not assume that you already know them, and it specifically focuses on one framework, which allows you to get practical experience in the process. If you're looking for a more mathematical book, this one isn't for you, but if you want to learn from a developer perspective - Deep Learning with PyTorch 1.x is really recommended.
5. Python Deep learning: Develop your first Neural Network in Python Using TensorFlow, Keras, and PyTorch, by Samuel Burns
Author: Samuel Burns
Publishing date: April 3, 2019
Price/quality: 🟠 Moderate
What it covers:
- Understanding Deep Learning in detail
- Getting started with Deep Learning in Python
- Coding a Neural Network from scratch
- Using Python 3.X, TensorFlow, Keras and PyTorch
My impression:
The book starts with an introduction to Deep Learning and Artificial Neural Networks. Subsequently, it explores the libraries and frameworks that it uses: TensorFlow, Keras and PyTorch, as well as instructions for installing them.
Once you have a working environment, the book proceeds with TensorFlow basics such as Constants, Variables and Sessions, which allow you to work with the framework. It then proceeds with what it calls Keras basics, such as Learning Rate, Optimizers, Metrics and Loss Functions. I cringe a bit here, as many of those Keras basics are fundamental concepts in Deep Learning instead - and do not fully belong to Keras itself. What's more, the book makes use of the 'old' version of Keras, which supports the CNTK, Theano and TensorFlow backends. Today, Keras is tightly coupled to TensorFlow as tensorflow.keras, and the other backends are no longer recommended. Take this into account when considering the book.
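For comparison, this is roughly where those "Keras basics" live in today's tensorflow.keras - a small sketch of my own, not the book's code: the learning rate sits inside the optimizer, and the loss function and metrics are passed to compile().

```python
from tensorflow.keras import Sequential, layers, optimizers

model = Sequential([
    layers.Dense(32, activation="relu", input_shape=(20,)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(
    optimizer=optimizers.Adam(learning_rate=1e-3),   # optimizer & learning rate
    loss="binary_crossentropy",                      # loss function
    metrics=["accuracy"],                            # metrics
)
```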
Once the Keras basics have been introduced, the book moves forward to PyTorch basics: computational graphs, tensors, building a neural network … and then applies it to ConvNets and Recurrent Neural Networks. That's all. In my opinion, the book stays at a high level, is a good starting point, but there are much better books out there today… especially since some of the concepts are already outdated (e.g. the Keras version used in the book). I wouldn't recommend this book per se: better pick one of the above.
1. Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow 2, by Sebastian Raschka
If you want to get a broad introduction to Machine Learning and Deep Learning, followed by Python examples with the Scikit-learn and TensorFlow 2.0 frameworks.
Author: Sebastian Raschka
Publishing date: December 9, 2019
Price/quality: 🟢 Really good
What it covers:
- Learn to use the Scikit-learn and TensorFlow frameworks for machine learning and deep learning.
- Study across a wide range of applications, such as image classification, sentiment analysis and more.
- Select and build a wide range of model types (neural networks and classic models) including best practices for evaluating and tuning them.
Read on this site:
We already covered Python Machine Learning (affiliate link) at a different spot on this website. Click here to read my impression of this book.
2. Introduction to Machine Learning with Python: A Guide for Data Scientists, by Andreas C. Müller & Sarah Guido
If you want to discover the breadth of machine learning algorithms including getting practical experience.
Author: Andreas C. Müller and Sarah Guido
Publishing date: October 2016
Price/quality: 🟢 Really good
What it covers:
- Diving into machine learning concepts before you start coding
- Taking a look at the advantages and disadvantages of various machine learning algorithms
- The machine learning workflow: data representation, feature selection, hyperparameter tuning, and model evaluation, including advanced topics such as pipelines and text processing
- Best practices for your machine learning projects
My impression:
The book starts with an introduction to machine learning. It covers problems that can be solved by ML and, more importantly, covers the limits as well - as machine learning is not the answer to all the problems. The introduction also covers why Python is the lingua franca for data science projects these days, helps you with Scikit-learn, covers a variety of essential tools (Jupyter, NumPy, SciPy, Matplotlib, Pandas, Mglearn) and allows you to work on writing your first machine learning model!
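To give an idea of what such a first model looks like in Scikit-learn, here is a minimal sketch of my own: k-nearest neighbours on the built-in iris dataset, in the spirit of the book's opening example but not its exact code.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print("Test accuracy:", knn.score(X_test, y_test))
```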
Machine learning has three broad categories of work - Supervised Learning, Unsupervised Learning (including data preprocessing) and Reinforcement Learning. The second chapter covers Supervised Learning and the wide range of models and model types available (kNN, linear models, Naive Bayes, Decision Trees, SVMs and ensembles). This includes best practices related to generalization, overfitting/underfitting and model uncertainty. The chapter is focused on classification and regression problems.
Once you're up to speed about supervised learning, you'll learn about Unsupervised Learning. The book covers data preprocessing and scaling, dimensionality reduction and clustering. This is followed by chapters on representing your dataset, feature selection, model evaluation and improvement, machine learning pipelines and working with textual data.
Altogether, this book allows you to get a very broad and yet in-depth understanding of the wide range of possibilities within the machine learning field. It also allows you to actually create models with the Scikit-learn framework and provides a wide range of examples for doing so. Although this does not include the deeper neural networks (which are often built with Keras and TensorFlow), this book is a really good basis for those who want to start with machine learning from a developer perspective. What's more, despite the fact that the book is relatively old (it was released in October 2016), Introduction to Machine Learning with Python (affiliate link) is still up to date, as the Scikit-learn API does not change very often. Definitely recommended!
3. Hands-On Unsupervised Learning Using Python: How to Build Applied Machine Learning Solutions from Unlabeled Data, by Ankur A. Patel
If you're looking for a book about unsupervised machine learning with Python.
Author: Ankur A. Patel
Publishing date: February 21, 2019
Price/quality: 🟡 Acceptable
What it covers:
- Comparing unsupervised learning to the other two approaches, supervised and reinforcement learning
- Setting up an end-to-end machine learning workflow, tailored to unsupervised models
- Performing anomaly detection, clustering and semisupervised learning
- Creating restricted Boltzmann machines and using GANs
My impression:
When you start the book, you will first cover the fundamentals of unsupervised learning in part one. This is important, because you have to understand where unsupervised learning is precisely located in the machine learning ecosystem. It covers the difference between rule-based models and machine learning, supervised versus unsupervised learning, and looks at the conceptual details of supervised and unsupervised approaches. It even looks at combining reinforcement learning with unsupervised learning!
Once you're through the concepts, the book allows you to set up your end-to-end machine learning project. It covers various libraries (TensorFlow, Keras, XGBoost, LightGBM) and allows you to set up a Jupyter Notebook in which you'll work. It then proceeds with data inspection, data preparation, model preparation, and picking a machine learning model. Then, you'll look at evaluating your model, model ensembles, and model selection. Note that this chapter includes a few supervised approaches, but that the book then continues with unsupervised learning.
Indeed: the next part fully covers unsupervised learning using Scikit-learn. It covers dimensionality reduction, principal component analysis, singular value decomposition and random projection… and a variety of other methods for unsupervised learning! This is all preparatory work and belongs to the sphere of feature selection.
The book then moves on to actual unsupervised machine learning approaches. It starts off with anomaly detection. Then, it proceeds with clustering, and group segmentation, before it moves to more advanced topics such as autoencoders, semisupervised learning and deep unsupervised learning (with Restricted Boltzmann Machines, GANs and deep clustering).
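As a taste of what such unsupervised approaches look like in code, here is a minimal k-means clustering sketch of my own with synthetic data - not an example taken from the book.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

print(kmeans.cluster_centers_)   # one centre per discovered group
print(kmeans.labels_[:10])       # cluster assignments for the first samples
```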
The reviews of Hands-On Unsupervised Learning (affiliate link) are mixed. Some call the examples trivial, and others mention that it seems to be hurried. However, others seem to be happy. Looking through the book, it indeed seems to be the case that explanations are often not too detailed, and that visualizations in particular are missing - which could have helped greatly. In my opinion, it's a good book - especially if you're looking for one about unsupervised machine learning - but you should already have some ML experience under your belt. And you should like quick and high-level explanations. It's not for beginners, I'd say.
4. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, by Aurélien Géron
If you want to get broad practical machine learning experience while understanding the concepts.
Author: Aurélien Géron
Publishing date: March 13, 2017
Price/quality: 🟢 Really good
What it covers:
- Exploring what's out there in the machine learning field: support vector machines, decision trees, random forests, ensemble methods, and neural networks.
- Using Scikit-learn and TensorFlow to build classic ML models and neural networks
- More advanced details about neural network architectures (ConvNets, recurrent nets, reinforcement learning)
My impression:
The book Hands-On Machine Learning aims to provide a very broad yet deep understanding of the machine learning field. As you would expect, it starts off with the ML fundamentals. First of all, questions like "what is machine learning?" and "why use it?" are answered, followed by a coverage of application areas, types of machine learning problems, and main challenges that you will face as a machine learning engineer.
Once you're up to speed about the basics, you'll learn what a real machine learning project entails - from getting up to speed with the data, selecting features that are useful, training your model, to deploying it into production settings. This provides the general basis that will prove to be very useful in all your machine learning projects.
The book then proceeds with a wide range of classic models - linear regression, polynomial regression, other linear models and logistic regression. In doing so, it introduces gradient descent based optimization, a technique that you will also find when studying deep(er) neural networks. Once this is done, you'll learn about Support Vector Machines - and how they work, both linearly and nonlinearly. And even how SVMs can be used for regression! And the book also makes sure that you'll understand how they work internally; it does not only provide code examples and guidance with respect to how to code them.
Decision trees, ensemble learning and random forests are subsequently covered as traditional machine learning techniques. Then, before it moves to neural networks and deep learning, it covers unsupervised approaches for dimensionality reduction and unsupervised learning (e.g. clustering).
As mentioned, it then moves into neural networks and deep learning territory. And despite the fact that I'm really impressed by the first part, I think this part covers many of the Deep Learning issues in great detail - it's a really good section. You'll learn about the history of neural networks first (Rosenblatt Perceptron and Multilayer Perceptron), and how they can be used in both regression and classification tasks. The book then moves to a practical implementation using TensorFlow 2.x based Keras; this is the version of Keras that is up to date with the state-of-the-art. It includes a variety of basic operations such as saving and restoring a model and using callbacks, as we are used to with the Keras library.
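Those basic operations look roughly as follows in tf.keras - a small sketch of my own with a made-up model and file names, not the book's code.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True),
    tf.keras.callbacks.ModelCheckpoint("best_model.h5", save_best_only=True),
]
# model.fit(X_train, y_train, validation_split=0.2, epochs=50, callbacks=callbacks)

model.save("final_model.h5")                              # saving
restored = tf.keras.models.load_model("final_model.h5")   # restoring
```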
If you think that's it, you're wrong :) The book then proceeds with advanced concepts related to deep learning, such as vanishing and exploding gradients (and what to do about them), better optimization techniques, and how overfitting can be avoided with regularization. In the final chapters, more advanced TensorFlow topics are covered - such as how to train models at scale - as well as one application area, being Deep Computer Vision.
I agree with the reviews. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems (affiliate link) is one of the best books out there right now. Despite its age (and other competitors emerging on the scene of combining various frameworks), this one is still worth buying! From this book onwards, you can proceed to books like the Pocket Reference so that you'll find even more details. Definitely recommended.
<iframe style="width:120px;height:240px;" marginwidth="0" marginheight="0" scrolling="no" frameborder="0" src="//ws-na.amazon-adsystem.com/widgets/q?ServiceVersion=20070822&OneJS=1&Operation=GetAdHtml&MarketPlace=US&source=ss&ref=as_ss_li_til&ad_type=product_link&tracking_id=webn3rd02-20&language=en_US&marketplace=amazon®ion=US&placement=1138495689&asins=1138495689&linkId=9ee7f5e8b908c536a937942bdeb80685&show_border=true&link_opens_in_new_window=true"></iframe>
If you want a practitioner's guide to the machine learning process - as well as applying the ML stack within R.
Author: Brad Boehmke and Brandon M. Greenwell
Publishing date: 2019
Price/quality: 🟢 Really good
What it covers:
- It takes a developer perspective to machine learning using R, by using a variety of R packages such as glmnet, h2o, ranger, xgboost, and keras;
- Nevertheless, it teaches the user about the entire machine learning process i.e. from feature engineering to model evaluation & interpretation.
- A variety of algorithms, such as regression, random forests, gradient boosting, and deep learning, is presented.
- Teaching you a firm understanding of what is possible with R when it comes to machine learning, including hands-on experience implementing such models.
My impression:
The book Hands-On Machine Learning with R (affiliate link) by Bradley Boehmke and Brandon M. Greenwell claims to be a practitioner's guide for machine learning in R. Since many machine learning models are created with Python these days, it could in theory be a great book for those who have R experience or don't want to make the switch to Python. This is especially true because the book argues that it will use a variety of frameworks that are well-known in the Python world. Let's take a look at how the book proceeds!
At a high level, we can observe that the book has four parts. Part 1, Fundamentals, will teach you, well, the fundamentals of machine learning. Chapter 1 teaches the reader what the differences are between supervised learning and unsupervised learning. This includes a look at the differences between regression and classification problems. Chapter 2 then moves on to the modeling process: from data splitting to creating models, resampling, and model evaluation. This is followed by a chapter on feature and target engineering, which is to be performed in many machine learning projects before a model can actually be trained. By looking at missing values, filtering features, numeric and categorical feature engineering and things like dimensionality reduction, the reader is presented with a thorough overview of what is necessary to train with good features.
Part 2 then moves on to Supervised Learning, which was introduced in chapter 1. The chapters provide content about Linear Regression, Logistic Regression, Regularized Regression and techniques like K-Nearest Neighbors and Decision Trees. Bagging, Random Forests and Gradient Boosting are also covered, and so are SVMs. In ML terms, those are often called relatively 'old-fashioned', but they are still very practically usable today. That's why I think it's good that this book covers those topics. Part 2 also covers Deep Learning, model ensembling and how to interpret machine learning models - i.e., looking inside the black box.
Part 3 moves to Dimensionality Reduction. Often, if you're dealing with a machine learning dataset, you have many dimensions, from which you should select a few when engineering your features. Principal Components Analysis, Generalized Low Rank Models and Autoencoders can all be used for reducing the dimensionality of your machine learning problem - whether that is by selecting dimensions or reducing the dimensionality altogether.
Finally, Part 4 moves forward to Clustering - that is, Unsupervised Learning. It introduces the reader to K-means clustering, Hierarchical clustering and Model-based clustering.
While the book (affiliate link) is available here for those who wish to take a look, I can highly recommend it. While the authors write relatively academically, they do so in an inviting way - they don't burden the reader with heavy maths, but rather provide source code and just enough math to build an intuition about what happens in the process. The book also covers a large share of the machine learning techniques used today (whether for feature engineering, training your model, or evaluating it), as well as a variety of machine learning algorithms. The authors do so with well-written English and a lot of examples - both visual examples and source code examples. That's why, given my ML experience, the structure of the book and its contents, I would definitely recommend it to those who wish to get a book about Machine Learning with R. While not cheap, it is definitely a great investment for those who really want to give it a go. Great buy!
Coming soon!
Coming soon!
Coming soon!
Coming soon!
Coming soon!