Chapter 8
Fast-AI Fastbook chapter 8, Collaborative Filtering
This chapter will be all about so-called latent factors. Latent factors are underlying concepts in your data that are not explicitly present, but can be learned by association.
For example, on Netflix you may have watched lots of movies that are science fiction, full of action, and were made in the 1970s. Netflix may not know these particular properties of the films you have watched, but it will be able to see that other people who watched the same movies as you also tended to watch other movies that are science fiction, full of action, and were made in the 1970s. In other words, to use this approach we don’t necessarily need to know anything about the movies, except who likes to watch them.
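The core trick can be sketched in a few lines: give each user and each movie a small vector of learned numbers (the latent factors), and predict a rating as their dot product. The factor values and labels below are made up purely for illustration; in practice the model learns them from ratings data without ever being told what they mean.

```python
import numpy as np

# Hypothetical latent factors, say: [sci-fi, action, 1970s]
user = np.array([0.9, 0.8, 0.7])      # this user loves all three
movie_a = np.array([0.95, 0.9, 0.8])  # a 1970s sci-fi action film
movie_b = np.array([0.1, 0.2, 0.1])   # a quiet modern drama

# Dot product of user and movie factors predicts the match
score_a = user @ movie_a
score_b = user @ movie_b
print(score_a > score_b)  # True: the sci-fi action film scores higher
```

Training consists of nudging these vectors so that the dot products match the ratings we actually observed.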
Chapter 7
Fast-AI Fastbook chapter 7, Training a state-of-the-art model
This chapter introduces some more advanced techniques for training an image classification model. To demonstrate them, we will be using Imagenette, a subset of ImageNet containing 10 very different categories.
Let's first create a simple model that will serve as our baseline:
dblock = DataBlock(blocks=(ImageBlock(), CategoryBlock()),
                   get_items=get_image_files,
                   get_y=parent_label,
                   item_tfms=Resize(460),
                   batch_tfms=aug_transforms(size=224, min_scale=0.75))
dls = dblock.dataloaders(path, bs=64)
Nothing new so far in the data setup.
Chapter 6
Fast-AI Fastbook chapter 6, Multi-Category and Regression
So up until now we have learned to do simple image recognition into a single category, and we have learned some ways to optimize our training and improve our models.
In this chapter we will look at two other types of Computer Vision problems: Multi-Category Classification and Regression. In the process we will learn more about output activations, and more types of loss functions.
Multi-Label Classification
This refers to the problem of assigning an image to any number of categories: possibly more than one, or even zero. With single-category classification, the model always outputs something, even if you feed it complete trash. That might not be what we want. On the other hand, an image may contain multiple objects belonging to different categories, and we might want to know about all of them, not just the most prominent one.
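A minimal sketch (pure NumPy, with made-up activation values) of why multi-label models use a sigmoid per category instead of a softmax across categories: each output becomes an independent yes/no probability, so an image can match several categories, or none at all.

```python
import numpy as np

def sigmoid(x):
    # Squashes any number into (0, 1), independently per element
    return 1 / (1 + np.exp(-x))

# Made-up raw model outputs for the categories ["person", "dog", "car"]
acts = np.array([2.1, 1.8, -3.0])

probs = sigmoid(acts)   # each probability independent of the others
labels = probs > 0.5    # apply a threshold per category
print(labels)           # -> [ True  True False]: both person AND dog
```

A softmax would have forced these three numbers to sum to 1, so the model could never say "both person and dog" or "none of the above".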
Chapter 5
Fast-AI Fastbook chapter 5, Deep dive Image Classification
Alright, now that we know how to run and create basic models, let’s dive a bit deeper into how PyTorch and fastai work under the hood. This will encompass the coming chapters, going from Image Classification to Natural Language Processing to really low-level architectural improvements.
But let's start relatively simple, by expanding the image classification example to go from binomial (2 choices) to multinomial (any number of choices).
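The key ingredient in going from 2 to N choices is the softmax function, which turns N raw activations into probabilities that sum to 1. A small illustrative sketch with made-up activations:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract the max for numerical stability
    return e / e.sum()

# Made-up activations for a 4-class problem
acts = np.array([0.5, 2.0, -1.0, 0.1])
probs = softmax(acts)

print(probs.sum())     # ~1.0: a proper probability distribution
print(probs.argmax())  # 1: the index of the predicted class
```

With only 2 classes this reduces to the familiar sigmoid, which is why the binomial case is just a special case of the multinomial one.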
Chapter 4
Fast-AI Fastbook chapter 4, MNIST basics
Let's look under the hood and see exactly what is going on in a relatively simple deep learning model.
How are images represented in a computer?
“Unfortunately” we cannot just feed an image as-is to a computer. To illustrate this, let's take as an example a model that can classify an image as either the number 3 or the number 7. This is done using MNIST as data. All images are grayscale (to make things easier).
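To a computer, a grayscale image is just a grid of numbers, one per pixel, typically from 0 (white) to 255 (black). A tiny made-up example, standing in for a 28x28 MNIST digit:

```python
import numpy as np

# A 5x5 "image": each entry is the darkness of one pixel (0-255)
img = np.array([
    [  0,   0,  50,  50,   0],
    [  0, 120, 255, 255,   0],
    [  0,   0,   0, 200,   0],
    [  0,   0, 180, 255,   0],
    [  0,   0,   0, 230,   0],
])

print(img.shape)  # (5, 5): height x width
print(img.max())  # 255: the darkest pixel
```

Everything the model does, it does with grids of numbers like this one.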
Chapter 1
Fast-AI Fastbook chapter 1, intro
Deep learning is used in many different areas, some of which are:
- Natural Language Processing (NLP). Speech recognition, summarizing documents, etc.
- Computer vision: Image interpretation, face recognition
- Medicine: X-ray image anomaly detection, diagnosing
- Biology: Protein synthesis sequencing
- Image Generation: Removing noise, converting images, generating images from text.
- Recommendation Systems
- Robotics
What is Machine Learning?
Machine learning is, like normal programming, a way to get computers to do a specific task. But instead of minutely telling the computer how to do something step by step, we give the computer loads of examples of the problem to solve (along with their solutions) and let it figure out a generic way to solve the problem by itself.
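This idea can be illustrated in a few lines of plain Python. Instead of programming the rule y = 2*x, we give the computer (x, y) example pairs and let it find the multiplier itself, by repeatedly nudging a guess in the direction that reduces its error. This toy loop is not from the book; it is just the smallest possible instance of "learning from examples".

```python
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]  # inputs and their answers

w = 0.0    # initial guess for the multiplier
lr = 0.01  # learning rate: how big each nudge is

for _ in range(1000):
    for x, y in examples:
        pred = w * x
        error = pred - y
        w -= lr * error * x  # nudge w to shrink the squared error

print(round(w, 2))  # close to 2.0, learned from the examples alone
```

Deep learning is this same loop, scaled up to millions of adjustable numbers instead of one.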