Adding lesson 16

pull/62/head
Jim Bennett 4 years ago
parent 4ca7868cb9
commit abaeae49cd

@ -15,6 +15,8 @@
"geofencing",
"microcontrollers",
"mosquitto",
"photodiode",
"photodiodes",
"sketchnote"
]
}

@ -10,11 +10,11 @@ The sensor you'll use is a [Grove GPS Air530 sensor](https://www.seeedstudio.com
This is a UART sensor, so it sends GPS data over UART.
## Connect the GPS sensor
The Grove GPS sensor can be connected to the Raspberry Pi.
### Task - connect the GPS sensor
Connect the GPS sensor.
@ -98,14 +98,14 @@ Program the device.
1. Reboot your Pi, then reconnect in VS Code once the Pi has rebooted.
1. From the terminal, create a new folder in the `pi` user's home directory called `gps-sensor`. Create a file in this folder called `app.py`.
1. Open this folder in VS Code.
1. The GPS module sends UART data over a serial port. Install the `pyserial` Pip package to communicate with the serial port from your Python code:
```sh
pip3 install pyserial
```
1. Add the following code to your `app.py` file:
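   The diff truncates the file contents at this point. As a hedged sketch only (not necessarily the lesson's exact code), a minimal UART GPS reader using `pyserial` might look like the following, assuming the sensor appears on the Pi's hardware UART at `/dev/ttyAMA0` and 9600 baud:

   ```python
   import time

   def print_gps_data(line: str) -> str:
       """Strip trailing whitespace from a raw NMEA sentence, print and return it."""
       cleaned = line.rstrip()
       print(cleaned)
       return cleaned

   if __name__ == '__main__':
       # pyserial is imported here so the helper above can be used without the package installed
       import serial

       # Hypothetical port - /dev/ttyAMA0 is the Pi's hardware UART; adjust for your setup
       gps = serial.Serial('/dev/ttyAMA0', 9600, timeout=1)
       gps.reset_input_buffer()
       gps.flush()

       while True:
           # Read and print every pending NMEA sentence, then pause briefly
           line = gps.readline().decode('utf-8')
           while len(line) > 0:
               print_gps_data(line)
               line = gps.readline().decode('utf-8')
           time.sleep(1)
   ```

   Each line the GPS sends is an NMEA sentence, such as `$GPGGA,...`, which later steps can parse for latitude and longitude.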

@ -22,6 +22,7 @@ In this lesson we'll cover:
* [Image classification via Machine Learning](#image-classification-via-machine-learning)
* [Train an image classifier](#train-an-image-classifier)
* [Test your image classifier](#test-your-image-classifier)
* [Retrain your image classifier](#retrain-your-image-classifier)
## Using AI and ML to sort food
@ -133,6 +134,8 @@ To use Custom Vision, you first need to create two cognitive services resources
### Task - create an image classifier project
1. Launch the Custom Vision portal at [CustomVision.ai](https://customvision.ai), and sign in with the Microsoft account you used for your Azure account.
1. Follow the [Create a new Project section of the Build a classifier quickstart on the Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/getting-started-build-a-classifier?WT.mc_id=academic-17441-jabenn#create-a-new-project) to create a new Custom Vision project. The UI may change and these docs are always the most up to date reference.
Call your project `fruit-quality-detector`.
@ -151,6 +154,8 @@ Ideally each picture should be just the fruit, with either a consistent backgrou
> 💁 It's important not to have specific backgrounds, or specific items that are not related to the thing being classified, for each tag, otherwise the classifier may just classify based on the background. There was a classifier for skin cancer that was trained on moles, both normal and cancerous, and the cancerous ones all had rulers against them to measure their size. It turned out the classifier was almost 100% accurate at identifying rulers in pictures, not cancerous moles.
Image classifiers run at very low resolution. For example, Custom Vision can take training and prediction images up to 10240x10240, but trains and runs the model on images at 227x227. Larger images are shrunk to this size, so make sure the thing you are classifying takes up a large part of the image, otherwise it may be too small in the smaller image used by the classifier.
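To get a feel for how much detail survives the shrink, here is some quick back-of-the-envelope arithmetic (the function name is just for illustration):

```python
def scaled_size(subject_px: int, original_px: int, model_px: int = 227) -> float:
    """Width of the subject, in pixels, after the whole image is shrunk to the model's input size."""
    return subject_px * model_px / original_px

# A banana 1000 pixels wide in a 10240-pixel-wide photo shrinks to about 22 pixels
print(round(scaled_size(1000, 10240), 1))  # → 22.2

# If the banana fills the frame, it keeps the full 227 pixels of the model's input
print(scaled_size(10240, 10240))  # → 227.0
```

A subject that fills only a tenth of the frame ends up a couple of dozen pixels wide, which is why the fruit should dominate each photo.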
1. Gather pictures for your classifier. You will need at least 5 pictures for each label to train the classifier, but the more the better. You will also need a few additional images to test the classifier. These images should all be different images of the same thing. For example:
* Using 2 ripe bananas, take some pictures of each one from a few different angles, taking at least 7 pictures (5 to train, 2 to test), but ideally more.
@ -181,18 +186,28 @@ Once your classifier is trained, you can test it by giving it a new image to cla
### Task - test your image classifier
1. Follow the [Test your model documentation on the Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/test-your-model?WT.mc_id=academic-17441-jabenn#test-your-model) to test your image classifier. Use the testing images you created earlier, not any of the images you used for training.
![A unripe banana predicted as unripe with a 98.9% probability, ripe with a 1.1% probability](../../../images/banana-unripe-quick-test-prediction.png)
1. Try all the testing images you have access to and observe the probabilities.
## Retrain your image classifier
When you test your classifier, it may not give the results you expect. Image classifiers use machine learning to make predictions about what is in an image, based on probabilities that particular features of an image mean that it matches a particular label. It doesn't understand what is in the image - it doesn't know what a banana is, or understand what makes a banana a banana instead of a boat. You can improve your classifier by retraining it with images it gets wrong.
Every time you make a prediction using the quick test option, the image and results are stored. You can use these images to retrain your model.
### Task - retrain your image classifier
1. Follow the [Use the predicted image for training documentation on the Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/test-your-model?WT.mc_id=academic-17441-jabenn#use-the-predicted-image-for-training) to retrain your model, using the correct tag for each image.
1. Once your model has been retrained, test it on new images.
---
## 🚀 Challenge
Image classifiers use machine learning to make predictions about what is in an image, based on probabilities that particular features of an image mean that it matches a particular label. It doesn't understand what is in the image - it doesn't know what a banana is or understand what makes a banana a banana instead of a boat.
What do you think would happen if you used a picture of a strawberry with a model trained on bananas, or a picture of an inflatable banana, or a person in a banana suit, or even a yellow cartoon character like someone from the Simpsons?
Try it out and see what the predictions are. You can find images to try with using [Bing Image search](https://www.bing.com/images/trending).

@ -10,13 +10,126 @@ Add a sketchnote if possible/appropriate
## Introduction
In the last lesson you learned about image classifiers, and how to train them to detect good and bad fruit. To use this image classifier in an IoT application, you need to be able to capture an image using some kind of camera, and send this image to the cloud to be classified.
In this lesson you will learn about camera sensors, and how to use them with an IoT device to capture an image. You will also learn how to call the image classifier from your IoT device.
In this lesson we'll cover:
* [Camera sensors](#camera-sensors)
* [Capture an image using an IoT device](#capture-an-image-using-an-iot-device)
* [Publish your image classifier](#publish-your-image-classifier)
* [Classify images from your IoT device](#classify-images-from-your-iot-device)
* [Improve the model](#improve-the-model)
## Camera sensors
Camera sensors, as the name suggests, are cameras that you can connect to your IoT device. They can take still images, or capture streaming video. Some will return raw image data, others will compress the image data into an image file such as a JPEG or PNG. Usually the cameras that work with IoT devices are much smaller and lower resolution than what you might be used to, but you can get high resolution cameras that will rival top-end phones. You can get all manner of interchangeable lenses, multiple camera setups, infra-red thermal cameras, or UV cameras.
![The light from a scene passes through a lens and is focused on a CMOS sensor](../../../images/cmos-sensor.png)
Most camera sensors use image sensors where each pixel is a photodiode. A lens focuses the image onto the image sensor, and thousands or millions of photodiodes detect the light falling on each one, and record that as pixel data.
> 💁 Lenses invert images, the camera sensor then flips the image back the right way round. This is the same in your eyes - what you see is detected upside down on the back of your eye and your brain corrects it.
> 🎓 The image sensor is known as an Active-Pixel Sensor (APS), and the most popular type of APS is a complementary metal-oxide semiconductor sensor, or CMOS. You may have heard the term CMOS sensor used for camera sensors.
Camera sensors are digital sensors, sending image data as digital data, usually with the help of a library that provides the communication. Cameras connect using protocols like SPI to allow them to send large quantities of data - images are substantially larger than single numbers from a sensor such as a temperature sensor.
✅ What are the limitations around image size with IoT devices? Think about the constraints especially on microcontroller hardware.
## Capture an image using an IoT device
You can use your IoT device to capture an image to be classified.
### Task - capture an image using an IoT device
Work through the relevant guide to capture an image using your IoT device:
* [Arduino - Wio Terminal](wio-terminal-camera.md)
* [Single-board computer - Raspberry Pi](pi-camera.md)
* [Single-board computer - Virtual device](virtual-device-camera.md)
## Publish your image classifier
You trained your image classifier in the last lesson. Before you can use it from your IoT device, you need to publish the model.
### Model iterations
When your model was training in the last lesson, you may have noticed that the **Performance** tab shows iterations on the side. When you first trained the model you would have seen *Iteration 1* in training. When you improved the model using the prediction images, you would have seen *Iteration 2* in training.
Every time you train the model, you get a new iteration. This is a way to keep track of the different versions of your model trained on different data sets. When you do a **Quick Test**, there is a drop-down you can use to select the iteration, so you can compare the results across multiple iterations.
When you are happy with an iteration, you can publish it to make it available to be used from external applications. This way you can have a published version that is used by your devices, then work on a new version over multiple iterations, then publish that once you are happy with it.
### Task - publish an iteration
Iterations are published from the Custom Vision portal.
1. Launch the Custom Vision portal at [CustomVision.ai](https://customvision.ai) and sign in if you don't have it open already.
1. Select the **Performance** tab from the options at the top.
1. Select the latest iteration from the *Iterations* list on the side.
1. Select the **Publish** button for the iteration.
![The publish button](../../../images/custom-vision-publish-button.png)
1. In the *Publish Model* dialog, set the *Prediction resource* to the `fruit-quality-detector-prediction` resource you created in the last lesson. Leave the name as `Iteration2`, and select the **Publish** button.
1. Once published, select the **Prediction URL** button. This will show details of the prediction API, and you will need these to call the model from your IoT device. The lower section is labelled *If you have an image file*, and these are the details you want. Take a copy of the URL that is shown, which will be something like:
```output
https://<location>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<id>/classify/iterations/Iteration2/image
```
Where `<location>` will be the location you used when creating your custom vision resource, and `<id>` will be a long ID made up of letters and numbers.
Also take a copy of the *Prediction-Key* value. This is a secure key that you have to pass when you call the model. Only applications that pass this key are allowed to use the model; any other applications are rejected.
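As a sketch of how these details fit together, the prediction endpoint can be called directly with the `requests` package by POSTing the raw image bytes with the key in a `Prediction-Key` header. The URL and key below are placeholders - substitute the values you copied:

```python
import requests

# Placeholders - replace with the prediction URL and key from the Custom Vision portal
PREDICTION_URL = 'https://<location>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<id>/classify/iterations/Iteration2/image'
PREDICTION_KEY = '<your prediction key>'

def classify_image(image_bytes: bytes) -> dict:
    """POST raw image bytes to the published iteration and return the JSON predictions."""
    response = requests.post(
        PREDICTION_URL,
        headers={
            'Prediction-Key': PREDICTION_KEY,
            'Content-Type': 'application/octet-stream',
        },
        data=image_bytes,
    )
    response.raise_for_status()
    return response.json()

if __name__ == '__main__':
    # Classify a local image file and print each tag with its probability
    with open('image.jpg', 'rb') as image_file:
        results = classify_image(image_file.read())
    for prediction in results['predictions']:
        print(prediction['tagName'], prediction['probability'])
```

Later lessons use a dedicated SDK rather than raw REST calls, but the same URL and key are what authorize the request either way.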
✅ When a new iteration is published, it will have a different name. How do you think you would change the iteration an IoT device is using?
## Classify images from your IoT device
You can now use these connection details to call the image classifier from your IoT device.
### Task - classify images from your IoT device
Work through the relevant guide to classify images using your IoT device:
* [Arduino - Wio Terminal](wio-terminal-classify-image.md)
* [Single-board computer - Raspberry Pi/Virtual IoT device](single-board-computer-classify-image.md)
## Improve the model
You may find that the results you get when using the camera connected to your IoT device don't match what you would expect. The predictions are not always as accurate as with images uploaded from your computer. This is because the model was trained on different data from what is being used for predictions.
To get the best results for an image classifier, you want to train the model with images that are as similar to the images used for predictions as possible. If you used your phone camera to capture images for training, for example, the image quality, sharpness, and color will be different to a camera connected to an IoT device.
![2 banana pictures, a low resolution one with poor lighting from an IoT device, and a high resolution one with good lighting from a phone](../../../images/banana-picture-compare.png)
In the image above, the banana picture on the left was taken using a Raspberry Pi Camera, the one on the right was taken of the same banana in the same location using an iPhone. There is a noticeable difference in quality - the iPhone picture is sharper, with brighter colors and more contrast.
✅ What else might cause the images captured by your IoT device to have incorrect predictions? Think about the environment an IoT device might be used in, what factors can affect the image being captured?
To improve the model, you can retrain it using the images captured from the IoT device.
### Task - improve the model
1. Classify multiple images of both ripe and unripe fruit using your IoT device.
1. In the Custom Vision portal, retrain the model using the images on the *Predictions* tab.
> ⚠️ You can refer to [the instructions for retraining your classifier in lesson 1](../1-train-fruit-detector/README.md#retrain-your-image-classifier) if needed.
1. If your images look very different to the original ones used for training, you can delete all the original images by selecting them in the *Training Images* tab and selecting the **Delete** button. To select an image, move your cursor over it and a tick will appear; select that tick to select or deselect the image.
1. Train a new iteration of the model and publish it using the steps above.
1. Update the endpoint URL in your code, and re-run the app.
1. Repeat these steps until you are happy with the results of the predictions.
---

@ -0,0 +1,16 @@
import io
import time
from picamera import PiCamera
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0
time.sleep(2)
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
with open('image.jpg', 'wb') as image_file:
    image_file.write(image.read())

@ -0,0 +1,133 @@
# Capture an image - Raspberry Pi
In this part of the lesson, you will add a camera sensor to your Raspberry Pi, and read images from it.
## Hardware
The Raspberry Pi needs a camera.
The camera you'll use is a [Raspberry Pi Camera Module](https://www.raspberrypi.org/products/camera-module-v2/). This camera is designed to work with the Raspberry Pi and connects via a dedicated connector on the Pi.
> 💁 This camera uses the [Camera Serial Interface, a protocol from the Mobile Industry Processor Interface Alliance](https://wikipedia.org/wiki/Camera_Serial_Interface), known as MIPI-CSI. This is a dedicated protocol for sending images.
## Connect the camera
The camera can be connected to the Raspberry Pi using a ribbon cable.
### Task - connect the camera
![A Raspberry Pi Camera](../../../images/pi-camera-module.png)
1. Power off the Pi.
1. Connect the ribbon cable that comes with the camera to the camera. To do this, pull gently on the black plastic clip in the holder so that it comes out a little bit, then slide the cable into the socket, with the blue side facing away from the lens, the metal pin strips facing towards the lens. Once it is all the way in, push the black plastic clip back into place.
You can find an animation showing how to open the clip and insert the cable on the [Raspberry Pi Getting Started with the Camera module documentation](https://projects.raspberrypi.org/en/projects/getting-started-with-picamera/2).
![The ribbon cable inserted into the camera module](../../../images/pi-camera-ribbon-cable.png)
1. Remove the Grove Base Hat from the Pi.
1. Pass the ribbon cable through the camera slot in the Grove Base Hat. Make sure the blue side of the cable faces towards the analog ports labelled **A0**, **A1** etc.
![The ribbon cable passing through the grove base hat](../../../images/grove-base-hat-ribbon-cable.png)
1. Insert the ribbon cable into the camera port on the Pi. Once again, pull the black plastic clip up, insert the cable, then push the clip back in. The blue side of the cable should face the USB and ethernet ports.
![The ribbon cable connected to the camera socket on the Pi](../../../images/pi-camera-socket-ribbon-cable.png)
1. Refit the Grove Base Hat.
## Program the camera
The Raspberry Pi can now be programmed to use the camera using the [PiCamera](https://pypi.org/project/picamera/) Python library.
### Task - program the camera
Program the device.
1. Power up the Pi and wait for it to boot.
1. Launch VS Code, either directly on the Pi, or connect via the Remote SSH extension.
1. By default the camera socket on the Pi is turned off. You can turn it on by running the following commands from your terminal:
```sh
sudo raspi-config nonint do_camera 0
sudo reboot
```
This will toggle a setting to enable the camera, then reboot the Pi to make that setting take effect. Wait for the Pi to reboot, then re-launch VS Code.
1. From the terminal, create a new folder in the `pi` users home directory called `fruit-quality-detector`. Create a file in this folder called `app.py`.
1. Open this folder in VS Code.
1. To interact with the camera, you can use the PiCamera Python library. Install the Pip package for this with the following command:
```sh
pip3 install picamera
```
1. Add the following code to your `app.py` file:
```python
import io
import time
from picamera import PiCamera
```
This code imports some libraries needed, including the `PiCamera` library.
1. Add the following code below this to initialize the camera:
```python
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0
time.sleep(2)
```
This code creates a PiCamera object and sets the resolution to 640x480. Although higher resolutions are supported (up to 3280x2464), the image classifier works on much smaller images (227x227), so there is no need to capture and send larger images.
The `camera.rotation = 0` line sets the rotation of the image. The ribbon cable comes into the bottom of the camera, but if your camera is rotated to make it easier to point at the item you want to classify, you can change this line to the number of degrees of rotation.
![The camera hanging down over a drink can](../../../images/pi-camera-upside-down.png)
For example, if you suspend the ribbon cable over something so that it is at the top of the camera, then set the rotation to be 180:
```python
camera.rotation = 180
```
The camera takes a few seconds to start up, hence the `time.sleep(2)` call.
1. Add the following code below this to capture the image as binary data:
```python
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
```
This code creates a `BytesIO` object to store binary data. The image is read from the camera as a JPEG file and stored in this object. The object has a position indicator tracking where it is in the data, so that more data can be written to the end if needed. The `image.seek(0)` line moves this position back to the start so that all the data can be read later.
1. Below this, add the following to save the image to a file:
```python
with open('image.jpg', 'wb') as image_file:
    image_file.write(image.read())
```
This code opens a file called `image.jpg` for writing, then reads all the data from the `BytesIO` object and writes that to the file.
> 💁 You can capture the image directly to a file instead of a `BytesIO` object by passing the file name to the `camera.capture` call. The reason for using the `BytesIO` object is so that later in this lesson you can send the image to your image classifier.
1. Point the camera at something and run this code.
1. An image will be captured and saved as `image.jpg` in the current folder. You will see this file in the VS Code explorer. Select the file to view the image. If it needs rotation, update the `camera.rotation = 0` line as necessary and take another picture.
> 💁 You can find this code in the [code-camera/pi](code-camera/pi) folder.
😀 Your camera program was a success!

@ -0,0 +1 @@
pip3 install azure-cognitiveservices-vision-customvision
