✅ You will be using VS Code for both Arduino and single-board computers. If you haven't used it before, read more about it on the [VS Code site](https://code.visualstudio.com?WT.mc_id=academic-17441-jabenn)
## Applications of IoT
IoT covers a huge range of use cases, across a few broad groups:
In this lesson, you will use the web SDK to draw a map and display your sensor's GPS location's path.
## Create an Azure Maps resource
Your first step is to create an Azure Maps account. You can do this using the CLI or in the [Azure portal](https://portal.azure.com?WT.mc_id=academic-17441-jabenn).
Using the CLI, create a maps account:
1. Select "Azure Maps" and click 'Create'.
```
az maps account create --name
--resource-group
[--accept-tos]
[--sku {S0, S1}]
[--subscription]
[--tags]
```
Use the `gps-sensor` resource group you've used in previous lessons. The S0 pricing tier will work for this small task.
A sample call would look like this:
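```
az maps account create --name MyMapsAccount --resource-group gps-sensor --sku S0 --accept-tos
```

Here `MyMapsAccount` is just a placeholder - use a name of your own, together with the `gps-sensor` resource group from the previous lessons.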
The service will deploy. Next, you need to get your Subscription Key. There are two ways to authenticate your maps in a web app: using Active Directory (AD) or 'Shared Key Authentication', also known as Subscription Key. We'll use the latter, for simplicity.
In the CLI, find your keys:
```
az maps account keys list --name
--resource-group
[--query-examples]
[--subscription]
```
A sample call would look like this:
```
az maps account keys list --name MyMapsAccount --resource-group MyResourceGroup
```
Make a note of the Primary Key value from the output - you will need it in your web app to authenticate.

✅ You can rotate and swap Shared Keys at will: switch your app to use the Secondary Key while rotating the Primary Key if needed.
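If you do need to regenerate a key, the CLI also has a renew command. The call below is a sketch - check `az maps account keys renew --help` for the exact options available in your CLI version:

```
az maps account keys renew --name MyMapsAccount --resource-group MyResourceGroup --key-type secondary
```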
The map will load in the `myMap` `div`, using your subscription key to authenticate:

```html
<script type="text/javascript">
    function init() {
        var map = new atlas.Map('myMap', {
            center: [-122.26473, 47.73444],
            zoom: 12,
            authOptions: {
                authType: "subscriptionKey",
                subscriptionKey: "<your primary key>"
            }
        });
    }
</script>
```
If you open your index.html page in a web browser, you should see a map loaded, and focused on the Seattle area.

✅ Experiment with the zoom and center parameters to change your map display. You can add different coordinates corresponding to your data's latitude and longitude to re-center the map.
> A better way to work with web apps locally is to install [http-server](https://www.npmjs.com/package/http-server). You will need [node.js](https://nodejs.org/) and [npm](https://www.npmjs.com/) installed before using this tool. Once those tools are installed, you can navigate to the location of your `index.html` file and type `http-server`. The web app will open on a local webserver http://127.0.0.1:8080/.
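For example (assuming node.js and npm are already installed):

```
npm install --global http-server
cd <folder containing index.html>
http-server
```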
## The GeoJSON format
Now that you have your web app in place with the map displaying, you need to extract GPS data from your storage and display it in a layer of markers on top of the map. Before we do that, let's look at the [GeoJSON](https://wikipedia.org/wiki/GeoJSON) format that is required by Azure Maps.
Sample GeoJSON data looks like this:

```json
{
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {
                "type": "Point",
                "coordinates": [
                    -122.26473,
                    47.73444
                ]
            },
            "properties": {}
        }
    ]
}
```
Of particular interest is the way the data is nested as a 'Feature' within a 'FeatureCollection'. Within that object can be found 'geometry' with the 'coordinates' indicating latitude and longitude.
✅ When building your GeoJSON, pay attention to the order of 'latitude' and 'longitude' in the object, or your points will not appear where they should!

`Geometry` can have different 'types', so that, for example, a polygon could be drawn on a map; in this case, a point is designated with two coordinates.
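For example, when creating a point with the web SDK's `atlas` namespace (as the `loadJSON()` function later in this lesson does), the longitude comes first:

```javascript
// GeoJSON positions are [longitude, latitude] - the reverse of the
// 'latitude, longitude' order coordinates are often written in
var position = [-122.26473, 47.73444]; // longitude first, then latitude
var point = new atlas.data.Point(position);
```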
✅ Azure Maps supports standard GeoJSON plus some [enhanced features](https://docs.microsoft.com/azure/azure-maps/extend-geojson?WT.mc_id=academic-17441-jabenn) including the ability to draw circles and other geometries.
## Plot GPS data on a Map using GeoJSON
```
az storage cors add --methods GET \
    --origins "*" \
    --services b \
    --account-name <storage account name> \
    --account-key <key1>
```
1. First, get the endpoint of your storage container. Using the Azure CLI, you can show its information:
```
az storage account blob-service-properties show --account-name
[--query-examples]
[--resource-group]
[--subscription]
```
A typical query would look like:
```
az storage account blob-service-properties show -n mystorageaccount -g MyResourceGroup
```
1. Use that endpoint to build up your init() function. Overwrite the previous function by adding the ability to fetch data:
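   The exact code depends on how your storage is set up, but a minimal sketch might look like the following. It assumes your GPS data is in a container called `gps-data` (swap in the container name you used in the previous lessons), that the container and its blobs are publicly readable, and it uses the `loadJSON()` helper you'll add in the next step:

    ```javascript
    // 'map' and 'features' are declared alongside loadJSON() in the next step
    async function init() {
        // List the blobs in the container - this is the request CORS needed to be enabled for
        const response = await fetch('https://<storage account name>.blob.core.windows.net/gps-data?restype=container&comp=list');
        const data = await response.text();

        // The blob listing is returned as XML - pull out the name of each blob
        const xmlDoc = new DOMParser().parseFromString(data, 'text/xml');
        const blobNames = Array.from(xmlDoc.querySelectorAll('Blob Name'));

        // Download each JSON file and convert it to a GeoJSON feature
        blobNames.forEach(name => {
            loadJSON('https://<storage account name>.blob.core.windows.net/gps-data/' + name.textContent);
        });

        // Create the map as before
        map = new atlas.Map('myMap', {
            center: [-122.26473, 47.73444],
            zoom: 12,
            authOptions: {
                authType: "subscriptionKey",
                subscriptionKey: "<your primary key>"
            }
        });

        // Once the map is ready, add the features to a data source drawn as a bubble layer
        map.events.add('ready', function () {
            var source = new atlas.source.DataSource();
            map.sources.add(source);
            map.layers.add(new atlas.layer.BubbleLayer(source));
            source.add(features);
        });
    }
    ```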
There are several things happening here. First, you fetch your data from your container using the endpoint you found with the Azure CLI. You parse each file in that blob storage to extract latitude and longitude. Then you initialize a map, adding a bubble layer that uses the fetched data as its source.
1. Add a loadJSON() function to your script block:
```javascript
var map, features;

function loadJSON(file) {
    var xhr = new XMLHttpRequest();
    features = [];
    xhr.onreadystatechange = function () {
        if (xhr.readyState === XMLHttpRequest.DONE) {
            if (xhr.status === 200) {
                // Parse the JSON file downloaded from blob storage
                var gps = JSON.parse(xhr.responseText);
                // GeoJSON positions are [longitude, latitude]
                features.push(
                    new atlas.data.Feature(new atlas.data.Point([parseFloat(gps.gps.lon), parseFloat(gps.gps.lat)]))
                );
            }
        }
    };
    xhr.open("GET", file, true);
    xhr.send();
}
```
This function is called for each blob by the fetch routine in `init()`, parsing the JSON data and converting it into longitude and latitude coordinates as GeoJSON.

Once parsed, the data is set as part of a GeoJSON `Feature`. The map will be initialized and little bubbles will appear around the path your data is plotting:

---
## 🚀 Challenge
It's nice to be able to display static data on a map as markers. Can you enhance this web app to add animation and show the path of the markers over time, using the timestamped json files? Here are [some samples](https://azuremapscodesamples.azurewebsites.net/) of using animation within maps.
## Post-lecture quiz
## Review & Self Study
Azure Maps is particularly useful for working with IoT devices. Research some of the uses in the [documentation](https://docs.microsoft.com/en-us/azure/azure-maps/tutorial-iot-hub-maps?WT.mc_id=academic-17441-jabenn). Deepen your knowledge of mapmaking and waypoints [with this Learn module](https://docs.microsoft.com/en-us/learn/modules/create-your-first-app-with-azure-maps/?WT.mc_id=academic-17441-jabenn).
This video gives an overview of the Azure Custom Vision service, a service that will be covered in this lesson.
[](https://www.youtube.com/watch?v=TETcDLJlWR4)
> 🎥 Click the image above to watch the video
## Pre-lecture quiz

✅ Take some time to explore the Custom Vision UI for your image classifier.
### Task - train your image classifier project
To train an image classifier, you will need multiple pictures of fruit in both good and bad condition to tag as good and bad, such as a ripe and an overripe banana.
* Repeat the same process using 2 unripe bananas
You should have at least 10 training images, with at least 5 ripe and 5 unripe, and 4 testing images: 2 ripe, 2 unripe. Your images should be png or jpegs, smaller than 6MB. If you create them with an iPhone, for example, they may be high-resolution HEIC images, so they will need to be converted and possibly shrunk. The more images the better, and you should have a similar number of ripe and unripe.
If you don't have both ripe and unripe fruit, you can use different fruits, or any two objects you have available. You can also find some example images in the [images](./images) folder of ripe and unripe bananas that you can use.
## Retrain your image classifier
When you test your classifier, it may not give the results you expect. Image classifiers use machine learning to make predictions about what is in an image, based on probabilities that particular features of an image mean that it matches a particular label. It doesn't understand what is in the image - it doesn't know what a banana is, or understand what makes a banana a banana instead of a boat. You can improve your classifier by retraining it with images it gets wrong.
Every time you make a prediction using the quick test option, the image and results are stored. You can use these images to retrain your model.
This video gives an overview of object detection with the Azure Custom Vision service, a topic that will be covered in this lesson.
[](https://www.youtube.com/watch?v=wtTYSyBUpFc)
> 🎥 Click the image above to watch the video
## Pre-lecture quiz
## Introduction
In the previous project, you used AI to train an image classifier - a model that can tell if an image contains something, such as ripe fruit or unripe fruit. Another type of AI model that can be used with images is object detection. These models don't classify an image by tags; instead, they are trained to recognize objects, and can find them in images, not only detecting that the object is present, but also detecting where in the image it is. This allows you to count objects in images.
In this lesson you will learn about object detection, including how it can be used in retail. You will also learn how to train an object detector in the cloud.
In this lesson we'll cover:
* [Object detection](#object-detection)
* [Use object detection in retail](#use-object-detection-in-retail)
* [Train an object detector](#train-an-object-detector)
* [Test your object detector](#test-your-object-detector)
* [Retrain your object detector](#retrain-your-object-detector)
## Object detection
Object detection involves detecting objects in images using AI. Unlike the image classifier you trained in the last project, object detection is not about predicting the best tag for an image as a whole, but about finding one or more objects in an image.
### Object detection vs image classification
Image classification is about classifying an image as a whole - what are the probabilities that the whole image matches each tag. You get back probabilities for every tag used to train the model.

In the example above, two images are classified using a model trained to classify tubs of cashew nuts or cans of tomato paste. The first image is a tub of cashew nuts, and has two results from the image classifier:
| Tag | Probability |
| -------------- | ----------: |
| `cashew nuts` | 98.4% |
| `tomato paste` | 1.6% |
The second image is of a can of tomato paste, and the results are:
| Tag | Probability |
| -------------- | ----------: |
| `cashew nuts` | 0.7% |
| `tomato paste` | 99.3% |
You could use these values with a threshold percentage to predict what was in the image. But what if an image contained multiple cans of tomato paste, or both cashew nuts and tomato paste? The results would probably not give you what you want. This is where object detection comes in.
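As a quick sketch of that threshold approach (the numbers are just the values from the tables above, and the data shape is assumed for illustration):

```javascript
// Pick the most probable tag, but only trust it above a threshold
function predictTag(predictions, threshold) {
    const best = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
    return best.probability >= threshold ? best.tag : 'unknown';
}

console.log(predictTag([
    { tag: 'cashew nuts', probability: 0.984 },
    { tag: 'tomato paste', probability: 0.016 }
], 0.75)); // cashew nuts
```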
Object detection involves training a model to recognize objects. Instead of giving it images containing the object and telling it each image is one tag or another, you highlight the section of an image that contains the specific object, and tag that. You can tag a single object in an image or multiple. This way the model learns what the object itself looks like, not just what images that contain the object look like.
When you then use it to predict images, instead of getting back a list of tags and percentages, you get back a list of detected objects, with their bounding box and the probability that the object matches the assigned tag.
> 🎓 *Bounding boxes* are the boxes around an object. They are given using coordinates relative to the image as a whole, on a scale of 0-1. For example, if the image is 800 pixels wide by 600 tall, and the object is detected between 400 and 600 pixels along, and 150 and 300 pixels down, the bounding box would have a top/left coordinate of 0.5,0.25, with a width of 0.25 and a height of 0.25. That way no matter what size the image is scaled to, the bounding box starts half way along and a quarter of the way down, and is a quarter of the width and the height.
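A quick sketch of that arithmetic, using the numbers from the example above:

```javascript
// Convert a pixel-space box into the relative 0-1 coordinates used for bounding boxes
function toRelativeBox(left, top, width, height, imageWidth, imageHeight) {
    return {
        left: left / imageWidth,
        top: top / imageHeight,
        width: width / imageWidth,
        height: height / imageHeight
    };
}

// An 800x600 image, with an object from 400-600 pixels along and 150-300 pixels down
console.log(toRelativeBox(400, 150, 200, 150, 800, 600));
// { left: 0.5, top: 0.25, width: 0.25, height: 0.25 }
```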

The image above contains both a tub of cashew nuts and three cans of tomato paste. The object detector detected the cashew nuts, returning the bounding box that contains the cashew nuts with the percentage chance that that bounding box contains the object, in this case 97.6%. The object detector has also detected three cans of tomato paste, and provides three separate bounding boxes, one for each detected can, and each one has a percentage probability that the bounding box contains a can of tomato paste.
✅ Think of some different scenarios you might want to use image-based AI models for. Which ones would need classification, and which would need object detection?
### How object detection works
Object detection uses complex ML models. These models work by dividing the image up into multiple cells, then checking whether each cell could be the center of an object that matches one of the images used to train the model. You can think of this as kind of like running an image classifier over different parts of the image to look for matches.
> 💁 This is a drastic over-simplification. There are many techniques for object detection, and you can read more about them on the [Object detection page on Wikipedia](https://wikipedia.org/wiki/Object_detection).
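To make that intuition concrete, here is a toy sketch of the grid idea. `classifyCell` is a hypothetical helper standing in for an image classifier run on a single crop, and real detectors are far more sophisticated than this:

```javascript
// A toy illustration only - NOT how production object detectors work
function detectObjects(image, classifyCell, gridSize, threshold) {
    const detections = [];
    const cellWidth = image.width / gridSize;
    const cellHeight = image.height / gridSize;

    for (let row = 0; row < gridSize; row++) {
        for (let col = 0; col < gridSize; col++) {
            // Classify one cell of the grid - classifyCell returns { tag, probability }
            const cell = { left: col * cellWidth, top: row * cellHeight, width: cellWidth, height: cellHeight };
            const result = classifyCell(image, cell);

            if (result.probability >= threshold) {
                // Keep the cell as a (very rough) bounding box for the detected object
                detections.push({ tag: result.tag, boundingBox: cell, probability: result.probability });
            }
        }
    }

    return detections;
}
```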
There are a number of different models that can do object detection. One particularly famous model is [YOLO (You only look once)](https://pjreddie.com/darknet/yolo/), which is incredibly fast and can detect 20 different classes of objects, such as people, dogs, bottles and cars.
✅ Read up on the YOLO model at [pjreddie.com/darknet/yolo/](https://pjreddie.com/darknet/yolo/)
Object detection models can be re-trained using transfer learning to detect custom objects.
## Use object detection in retail
Object detection has multiple uses in retail. Some include:
* **Stock checking and counting** - recognizing when stock is low on shelves. If stock is too low, notifications can be sent to staff or robots to re-stock shelves.
* **Mask detection** - in stores with mask policies during public health events, object detection can recognize people with masks and those without.
* **Automated billing** - detecting items picked off shelves in automated stores and billing customers appropriately.
* **Hazard detection** - recognizing broken items on floors, or spilled liquids, alerting cleaning crews.
✅ Do some research: What are some more use cases for object detection in retail?
## Train an object detector
You can train an object detector using Custom Vision, in a similar way to how you trained an image classifier.
### Task - create an object detector
1. Create a Resource Group for this project called `stock-detector`
1. Create a free Custom Vision training resource, and a free Custom Vision prediction resource in the `stock-detector` resource group. Name them `stock-detector-training` and `stock-detector-prediction`.
> 💁 You can only have one free training and prediction resource, so make sure you've cleaned up your project from the earlier lessons.
> ⚠️ You can refer to [the instructions for creating training and prediction resources from project 4, lesson 1 if needed](../../../4-manufacturing/lessons/1-train-fruit-detector/README.md#task---create-a-cognitive-services-resource).
1. Launch the Custom Vision portal at [CustomVision.ai](https://customvision.ai), and sign in with the Microsoft account you used for your Azure account.
1. Follow the [Create a new Project section of the Build an object detector quickstart on the Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/get-started-build-detector?WT.mc_id=academic-17441-jabenn#create-a-new-project) to create a new Custom Vision project. The UI may change and these docs are always the most up to date reference.
Call your project `stock-detector`.
When you create your project, make sure to use the `stock-detector-training` resource you created earlier. Use an *Object Detection* project type, and the *Products on Shelves* domain.

> 💁 The products on shelves domain is specifically targeted for detecting stock on store shelves.
✅ Take some time to explore the Custom Vision UI for your object detector.
### Task - train your object detector
To train your model you will need a set of images containing the objects you want to detect.
1. Gather images that contain the object to detect. You will need at least 15 images containing each object to detect from a variety of different angles and in different lighting conditions, but the more the better. You will also need a few images to test the model. If you are detecting more than one object, you will want some testing images that contain all the objects.
> 💁 Images with multiple different objects count towards the 15 image minimum for all the objects in the image.
Your images should be png or jpegs, smaller than 6MB. If you create them with an iPhone, for example, they may be high-resolution HEIC images, so they will need to be converted and possibly shrunk. The more images the better, and you should have a similar number of images of each object.
The model is designed for products on shelves, so try to take the photos of the objects on shelves.
You can find some example images of cashew nuts and tomato paste that you can use in the [images](./images) folder.
1. Follow the [Upload and tag images section of the Build an object detector quickstart on the Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/get-started-build-detector?WT.mc_id=academic-17441-jabenn#upload-and-tag-images) to upload your training images. Create relevant tags depending on the types of objects you want to detect.

When you draw bounding boxes for objects, keep them nice and tight around the object. It can take a while to outline all the images, but the tool will detect what it thinks are the bounding boxes, making it faster.

> 💁 If you have more than 15 images for each object, you can train after 15 then use the **Suggested tags** feature. This will use the trained model to detect the objects in the untagged images. You can then confirm the detected objects, or reject and re-draw the bounding boxes. This can save a *lot* of time.
1. Follow the [Train the detector section of the Build an object detector quickstart on the Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/get-started-build-detector?WT.mc_id=academic-17441-jabenn#train-the-detector) to train the object detector on your tagged images.
You will be given a choice of training type. Select **Quick Training**.
The object detector will then train. It will take a few minutes for the training to complete.
## Test your object detector
Once your object detector is trained, you can test it by giving it new images to detect objects in.
### Task - test your object detector
1. Use the **Quick Test** button to upload testing images and verify the objects are detected. Use the testing images you created earlier, not any of the images you used for training.

1. Try all the testing images you have access to and observe the probabilities.
## Retrain your object detector
When you test your object detector, it may not give the results you expect, the same as with image classifiers in the previous project. You can improve your object detector by retraining it with images it gets wrong.
Every time you make a prediction using the quick test option, the image and results are stored. You can use these images to retrain your model.
1. Use the **Predictions** tab to locate the images you used for testing
1. Confirm any accurate detections, delete any incorrect ones and add any missing objects.
1. Retrain and re-test the model.
---
## 🚀 Challenge
What would happen if you used the object detector with similar looking items, such as same brand cans of tomato paste and chopped tomatoes?
If you have any similar looking items, test it out by adding images of them to your object detector.
* When you trained your object detector, you would have seen values for *Precision*, *Recall*, and *mAP* that rate the model that was created. Read up on what these values are using [the Evaluate the detector section of the Build an object detector quickstart on the Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/get-started-build-detector?WT.mc_id=academic-17441-jabenn#evaluate-the-detector)
* Read more about object detection on the [Object detection page on Wikipedia](https://wikipedia.org/wiki/Object_detection)