diff --git a/5-retail/lessons/2-check-stock-device/README.md b/5-retail/lessons/2-check-stock-device/README.md
index 304b5637..bd3ee4f9 100644
--- a/5-retail/lessons/2-check-stock-device/README.md
+++ b/5-retail/lessons/2-check-stock-device/README.md
@@ -10,13 +10,83 @@ Add a sketchnote if possible/appropriate
 
 ## Introduction
 
-In this lesson you will learn about
+In the previous lesson you learned about the different uses of object detection in retail. You also learned how to train an object detector to identify stock. In this lesson you will learn how to use your object detector from your IoT device to count stock.
 
 In this lesson we'll cover:
 
-* [Thing 1](#thing-1)
+* [Stock counting](#stock-counting)
+* [Call your object detector from your IoT device](#call-your-object-detector-from-your-iot-device)
+* [Bounding boxes](#bounding-boxes)
+* [Count stock](#count-stock)
 
-## Thing 1
+## Stock counting
+
+Object detectors can be used for stock checking, either counting stock or ensuring stock is where it should be. IoT devices with cameras can be deployed all around the store to monitor stock, starting with hot spots where keeping items stocked is important, such as areas where small numbers of high value items are stocked.
+
+For example, if a camera is pointing at a set of shelves that can hold 8 cans of tomato paste, and an object detector only detects 7 cans, then one is missing and needs to be restocked.
+
+![7 cans of tomato paste on a shelf, 4 on the top row, 3 on the bottom](../../../images/stock-7-cans-tomato-paste.png)
+
+In the above image, an object detector has detected 7 cans of tomato paste on a shelf that can hold 8 cans. Not only can the IoT device send a notification of the need to restock, but it can even give an indication of the location of the missing item, which is important data if you are using robots to restock shelves.
+
+> 💁 Depending on the store and popularity of the item, restocking probably wouldn't happen if only 1 can was missing. You would need to build an algorithm that determines when to restock based on your products, customers and other criteria.
+
+✅ In what other scenarios could you combine object detection and robots?
+
+Sometimes the wrong stock can be on the shelves. This could be human error when restocking, or customers changing their mind on a purchase and putting an item back in the first available space. When it is a non-perishable item such as canned goods, this is an annoyance. When it is a perishable item such as frozen or chilled goods, it can mean that the product can no longer be sold, as it might be impossible to tell how long the item was out of the freezer.
+
+Object detection can be used to detect unexpected items, again alerting a human or robot to return the item as soon as it is detected.
+
+![A rogue can of baby corn on the tomato paste shelf](../../../images/stock-rogue-corn.png)
+
+In the above image, a can of baby corn has been put on the shelf next to the tomato paste. The object detector has detected this, allowing the IoT device to notify a human or robot to return the can to its correct location.
+
+## Call your object detector from your IoT device
+
+The object detector you trained in the last lesson can be called from your IoT device.
+
+### Task - publish an iteration of your object detector
+
+Iterations are published from the Custom Vision portal.
+
+1. Launch the Custom Vision portal at [CustomVision.ai](https://customvision.ai) and sign in if you don't have it open already. Then open your `stock-detector` project.
+
+1. Select the **Performance** tab from the options at the top.
+
+1. Select the latest iteration from the *Iterations* list on the side.
+
+1. Select the **Publish** button for the iteration.
+
+    ![The publish button](../../../images/custom-vision-object-detector-publish-button.png)
+
+1. In the *Publish Model* dialog, set the *Prediction resource* to the `stock-detector-prediction` resource you created in the last lesson. Leave the name as `Iteration2`, and select the **Publish** button.
+
+1. Once published, select the **Prediction URL** button. This will show details of the prediction API, and you will need these to call the model from your IoT device. The lower section is labelled *If you have an image file*, and these are the details you want. Take a copy of the URL that is shown, which will be something like:
+
+    ```output
+    https://<location>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<id>/detect/iterations/Iteration2/image
+    ```
+
+    Where `<location>` will be the location you used when creating your custom vision resource, and `<id>` will be a long ID made up of letters and numbers.
+
+    Also take a copy of the *Prediction-Key* value. This is a secure key that you have to pass when you call the model. Only applications that pass this key are allowed to use the model; any other applications are rejected.
+
+    ![The prediction API dialog showing the URL and key](../../../images/custom-vision-prediction-key-endpoint.png)
+
+✅ When a new iteration is published, it will have a different name. How do you think you would change the iteration an IoT device is using?
+
+### Task - call your object detector from your IoT device
+
+Follow the relevant guide below to use the object detector from your IoT device:
+
+* [Arduino - Wio Terminal](wio-terminal-object-detector.md)
+* [Single-board computer - Raspberry Pi/Virtual device](single-board-computer-object-detector.md)
+
+## Bounding boxes
+
+## Count stock
+
+### Task - count stock
 
 ---
 
@@ -30,4 +100,4 @@
 ## Assignment
 
-[](assignment.md)
+[Use your object detector on the edge](assignment.md)
diff --git a/5-retail/lessons/2-check-stock-device/assignment.md b/5-retail/lessons/2-check-stock-device/assignment.md
index da157d5c..eb2342f5 100644
--- a/5-retail/lessons/2-check-stock-device/assignment.md
+++ b/5-retail/lessons/2-check-stock-device/assignment.md
@@ -1,9 +1,11 @@
-# 
+# Use your object detector on the edge
 
 ## Instructions
 
+In the last project, you deployed your image classifier to the edge. Do the same with your object detector: export it as a compact model, run it on the edge, and access the edge version from your IoT device.
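+
+For reference, a minimal sketch of what calling the edge-hosted model can look like from Python is shown below. It assumes the exported compact model is running in a local Custom Vision container; the hostname, port, and use of the `requests` package are assumptions, so adjust them to match your setup:
+
+```python
+import requests
+
+# An exported Custom Vision container accepts POST requests on its /image
+# endpoint. This URL assumes the container is running locally on port 80.
+prediction_url = 'http://localhost:80/image'
+
+# Send the raw bytes of a captured image to the local container.
+# No prediction key is needed, as the model runs on the edge, not in Azure.
+with open('image.jpg', 'rb') as image_file:
+    response = requests.post(prediction_url, data=image_file.read())
+
+# The response body has the same shape as the cloud prediction API
+for prediction in response.json()['predictions']:
+    print(f"{prediction['tagName']}:\t{prediction['probability'] * 100:.2f}%")
+```
+
+You can then filter and count the predictions exactly as you did with the cloud version.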
+
 
 ## Rubric
 
 | Criteria | Exemplary | Adequate | Needs Improvement |
 | -------- | --------- | -------- | ----------------- |
-| | | | |
+| Deploy your object detector to the edge | Was able to use the correct compact domain, export the object detector and run it on the edge | Was able to use the correct compact domain, and export the object detector, but was unable to run it on the edge | Was unable to use the correct compact domain, export the object detector, or run it on the edge |
diff --git a/5-retail/lessons/2-check-stock-device/code-detect/pi/fruit-quality-detector/app.py b/5-retail/lessons/2-check-stock-device/code-detect/pi/fruit-quality-detector/app.py
new file mode 100644
index 00000000..3686a971
--- /dev/null
+++ b/5-retail/lessons/2-check-stock-device/code-detect/pi/fruit-quality-detector/app.py
@@ -0,0 +1,40 @@
+import io
+import time
+from picamera import PiCamera
+
+from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
+from msrest.authentication import ApiKeyCredentials
+
+# Set up the camera
+camera = PiCamera()
+camera.resolution = (640, 480)
+camera.rotation = 0
+
+# Give the camera time to warm up
+time.sleep(2)
+
+# Capture an image into an in-memory stream
+image = io.BytesIO()
+camera.capture(image, 'jpeg')
+image.seek(0)
+
+# Save a copy of the image so you can see what was captured
+with open('image.jpg', 'wb') as image_file:
+    image_file.write(image.read())
+
+prediction_url = ''
+prediction_key = ''
+
+# Extract the endpoint, project ID and iteration name from the prediction URL
+parts = prediction_url.split('/')
+endpoint = 'https://' + parts[2]
+project_id = parts[6]
+iteration_name = parts[9]
+
+prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
+predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)
+
+# Send the image to the object detector
+image.seek(0)
+results = predictor.detect_image(project_id, iteration_name, image)
+
+# Filter out predictions below the threshold and print the rest
+threshold = 0.3
+
+predictions = (prediction for prediction in results.predictions if prediction.probability > threshold)
+
+for prediction in predictions:
+    print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
diff --git a/5-retail/lessons/2-check-stock-device/code-detect/virtual-iot-device/fruit-quality-detector/app.py b/5-retail/lessons/2-check-stock-device/code-detect/virtual-iot-device/fruit-quality-detector/app.py
new file mode 100644
index 00000000..2c5d3b9d
--- /dev/null
+++ b/5-retail/lessons/2-check-stock-device/code-detect/virtual-iot-device/fruit-quality-detector/app.py
@@ -0,0 +1,40 @@
+# Connect to CounterFit before using any virtual hardware
+from counterfit_connection import CounterFitConnection
+CounterFitConnection.init('127.0.0.1', 5000)
+
+import io
+from counterfit_shims_picamera import PiCamera
+
+from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
+from msrest.authentication import ApiKeyCredentials
+
+# Set up the virtual camera
+camera = PiCamera()
+camera.resolution = (640, 480)
+camera.rotation = 0
+
+# Capture an image into an in-memory stream
+image = io.BytesIO()
+camera.capture(image, 'jpeg')
+image.seek(0)
+
+# Save a copy of the image so you can see what was captured
+with open('image.jpg', 'wb') as image_file:
+    image_file.write(image.read())
+
+prediction_url = ''
+prediction_key = ''
+
+# Extract the endpoint, project ID and iteration name from the prediction URL
+parts = prediction_url.split('/')
+endpoint = 'https://' + parts[2]
+project_id = parts[6]
+iteration_name = parts[9]
+
+prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
+predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)
+
+# Send the image to the object detector
+image.seek(0)
+results = predictor.detect_image(project_id, iteration_name, image)
+
+# Filter out predictions below the threshold and print the rest
+threshold = 0.3
+
+predictions = (prediction for prediction in results.predictions if prediction.probability > threshold)
+
+for prediction in predictions:
+    print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
diff --git 
a/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/.gitignore b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/.gitignore new file mode 100644 index 00000000..89cc49cb --- /dev/null +++ b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/.gitignore @@ -0,0 +1,5 @@ +.pio +.vscode/.browse.c_cpp.db* +.vscode/c_cpp_properties.json +.vscode/launch.json +.vscode/ipch diff --git a/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/.vscode/extensions.json b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/.vscode/extensions.json new file mode 100644 index 00000000..0f0d7401 --- /dev/null +++ b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/.vscode/extensions.json @@ -0,0 +1,7 @@ +{ + // See http://go.microsoft.com/fwlink/?LinkId=827846 + // for the documentation about the extensions.json format + "recommendations": [ + "platformio.platformio-ide" + ] +} diff --git a/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/include/README b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/include/README new file mode 100644 index 00000000..194dcd43 --- /dev/null +++ b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/include/README @@ -0,0 +1,39 @@ + +This directory is intended for project header files. + +A header file is a file containing C declarations and macro definitions +to be shared between several project source files. You request the use of a +header file in your project source file (C, C++, etc) located in `src` folder +by including it, with the C preprocessing directive `#include'. + +```src/main.c + +#include "header.h" + +int main (void) +{ + ... +} +``` + +Including a header file produces the same results as copying the header file +into each source file that needs it. Such copying would be time-consuming +and error-prone. With a header file, the related declarations appear +in only one place. If they need to be changed, they can be changed in one +place, and programs that include the header file will automatically use the +new version when next recompiled. The header file eliminates the labor of +finding and changing all the copies as well as the risk that a failure to +find one copy will result in inconsistencies within a program. + +In C, the usual convention is to give header files names that end with `.h'. +It is most portable to use only letters, digits, dashes, and underscores in +header file names, and at most one dot. + +Read more about using header files in official GCC documentation: + +* Include Syntax +* Include Operation +* Once-Only Headers +* Computed Includes + +https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html diff --git a/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/lib/README b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/lib/README new file mode 100644 index 00000000..6debab1e --- /dev/null +++ b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/lib/README @@ -0,0 +1,46 @@ + +This directory is intended for project specific (private) libraries. +PlatformIO will compile them to static libraries and link into executable file. 
+
+The source code of each library should be placed in its own separate directory
+("lib/your_library_name/[here are source files]").
+
+For example, see a structure of the following two libraries `Foo` and `Bar`:
+
+|--lib
+|  |
+|  |--Bar
+|  |  |--docs
+|  |  |--examples
+|  |  |--src
+|  |     |- Bar.c
+|  |     |- Bar.h
+|  |  |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
+|  |
+|  |--Foo
+|  |  |- Foo.c
+|  |  |- Foo.h
+|  |
+|  |- README --> THIS FILE
+|
+|- platformio.ini
+|--src
+   |- main.c
+
+and a contents of `src/main.c`:
+```
+#include <Foo.h>
+#include <Bar.h>
+
+int main (void)
+{
+  ...
+}
+
+```
+
+PlatformIO Library Dependency Finder will automatically find dependent
+libraries by scanning project source files.
+
+More information about PlatformIO Library Dependency Finder
+- https://docs.platformio.org/page/librarymanager/ldf.html
diff --git a/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/platformio.ini b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/platformio.ini
new file mode 100644
index 00000000..5f3eb8a7
--- /dev/null
+++ b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/platformio.ini
@@ -0,0 +1,26 @@
+; PlatformIO Project Configuration File
+;
+;   Build options: build flags, source filter
+;   Upload options: custom upload port, speed and extra flags
+;   Library options: dependencies, extra library storages
+;   Advanced options: extra scripting
+;
+; Please visit documentation for the other options and examples
+; https://docs.platformio.org/page/projectconf.html
+
+[env:seeed_wio_terminal]
+platform = atmelsam
+board = seeed_wio_terminal
+framework = arduino
+lib_deps =
+    seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
+    seeed-studio/Seeed Arduino FS @ 2.0.3
+    seeed-studio/Seeed Arduino SFUD @ 2.0.1
+    seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
+    seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
+    seeed-studio/Seeed Arduino RTC @ 2.0.0
+    bblanchon/ArduinoJson @ 6.17.3
+build_flags =
+    -w
+    -DARDUCAM_SHIELD_V2
+    -DOV2640_CAM
\ No newline at end of file
diff --git a/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/src/camera.h b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/src/camera.h
new file mode 100644
index 00000000..2028039f
--- /dev/null
+++ b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/src/camera.h
@@ -0,0 +1,160 @@
+#pragma once
+
+#include <ArduCAM.h>
+#include <SPI.h>
+
+class Camera
+{
+public:
+    Camera(int format, int image_size) : _arducam(OV2640, PIN_SPI_SS)
+    {
+        _format = format;
+        _image_size = image_size;
+    }
+
+    bool init()
+    {
+        // Reset the CPLD
+        _arducam.write_reg(0x07, 0x80);
+        delay(100);
+
+        _arducam.write_reg(0x07, 0x00);
+        delay(100);
+
+        // Check if the ArduCAM SPI bus is OK
+        _arducam.write_reg(ARDUCHIP_TEST1, 0x55);
+        if (_arducam.read_reg(ARDUCHIP_TEST1) != 0x55)
+        {
+            return false;
+        }
+
+        // Change MCU mode
+        _arducam.set_mode(MCU2LCD_MODE);
+
+        uint8_t vid, pid;
+
+        // Check if the camera module type is OV2640
+        _arducam.wrSensorReg8_8(0xff, 0x01);
+        _arducam.rdSensorReg8_8(OV2640_CHIPID_HIGH, &vid);
+        _arducam.rdSensorReg8_8(OV2640_CHIPID_LOW, &pid);
+        if ((vid != 0x26) || ((pid != 0x41) && (pid != 0x42)))
+        {
+            return false;
+        }
+
+        _arducam.set_format(_format);
+        _arducam.InitCAM();
+        _arducam.OV2640_set_JPEG_size(_image_size);
+        _arducam.OV2640_set_Light_Mode(Auto);
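+        // Keep the colors natural, with no special effects applied to the image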
+        _arducam.OV2640_set_Special_effects(Normal);
+        delay(1000);
+
+        return true;
+    }
+
+    void startCapture()
+    {
+        _arducam.flush_fifo();
+        _arducam.clear_fifo_flag();
+        _arducam.start_capture();
+    }
+
+    bool captureReady()
+    {
+        return _arducam.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK);
+    }
+
+    bool readImageToBuffer(byte **buffer, uint32_t &buffer_length)
+    {
+        if (!captureReady()) return false;
+
+        // Get the image file length
+        uint32_t length = _arducam.read_fifo_length();
+        buffer_length = length;
+
+        if (length >= MAX_FIFO_SIZE)
+        {
+            return false;
+        }
+        if (length == 0)
+        {
+            return false;
+        }
+
+        // create the buffer
+        byte *buf = new byte[length];
+
+        uint8_t temp = 0, temp_last = 0;
+        int i = 0;
+        uint32_t buffer_pos = 0;
+        bool is_header = false;
+
+        _arducam.CS_LOW();
+        _arducam.set_fifo_burst();
+
+        while (length--)
+        {
+            temp_last = temp;
+            temp = SPI.transfer(0x00);
+            //Read JPEG data from FIFO
+            if ((temp == 0xD9) && (temp_last == 0xFF)) //If find the end ,break while,
+            {
+                buf[buffer_pos] = temp;
+
+                buffer_pos++;
+                i++;
+
+                _arducam.CS_HIGH();
+            }
+            if (is_header == true)
+            {
+                //Write image data to buffer if not full
+                if (i < 256)
+                {
+                    buf[buffer_pos] = temp;
+                    buffer_pos++;
+                    i++;
+                }
+                else
+                {
+                    _arducam.CS_HIGH();
+
+                    i = 0;
+                    buf[buffer_pos] = temp;
+
+                    buffer_pos++;
+                    i++;
+
+                    _arducam.CS_LOW();
+                    _arducam.set_fifo_burst();
+                }
+            }
+            else if ((temp == 0xD8) && (temp_last == 0xFF))
+            {
+                is_header = true;
+
+                buf[buffer_pos] = temp_last;
+                buffer_pos++;
+                i++;
+
+                buf[buffer_pos] = temp;
+                buffer_pos++;
+                i++;
+            }
+        }
+
+        _arducam.clear_fifo_flag();
+
+        _arducam.set_format(_format);
+        _arducam.InitCAM();
+        _arducam.OV2640_set_JPEG_size(_image_size);
+
+        // return the buffer
+        *buffer = buf;
+
+        return true;
+    }
+
+private:
+    ArduCAM _arducam;
+    int _format;
+    int _image_size;
+};
diff --git a/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/src/config.h b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/src/config.h
new file mode 100644
index 00000000..ef40b4fa
--- /dev/null
+++ b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/src/config.h
@@ -0,0 +1,49 @@
+#pragma once
+
+#include <string>
+
+using namespace std;
+
+// WiFi credentials
+const char *SSID = "";
+const char *PASSWORD = "";
+
+const char *PREDICTION_URL = "";
+const char *PREDICTION_KEY = "";
+
+// Microsoft Azure DigiCert Global Root G2 global certificate
+const char *CERTIFICATE =
+    "-----BEGIN CERTIFICATE-----\r\n"
+    "MIIF8zCCBNugAwIBAgIQAueRcfuAIek/4tmDg0xQwDANBgkqhkiG9w0BAQwFADBh\r\n"
+    "MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\r\n"
+    "d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH\r\n"
+    "MjAeFw0yMDA3MjkxMjMwMDBaFw0yNDA2MjcyMzU5NTlaMFkxCzAJBgNVBAYTAlVT\r\n"
+    "MR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKjAoBgNVBAMTIU1pY3Jv\r\n"
+    "c29mdCBBenVyZSBUTFMgSXNzdWluZyBDQSAwNjCCAiIwDQYJKoZIhvcNAQEBBQAD\r\n"
+    "ggIPADCCAgoCggIBALVGARl56bx3KBUSGuPc4H5uoNFkFH4e7pvTCxRi4j/+z+Xb\r\n"
+    "wjEz+5CipDOqjx9/jWjskL5dk7PaQkzItidsAAnDCW1leZBOIi68Lff1bjTeZgMY\r\n"
+    "iwdRd3Y39b/lcGpiuP2d23W95YHkMMT8IlWosYIX0f4kYb62rphyfnAjYb/4Od99\r\n"
+    "ThnhlAxGtfvSbXcBVIKCYfZgqRvV+5lReUnd1aNjRYVzPOoifgSx2fRyy1+pO1Uz\r\n"
+    "aMMNnIOE71bVYW0A1hr19w7kOb0KkJXoALTDDj1ukUEDqQuBfBxReL5mXiu1O7WG\r\n"
+    "0vltg0VZ/SZzctBsdBlx1BkmWYBW261KZgBivrql5ELTKKd8qgtHcLQA5fl6JB0Q\r\n"
+    "gs5XDaWehN86Gps5JW8ArjGtjcWAIP+X8CQaWfaCnuRm6Bk/03PQWhgdi84qwA0s\r\n"
+    "sRfFJwHUPTNSnE8EiGVk2frt0u8PG1pwSQsFuNJfcYIHEv1vOzP7uEOuDydsmCjh\r\n"
"lxuoK2n5/2aVR3BMTu+p4+gl8alXoBycyLmj3J/PUgqD8SL5fTCUegGsdia/Sa60\r\n" + "N2oV7vQ17wjMN+LXa2rjj/b4ZlZgXVojDmAjDwIRdDUujQu0RVsJqFLMzSIHpp2C\r\n" + "Zp7mIoLrySay2YYBu7SiNwL95X6He2kS8eefBBHjzwW/9FxGqry57i71c2cDAgMB\r\n" + "AAGjggGtMIIBqTAdBgNVHQ4EFgQU1cFnOsKjnfR3UltZEjgp5lVou6UwHwYDVR0j\r\n" + "BBgwFoAUTiJUIBiV5uNu5g/6+rkS7QYXjzkwDgYDVR0PAQH/BAQDAgGGMB0GA1Ud\r\n" + "JQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMHYG\r\n" + "CCsGAQUFBwEBBGowaDAkBggrBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQu\r\n" + "Y29tMEAGCCsGAQUFBzAChjRodHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGln\r\n" + "aUNlcnRHbG9iYWxSb290RzIuY3J0MHsGA1UdHwR0MHIwN6A1oDOGMWh0dHA6Ly9j\r\n" + "cmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5jcmwwN6A1oDOG\r\n" + "MWh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5j\r\n" + "cmwwHQYDVR0gBBYwFDAIBgZngQwBAgEwCAYGZ4EMAQICMBAGCSsGAQQBgjcVAQQD\r\n" + "AgEAMA0GCSqGSIb3DQEBDAUAA4IBAQB2oWc93fB8esci/8esixj++N22meiGDjgF\r\n" + "+rA2LUK5IOQOgcUSTGKSqF9lYfAxPjrqPjDCUPHCURv+26ad5P/BYtXtbmtxJWu+\r\n" + "cS5BhMDPPeG3oPZwXRHBJFAkY4O4AF7RIAAUW6EzDflUoDHKv83zOiPfYGcpHc9s\r\n" + "kxAInCedk7QSgXvMARjjOqdakor21DTmNIUotxo8kHv5hwRlGhBJwps6fEVi1Bt0\r\n" + "trpM/3wYxlr473WSPUFZPgP1j519kLpWOJ8z09wxay+Br29irPcBYv0GMXlHqThy\r\n" + "8y4m/HyTQeI2IMvMrQnwqPpY+rLIXyviI2vLoI+4xKE4Rn38ZZ8m\r\n" + "-----END CERTIFICATE-----\r\n"; \ No newline at end of file diff --git a/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/src/main.cpp b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/src/main.cpp new file mode 100644 index 00000000..5c3951a1 --- /dev/null +++ b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/src/main.cpp @@ -0,0 +1,129 @@ +#include +#include +#include +#include +#include "SD/Seeed_SD.h" +#include +#include +#include + +#include "config.h" +#include "camera.h" + +Camera camera = Camera(JPEG, OV2640_640x480); + +WiFiClientSecure client; + +void setupCamera() +{ + pinMode(PIN_SPI_SS, OUTPUT); + digitalWrite(PIN_SPI_SS, HIGH); + + Wire.begin(); + SPI.begin(); + + if (!camera.init()) + { + Serial.println("Error setting up the camera!"); + } +} + +void connectWiFi() +{ + while (WiFi.status() != WL_CONNECTED) + { + Serial.println("Connecting to WiFi.."); + WiFi.begin(SSID, PASSWORD); + delay(500); + } + + client.setCACert(CERTIFICATE); + Serial.println("Connected!"); +} + +void setup() +{ + Serial.begin(9600); + + while (!Serial) + ; // Wait for Serial to be ready + + delay(1000); + + connectWiFi(); + + setupCamera(); + + pinMode(WIO_KEY_C, INPUT_PULLUP); +} + +const float threshold = 0.3f; + +void detectStock(byte *buffer, uint32_t length) +{ + HTTPClient httpClient; + httpClient.begin(client, PREDICTION_URL); + httpClient.addHeader("Content-Type", "application/octet-stream"); + httpClient.addHeader("Prediction-Key", PREDICTION_KEY); + + int httpResponseCode = httpClient.POST(buffer, length); + + if (httpResponseCode == 200) + { + String result = httpClient.getString(); + + DynamicJsonDocument doc(1024); + deserializeJson(doc, result.c_str()); + + JsonObject obj = doc.as(); + JsonArray predictions = obj["predictions"].as(); + + for(JsonVariant prediction : predictions) + { + float probability = prediction["probability"].as(); + if (probability > threshold) + { + String tag = prediction["tagName"].as(); + char buff[32]; + sprintf(buff, "%s:\t%.2f%%", tag.c_str(), probability * 100.0); + Serial.println(buff); + } + } + } + + httpClient.end(); +} + +void buttonPressed() +{ + 
+    camera.startCapture();
+
+    while (!camera.captureReady())
+        delay(100);
+
+    Serial.println("Image captured");
+
+    byte *buffer;
+    uint32_t length;
+
+    if (camera.readImageToBuffer(&buffer, length))
+    {
+        Serial.print("Image read to buffer with length ");
+        Serial.println(length);
+
+        detectStock(buffer, length);
+
+        delete[] buffer;
+    }
+}
+
+void loop()
+{
+    if (digitalRead(WIO_KEY_C) == LOW)
+    {
+        buttonPressed();
+        delay(2000);
+    }
+
+    delay(200);
+}
\ No newline at end of file
diff --git a/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/test/README b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/test/README
new file mode 100644
index 00000000..b94d0890
--- /dev/null
+++ b/5-retail/lessons/2-check-stock-device/code-detect/wio-terminal/fruit-quality-detector/test/README
@@ -0,0 +1,11 @@
+
+This directory is intended for PlatformIO Unit Testing and project tests.
+
+Unit Testing is a software testing method by which individual units of
+source code, sets of one or more MCU program modules together with associated
+control data, usage procedures, and operating procedures, are tested to
+determine whether they are fit for use. Unit testing finds problems early
+in the development cycle.
+
+More information about PlatformIO Unit Testing:
+- https://docs.platformio.org/page/plus/unit-testing.html
diff --git a/5-retail/lessons/2-check-stock-device/single-board-computer-object-detector.md b/5-retail/lessons/2-check-stock-device/single-board-computer-object-detector.md
new file mode 100644
index 00000000..bee5b7df
--- /dev/null
+++ b/5-retail/lessons/2-check-stock-device/single-board-computer-object-detector.md
@@ -0,0 +1,70 @@
+# Call your object detector from your IoT device - Virtual IoT Hardware and Raspberry Pi
+
+Once your object detector has been published, it can be used from your IoT device.
+
+## Copy the image classifier project
+
+The majority of your stock detector is the same as the image classifier you created in a previous lesson.
+
+### Task - copy the image classifier project
+
+1. Create a folder called `stock-counter` either on your computer if you are using a virtual IoT device, or on your Raspberry Pi. If you are using a virtual IoT device, make sure you set up a virtual environment.
+
+1. Set up the camera hardware.
+
+    * If you are using a Raspberry Pi, you will need to fit the PiCamera. You might also want to fix the camera in a single position, for example, by hanging the cable over a box or can, or fixing the camera to a box with double-sided tape.
+    * If you are using a virtual IoT device, you will need to install CounterFit and the CounterFit PyCamera shim. If you are going to use still images, capture some images that your object detector hasn't seen yet. If you are going to use your web cam, make sure it is positioned so it can see the stock you are detecting.
+
+1. Replicate the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/README.md#task---capture-an-image-using-an-iot-device) to capture images from the camera.
+
+1. Replicate the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/README.md#task---classify-images-from-your-iot-device) to call the image classifier. The majority of this code will be re-used to detect objects.
+
+### Task - change the code from a classifier to an image detector
+
+1. Delete the three lines of code that classify the image and process the predictions:
+
+    ```python
+    results = predictor.classify_image(project_id, iteration_name, image)
+
+    for prediction in results.predictions:
+        print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
+    ```
+
+1. Add the following code to detect objects in the image:
+
+    ```python
+    results = predictor.detect_image(project_id, iteration_name, image)
+
+    threshold = 0.3
+
+    predictions = (prediction for prediction in results.predictions if prediction.probability > threshold)
+
+    for prediction in predictions:
+        print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
+    ```
+
+    This code calls the `detect_image` method on the predictor to run the object detector. It then gathers all the predictions with a probability above a threshold, printing them to the console.
+
+    Unlike an image classifier that only returns one result per tag, the object detector will return multiple results, so any with a low probability need to be filtered out.
+
+1. Run this code. It will capture an image, send it to the object detector, and print out the detected objects. If you are using a virtual IoT device, ensure you have an appropriate image set in CounterFit, or your web cam is selected. If you are using a Raspberry Pi, make sure your camera is pointing at objects on a shelf.
+
+    ```output
+    pi@raspberrypi:~/stock-counter $ python3 app.py
+    tomato paste: 34.13%
+    tomato paste: 33.95%
+    tomato paste: 35.05%
+    tomato paste: 32.80%
+    ```
+
+    > 💁 You may need to adjust the `threshold` to an appropriate value for your images.
+
+    You will be able to see the image that was taken, and these values in the **Predictions** tab in Custom Vision.
+
+    ![4 cans of tomato paste on a shelf with predictions for the 4 detections of 35.8%, 33.5%, 25.7% and 16.6%](../../../images/custom-vision-stock-prediction.png)
+
+> 💁 You can find this code in the [code-detect/pi](code-detect/pi) or [code-detect/virtual-iot-device](code-detect/virtual-iot-device) folder.
+
+😀 Your stock counter program was a success!
diff --git a/5-retail/lessons/2-check-stock-device/wio-terminal-object-detector.md b/5-retail/lessons/2-check-stock-device/wio-terminal-object-detector.md
new file mode 100644
index 00000000..af4f2bcc
--- /dev/null
+++ b/5-retail/lessons/2-check-stock-device/wio-terminal-object-detector.md
@@ -0,0 +1,72 @@
+# Call your object detector from your IoT device - Wio Terminal
+
+Once your object detector has been published, it can be used from your IoT device.
+
+## Copy the image classifier project
+
+The majority of your stock detector is the same as the image classifier you created in a previous lesson.
+
+### Task - copy the image classifier project
+
+1. Connect your ArduCam to your Wio Terminal, following the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/wio-terminal-camera.md#task---connect-the-camera).
+
+    You might also want to fix the camera in a single position, for example, by hanging the cable over a box or can, or fixing the camera to a box with double-sided tape.
+
+1. Create a brand new Wio Terminal project using PlatformIO. Call this project `stock-counter`.
+
+1. Replicate the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/README.md#task---capture-an-image-using-an-iot-device) to capture images from the camera.
+
+1. Replicate the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/README.md#task---classify-images-from-your-iot-device) to call the image classifier. The majority of this code will be re-used to detect objects.
+
+### Task - change the code from a classifier to an image detector
+
+1. Rename the `classifyImage` function to `detectStock`, changing both the name of the function and the call to it in the `buttonPressed` function.
+
+1. Above the `detectStock` function, declare a threshold to filter out any detections that have a low probability:
+
+    ```cpp
+    const float threshold = 0.3f;
+    ```
+
+    Unlike an image classifier that only returns one result per tag, the object detector will return multiple results, so any with a low probability need to be filtered out.
+
+1. In the `detectStock` function, replace the contents of the `for` loop that loops through the predictions with the following:
+
+    ```cpp
+    for(JsonVariant prediction : predictions)
+    {
+        float probability = prediction["probability"].as<float>();
+        if (probability > threshold)
+        {
+            String tag = prediction["tagName"].as<String>();
+            char buff[32];
+            sprintf(buff, "%s:\t%.2f%%", tag.c_str(), probability * 100.0);
+            Serial.println(buff);
+        }
+    }
+    ```
+
+    > 💁 Apart from the threshold, this code is the same as for the image classifier. One difference is the prediction URL that is called. Another difference is that the results return the location of the object, and this will be covered later in this lesson.
+
+1. Upload and run your code. Point the camera at objects on a shelf and press the C button. You will see the output in the serial monitor:
+
+    ```output
+    Connecting to WiFi..
+    Connected!
+    Image captured
+    Image read to buffer with length 17416
+    tomato paste: 35.84%
+    tomato paste: 35.87%
+    tomato paste: 34.11%
+    tomato paste: 35.16%
+    ```
+
+    > 💁 You may need to adjust the `threshold` to an appropriate value for your images.
+
+    You will be able to see the image that was taken, and these values in the **Predictions** tab in Custom Vision.
+
+    ![4 cans of tomato paste on a shelf with predictions for the 4 detections of 35.8%, 33.5%, 25.7% and 16.6%](../../../images/custom-vision-stock-prediction.png)
+
+> 💁 You can find this code in the [code-detect/wio-terminal](code-detect/wio-terminal) folder.
+
+😀 Your stock counter program was a success!
diff --git a/images/Diagrams.sketch b/images/Diagrams.sketch
index 086198c5..3a26364d 100644
Binary files a/images/Diagrams.sketch and b/images/Diagrams.sketch differ
diff --git a/images/custom-vision-object-detector-publish-button.png b/images/custom-vision-object-detector-publish-button.png
new file mode 100644
index 00000000..f8dec7a5
Binary files /dev/null and b/images/custom-vision-object-detector-publish-button.png differ
diff --git a/images/custom-vision-stock-prediction.png b/images/custom-vision-stock-prediction.png
new file mode 100644
index 00000000..0bb62bd2
Binary files /dev/null and b/images/custom-vision-stock-prediction.png differ