commit c820318a41

@ -1,9 +0,0 @@

# Dummy File

This file acts as a placeholder for the `translations` folder. <br>

**Please remove this file after adding the first translation**

For the instructions, follow the directives in the [translations guide](https://github.com/microsoft/IoT-For-Beginners/blob/main/TRANSLATIONS.md).

## THANK YOU

We truly appreciate your efforts!
@ -0,0 +1,20 @@

# IoT in farming

As the population grows, so does the demand for agriculture. While the amount of farmland is not changing much, the climate certainly is - putting farmers under even more pressure, especially the 2 billion [subsistence farmers](https://wikipedia.org/wiki/Subsistence_agriculture) whose families depend on the crops they grow for their food. IoT can go a long way to helping farmers decide which crops to grow and when to start work, increase yields, reduce manual labour, and detect and eliminate pests.

In these 6 lessons we will learn how to apply the Internet of Things to improve and automate farming.

> 💁 These lessons will use cloud resources. Even if you can't complete all the lessons in this project, make sure you read the [Clean up your project](../clean-up.md) section.

## Topics

1. [Predicting plant growth with IoT](lessons/1-predict-plant-growth/README.md)
1. [Detecting soil moisture](lessons/2-detect-soil-moisture/README.md)
1. [Automated plant watering](lessons/3-automated-plant-watering/README.md)
1. [Controlling your plant from the cloud](lessons/4-migrate-your-plant-to-the-cloud/README.md)
1. [Controlling the application from the cloud](lessons/5-migrate-application-to-the-cloud/README.md)
1. [Keeping your plant secure](lessons/6-keep-your-plant-secure/README.md)

## Credits

♥️ Every lesson was made with love by [Jim Bennett](https://GitHub.com/JimBobBennett)
@ -1,9 +1,13 @@

# Run other services on the edge

## Instructions

It's not just image classifiers that can be run on the edge: anything that can be packaged up into a container can be deployed to an IoT Edge device. Serverless code running as Azure Functions, such as the triggers you've created in earlier lessons, can be run in containers, and therefore on IoT Edge.

Pick one of the previous lessons and try to run the Azure Functions app in an IoT Edge container. You can find a guide that shows how to do this using a different Functions app project in [Tutorial: Deploy Azure Functions as IoT Edge modules on Microsoft docs](https://docs.microsoft.com/azure/iot-edge/tutorial-deploy-function?view=iotedge-2020-11&WT.mc_id=academic-17441-jabenn).
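To recap, the kind of code being moved here is a simple event trigger. A minimal sketch of one in Python (the binding and names are illustrative assumptions, not this course's exact code):

```python
# A minimal Azure Functions Event Hub trigger of the kind that can be
# packaged into a container and deployed to IoT Edge.
import logging
import azure.functions as func

def main(event: func.EventHubEvent):
    # Log the body of each IoT message the trigger receives
    body = event.get_body().decode('utf-8')
    logging.info(f'Python EventHub trigger processed a message: {body}')
```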
## Rubric

| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Deploy an Azure Functions app to IoT Edge | Was able to deploy an Azure Functions app to IoT Edge and use it with an IoT device to run a trigger from IoT data | Was able to deploy a Functions App to IoT Edge, but was unable to get the trigger to fire | Was unable to deploy a Functions App to IoT Edge |
@ -0,0 +1,28 @@

import io
import requests
import time
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0

# Give the camera time to warm up
time.sleep(2)

# Capture an image into an in-memory buffer and save it to disk
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)

with open('image.jpg', 'wb') as image_file:
    image_file.write(image.read())

# Send the image to the classifier as the body of a REST POST request
prediction_url = '<URL>'
headers = {
    'Content-Type' : 'application/octet-stream'
}
image.seek(0)
response = requests.post(prediction_url, headers=headers, data=image)
results = response.json()

# Print the tag and probability for each prediction
for prediction in results['predictions']:
    print(f'{prediction["tagName"]}:\t{prediction["probability"] * 100:.2f}%')
@ -0,0 +1,28 @@

from counterfit_connection import CounterFitConnection
CounterFitConnection.init('127.0.0.1', 5000)

import io
import requests
from counterfit_shims_picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0

# Capture an image into an in-memory buffer and save it to disk
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)

with open('image.jpg', 'wb') as image_file:
    image_file.write(image.read())

# Send the image to the classifier as the body of a REST POST request
prediction_url = '<URL>'
headers = {
    'Content-Type' : 'application/octet-stream'
}
image.seek(0)
response = requests.post(prediction_url, headers=headers, data=image)
results = response.json()

# Print the tag and probability for each prediction
for prediction in results['predictions']:
    print(f'{prediction["tagName"]}:\t{prediction["probability"] * 100:.2f}%')
@ -0,0 +1,5 @@

.pio
.vscode/.browse.c_cpp.db*
.vscode/c_cpp_properties.json
.vscode/launch.json
.vscode/ipch
@ -0,0 +1,7 @@

{
    // See http://go.microsoft.com/fwlink/?LinkId=827846
    // for the documentation about the extensions.json format
    "recommendations": [
        "platformio.platformio-ide"
    ]
}
@ -0,0 +1,39 @@

This directory is intended for project header files.

A header file is a file containing C declarations and macro definitions
to be shared between several project source files. You request the use of a
header file in your project source file (C, C++, etc) located in the `src` folder
by including it, with the C preprocessing directive `#include`.

```src/main.c

#include "header.h"

int main (void)
{
 ...
}
```

Including a header file produces the same results as copying the header file
into each source file that needs it. Such copying would be time-consuming
and error-prone. With a header file, the related declarations appear
in only one place. If they need to be changed, they can be changed in one
place, and programs that include the header file will automatically use the
new version when next recompiled. The header file eliminates the labor of
finding and changing all the copies as well as the risk that a failure to
find one copy will result in inconsistencies within a program.

In C, the usual convention is to give header files names that end with `.h`.
It is most portable to use only letters, digits, dashes, and underscores in
header file names, and at most one dot.

Read more about using header files in the official GCC documentation:

* Include Syntax
* Include Operation
* Once-Only Headers
* Computed Includes

https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html
@ -0,0 +1,46 @@

This directory is intended for project specific (private) libraries.
PlatformIO will compile them to static libraries and link them into the executable file.

The source code of each library should be placed in its own separate directory
("lib/your_library_name/[here are source files]").

For example, see the structure of the following two libraries `Foo` and `Bar`:

|--lib
|  |
|  |--Bar
|  |  |--docs
|  |  |--examples
|  |  |--src
|  |     |- Bar.c
|  |     |- Bar.h
|  |  |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
|  |
|  |--Foo
|  |  |- Foo.c
|  |  |- Foo.h
|  |
|  |- README --> THIS FILE
|
|- platformio.ini
|--src
   |- main.c

and the contents of `src/main.c`:

```
#include <Foo.h>
#include <Bar.h>

int main (void)
{
  ...
}

```

The PlatformIO Library Dependency Finder will automatically find dependent
libraries by scanning project source files.

More information about the PlatformIO Library Dependency Finder
- https://docs.platformio.org/page/librarymanager/ldf.html
@ -0,0 +1,26 @@

; PlatformIO Project Configuration File
;
;   Build options: build flags, source filter
;   Upload options: custom upload port, speed and extra flags
;   Library options: dependencies, extra library storages
;   Advanced options: extra scripting
;
; Please visit documentation for the other options and examples
; https://docs.platformio.org/page/projectconf.html

[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
    seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
    seeed-studio/Seeed Arduino FS @ 2.0.3
    seeed-studio/Seeed Arduino SFUD @ 2.0.1
    seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
    seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
    seeed-studio/Seeed Arduino RTC @ 2.0.0
    bblanchon/ArduinoJson @ 6.17.3
build_flags =
    -w
    -DARDUCAM_SHIELD_V2
    -DOV2640_CAM
@ -0,0 +1,160 @@

#pragma once

#include <ArduCAM.h>
#include <Wire.h>

class Camera
{
public:
    Camera(int format, int image_size) : _arducam(OV2640, PIN_SPI_SS)
    {
        _format = format;
        _image_size = image_size;
    }

    bool init()
    {
        // Reset the CPLD
        _arducam.write_reg(0x07, 0x80);
        delay(100);

        _arducam.write_reg(0x07, 0x00);
        delay(100);

        // Check that the ArduCAM SPI bus is OK
        _arducam.write_reg(ARDUCHIP_TEST1, 0x55);
        if (_arducam.read_reg(ARDUCHIP_TEST1) != 0x55)
        {
            return false;
        }

        // Change MCU mode
        _arducam.set_mode(MCU2LCD_MODE);

        uint8_t vid, pid;

        // Check that the camera module type is OV2640 - fail if the vendor ID
        // is wrong, or the product ID is neither of the OV2640 variants
        _arducam.wrSensorReg8_8(0xff, 0x01);
        _arducam.rdSensorReg8_8(OV2640_CHIPID_HIGH, &vid);
        _arducam.rdSensorReg8_8(OV2640_CHIPID_LOW, &pid);
        if ((vid != 0x26) || ((pid != 0x41) && (pid != 0x42)))
        {
            return false;
        }

        _arducam.set_format(_format);
        _arducam.InitCAM();
        _arducam.OV2640_set_JPEG_size(_image_size);
        _arducam.OV2640_set_Light_Mode(Auto);
        _arducam.OV2640_set_Special_effects(Normal);
        delay(1000);

        return true;
    }

    void startCapture()
    {
        _arducam.flush_fifo();
        _arducam.clear_fifo_flag();
        _arducam.start_capture();
    }

    bool captureReady()
    {
        return _arducam.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK);
    }

    bool readImageToBuffer(byte **buffer, uint32_t &buffer_length)
    {
        if (!captureReady()) return false;

        // Get the image file length
        uint32_t length = _arducam.read_fifo_length();
        buffer_length = length;

        if (length >= MAX_FIFO_SIZE)
        {
            return false;
        }
        if (length == 0)
        {
            return false;
        }

        // Create the buffer
        byte *buf = new byte[length];

        uint8_t temp = 0, temp_last = 0;
        int i = 0;
        uint32_t buffer_pos = 0;
        bool is_header = false;

        _arducam.CS_LOW();
        _arducam.set_fifo_burst();

        // Read the JPEG data from the FIFO
        while (length--)
        {
            temp_last = temp;
            temp = SPI.transfer(0x00);

            if ((temp == 0xD9) && (temp_last == 0xFF)) // The end of the JPEG has been found
            {
                buf[buffer_pos] = temp;

                buffer_pos++;
                i++;

                _arducam.CS_HIGH();
            }
            if (is_header == true)
            {
                // Write image data to the buffer if it is not full
                if (i < 256)
                {
                    buf[buffer_pos] = temp;
                    buffer_pos++;
                    i++;
                }
                else
                {
                    _arducam.CS_HIGH();

                    i = 0;
                    buf[buffer_pos] = temp;

                    buffer_pos++;
                    i++;

                    _arducam.CS_LOW();
                    _arducam.set_fifo_burst();
                }
            }
            else if ((temp == 0xD8) && (temp_last == 0xFF)) // The start of the JPEG has been found
            {
                is_header = true;

                buf[buffer_pos] = temp_last;
                buffer_pos++;
                i++;

                buf[buffer_pos] = temp;
                buffer_pos++;
                i++;
            }
        }

        _arducam.clear_fifo_flag();

        _arducam.set_format(_format);
        _arducam.InitCAM();
        _arducam.OV2640_set_JPEG_size(_image_size);

        // Return the buffer
        *buffer = buf;

        return true;
    }

private:
    ArduCAM _arducam;
    int _format;
    int _image_size;
};
@ -0,0 +1,11 @@

#pragma once

#include <string>

using namespace std;

// WiFi credentials
const char *SSID = "<SSID>";
const char *PASSWORD = "<PASSWORD>";

const char *PREDICTION_URL = "<PREDICTION_URL>";
@ -0,0 +1,123 @@

#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <rpcWiFi.h>
#include "SD/Seeed_SD.h"
#include <Seeed_FS.h>
#include <SPI.h>
#include <WiFiClient.h>

#include "config.h"
#include "camera.h"

Camera camera = Camera(JPEG, OV2640_640x480);

WiFiClient client;

void setupCamera()
{
    pinMode(PIN_SPI_SS, OUTPUT);
    digitalWrite(PIN_SPI_SS, HIGH);

    Wire.begin();
    SPI.begin();

    if (!camera.init())
    {
        Serial.println("Error setting up the camera!");
    }
}

void connectWiFi()
{
    while (WiFi.status() != WL_CONNECTED)
    {
        Serial.println("Connecting to WiFi..");
        WiFi.begin(SSID, PASSWORD);
        delay(500);
    }

    Serial.println("Connected!");
}

void setup()
{
    Serial.begin(9600);

    while (!Serial)
        ; // Wait for Serial to be ready

    delay(1000);

    connectWiFi();

    setupCamera();

    pinMode(WIO_KEY_C, INPUT_PULLUP);
}

void classifyImage(byte *buffer, uint32_t length)
{
    // POST the image to the classifier as an octet stream
    HTTPClient httpClient;
    httpClient.begin(client, PREDICTION_URL);
    httpClient.addHeader("Content-Type", "application/octet-stream");

    int httpResponseCode = httpClient.POST(buffer, length);

    if (httpResponseCode == 200)
    {
        String result = httpClient.getString();

        // Decode the JSON response and print each tag with its probability
        DynamicJsonDocument doc(1024);
        deserializeJson(doc, result.c_str());

        JsonObject obj = doc.as<JsonObject>();
        JsonArray predictions = obj["predictions"].as<JsonArray>();

        for (JsonVariant prediction : predictions)
        {
            String tag = prediction["tagName"].as<String>();
            float probability = prediction["probability"].as<float>();

            char buff[32];
            sprintf(buff, "%s:\t%.2f%%", tag.c_str(), probability * 100.0);
            Serial.println(buff);
        }
    }

    httpClient.end();
}

void buttonPressed()
{
    camera.startCapture();

    while (!camera.captureReady())
        delay(100);

    Serial.println("Image captured");

    byte *buffer;
    uint32_t length;

    if (camera.readImageToBuffer(&buffer, length))
    {
        Serial.print("Image read to buffer with length ");
        Serial.println(length);

        classifyImage(buffer, length);

        delete[] buffer; // The buffer was allocated with new[], so release it with delete[]
    }
}

void loop()
{
    if (digitalRead(WIO_KEY_C) == LOW)
    {
        buttonPressed();
        delay(2000);
    }

    delay(200);
}
@ -0,0 +1,11 @@

This directory is intended for PlatformIO Unit Testing and project tests.

Unit Testing is a software testing method by which individual units of
source code, sets of one or more MCU program modules together with associated
control data, usage procedures, and operating procedures, are tested to
determine whether they are fit for use. Unit testing finds problems early
in the development cycle.

More information about PlatformIO Unit Testing:
- https://docs.platformio.org/page/plus/unit-testing.html
@ -0,0 +1,66 @@

{
    "content": {
        "modulesContent": {
            "$edgeAgent": {
                "properties.desired": {
                    "schemaVersion": "1.1",
                    "runtime": {
                        "type": "docker",
                        "settings": {
                            "minDockerVersion": "v1.25",
                            "loggingOptions": "",
                            "registryCredentials": {
                                "ClassifierRegistry": {
                                    "username": "<Container registry name>",
                                    "password": "<Container Password>",
                                    "address": "<Container registry name>.azurecr.io"
                                }
                            }
                        }
                    },
                    "systemModules": {
                        "edgeAgent": {
                            "type": "docker",
                            "settings": {
                                "image": "mcr.microsoft.com/azureiotedge-agent:1.1",
                                "createOptions": "{}"
                            }
                        },
                        "edgeHub": {
                            "type": "docker",
                            "status": "running",
                            "restartPolicy": "always",
                            "settings": {
                                "image": "mcr.microsoft.com/azureiotedge-hub:1.1",
                                "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
                            }
                        }
                    },
                    "modules": {
                        "ImageClassifier": {
                            "version": "1.0",
                            "type": "docker",
                            "status": "running",
                            "restartPolicy": "always",
                            "settings": {
                                "image": "<Container registry name>.azurecr.io/classifier:v1",
                                "createOptions": "{\"ExposedPorts\": {\"80/tcp\": {}},\"HostConfig\": {\"PortBindings\": {\"80/tcp\": [{\"HostPort\": \"80\"}]}}}"
                            }
                        }
                    }
                }
            },
            "$edgeHub": {
                "properties.desired": {
                    "schemaVersion": "1.1",
                    "routes": {
                        "upstream": "FROM /messages/* INTO $upstream"
                    },
                    "storeAndForwardConfiguration": {
                        "timeToLiveSecs": 7200
                    }
                }
            }
        }
    }
}
@ -0,0 +1,54 @@

# Classify an image using an IoT Edge based image classifier - Virtual IoT Hardware and Raspberry Pi

In this part of the lesson, you will use the image classifier running on the IoT Edge device.

## Use the IoT Edge classifier

The IoT device can be redirected to use the IoT Edge image classifier. The URL for the image classifier is `http://<IP address or name>/image`, replacing `<IP address or name>` with the IP address or host name of the computer running IoT Edge.

The Python library for Custom Vision only works with cloud-hosted models, not models hosted on IoT Edge. This means you will need to use the REST API to call the classifier.

### Task - use the IoT Edge classifier

1. Open the `fruit-quality-detector` project in VS Code if it is not already open. If you are using a virtual IoT device, then make sure the virtual environment is activated.

1. Open the `app.py` file, and remove the import statements from `azure.cognitiveservices.vision.customvision.prediction` and `msrest.authentication`.

1. Add the following import at the top of the file:

    ```python
    import requests
    ```

1. Delete all the code after the image is saved to a file, from `image_file.write(image.read())` to the end of the file.

1. Add the following code to the end of the file:

    ```python
    prediction_url = '<URL>'
    headers = {
        'Content-Type' : 'application/octet-stream'
    }
    image.seek(0)
    response = requests.post(prediction_url, headers=headers, data=image)
    results = response.json()

    for prediction in results['predictions']:
        print(f'{prediction["tagName"]}:\t{prediction["probability"] * 100:.2f}%')
    ```

    Replace `<URL>` with the URL for your classifier.

    This code makes a REST POST request to the classifier, sending the image as the body of the request. The results come back as JSON, which is decoded to print out the probabilities.

1. Run your code, with your camera pointing at some fruit, an appropriate image set, or fruit visible on your webcam if you are using virtual IoT hardware. You will see the output in the console:

    ```output
    (.venv) ➜  fruit-quality-detector python app.py
    ripe:   56.84%
    unripe: 43.16%
    ```

> 💁 You can find this code in the [code-classify/pi](code-classify/pi) or [code-classify/virtual-iot-device](code-classify/virtual-iot-device) folder.

😀 Your fruit quality classifier program was a success!
@ -0,0 +1,52 @@

# Classify an image using an IoT Edge based image classifier - Wio Terminal

In this part of the lesson, you will use the image classifier running on the IoT Edge device.

## Use the IoT Edge classifier

The IoT device can be redirected to use the IoT Edge image classifier. The URL for the image classifier is `http://<IP address or name>/image`, replacing `<IP address or name>` with the IP address or host name of the computer running IoT Edge.

### Task - use the IoT Edge classifier

1. Open the `fruit-quality-detector` app project if it's not already open.

1. The image classifier is running as a REST API using HTTP, not HTTPS, so the call needs to use a WiFi client that works with HTTP calls only. This means the certificate is not needed. Delete the `CERTIFICATE` from the `config.h` file.

1. The prediction URL in the `config.h` file needs to be updated to the new URL. You can also delete the `PREDICTION_KEY`, as this is not needed.

    ```cpp
    const char *PREDICTION_URL = "<URL>";
    ```

    Replace `<URL>` with the URL for your classifier.

1. In `main.cpp`, change the include directive for the secure WiFi client to import the standard HTTP version:

    ```cpp
    #include <WiFiClient.h>
    ```

1. Change the declaration of `WiFiClient` to be the HTTP version:

    ```cpp
    WiFiClient client;
    ```

1. In the `connectWiFi` function, remove the line `client.setCACert(CERTIFICATE);` that sets the certificate on the WiFi client.

1. In the `classifyImage` function, remove the `httpClient.addHeader("Prediction-Key", PREDICTION_KEY);` line that sets the prediction key in the header.

1. Upload and run your code. Point the camera at some fruit and press the C button. You will see the output in the serial monitor:

    ```output
    Connecting to WiFi..
    Connected!
    Image captured
    Image read to buffer with length 8200
    ripe:   56.84%
    unripe: 43.16%
    ```

> 💁 You can find this code in the [code-classify/wio-terminal](code-classify/wio-terminal) folder.

😀 Your fruit quality classifier program was a success!
@ -1,33 +1,173 @@

# Check stock from an IoT device

![]()

## Pre-lecture quiz

[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/39)

## Introduction

In the previous lesson you learned about the different uses of object detection in retail. You also learned how to train an object detector to identify stock. In this lesson you will learn how to use your object detector from your IoT device to count stock.

In this lesson we'll cover:

* [Stock counting](#stock-counting)
* [Call your object detector from your IoT device](#call-your-object-detector-from-your-iot-device)
* [Bounding boxes](#bounding-boxes)
* [Retrain the model](#retrain-the-model)
* [Count stock](#count-stock)

> 🗑 This is the last lesson in this project, so after completing this lesson and the assignment, don't forget to clean up your cloud services. You will need the services to complete the assignment, so make sure to complete that first.
>
> Refer to [the clean up your project guide](../../../clean-up.md) if necessary for instructions on how to do this.

## Stock counting

Object detectors can be used for stock checking, either counting stock or ensuring stock is where it should be. IoT devices with cameras can be deployed all around the store to monitor stock, starting with hot spots where having items restocked is important, such as areas where small numbers of high-value items are stocked.

For example, if a camera is pointing at a set of shelves that can hold 8 cans of tomato paste, and an object detector only detects 7 cans, then one is missing and needs to be restocked.

![]()

In the above image, an object detector has detected 7 cans of tomato paste on a shelf that can hold 8 cans. Not only can the IoT device send a notification of the need to restock, but it can even give an indication of the location of the missing item, important data if you are using robots to restock shelves.

> 💁 Depending on the store and the popularity of the item, restocking probably wouldn't happen if only 1 can was missing. You would need to build an algorithm that determines when to restock based on your produce, customers and other criteria.
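For instance, a minimal sketch of such a restocking rule, with hypothetical names (`expected_count` and `restock_threshold` are assumptions, not part of this lesson's code):

```python
# A hypothetical restocking rule: only raise an alert when enough
# items are missing to justify sending someone to restock.
def needs_restock(detected_count, expected_count, restock_threshold=2):
    missing = expected_count - detected_count
    return missing >= restock_threshold

print(needs_restock(detected_count=7, expected_count=8))  # False - only 1 can missing
print(needs_restock(detected_count=5, expected_count=8))  # True - 3 cans missing
```

A real rule would also factor in things like how fast the item sells and how costly a restocking trip is.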
✅ In what other scenarios could you combine object detection and robots?

Sometimes the wrong stock can be on the shelves. This could be human error when restocking, or customers changing their mind on a purchase and putting an item back in the first available space. When this is a non-perishable item such as canned goods, this is an annoyance. If it is a perishable item such as frozen or chilled goods, this can mean that the product can no longer be sold, as it might be impossible to tell how long the item was out of the freezer.

Object detection can be used to detect unexpected items, again alerting a human or robot to return the item as soon as it is detected.

![]()

In the above image, a can of baby corn has been put on the shelf next to the tomato paste. The object detector has detected this, allowing the IoT device to notify a human or robot to return the can to its correct location.

## Call your object detector from your IoT device

The object detector you trained in the last lesson can be called from your IoT device.

### Task - publish an iteration of your object detector

Iterations are published from the Custom Vision portal.

1. Launch the Custom Vision portal at [CustomVision.ai](https://customvision.ai) and sign in if you don't have it open already. Then open your `stock-detector` project.

1. Select the **Performance** tab from the options at the top.

1. Select the latest iteration from the *Iterations* list on the side.

1. Select the **Publish** button for the iteration.

    ![]()

1. In the *Publish Model* dialog, set the *Prediction resource* to the `stock-detector-prediction` resource you created in the last lesson. Leave the name as `Iteration2`, and select the **Publish** button.

1. Once published, select the **Prediction URL** button. This will show details of the prediction API, and you will need these to call the model from your IoT device. The lower section is labelled *If you have an image file*, and these are the details you want. Take a copy of the URL that is shown, which will be something like:

    ```output
    https://<location>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<id>/detect/iterations/Iteration2/image
    ```

    Where `<location>` will be the location you used when creating your custom vision resource, and `<id>` will be a long ID made up of letters and numbers.

    Also take a copy of the *Prediction-Key* value. This is a secure key that you have to pass when you call the model. Only applications that pass this key are allowed to use the model; any other applications are rejected.

    ![]()
✅ When a new iteration is published, it will have a different name. How do you think you would change the iteration an IoT device is using?

### Task - call your object detector from your IoT device

Follow the relevant guide below to use the object detector from your IoT device:

* [Arduino - Wio Terminal](wio-terminal-object-detector.md)
* [Single-board computer - Raspberry Pi/Virtual device](single-board-computer-object-detector.md)

## Bounding boxes

When you use the object detector, you not only get back the detected objects with their tags and probabilities, but you also get the bounding boxes of the objects. These define where the object detector detected the object with the given probability.

> 💁 A bounding box is a box that defines the area that contains the detected object - a box that defines the boundary for the object.

The results of a prediction in the **Predictions** tab in Custom Vision have the bounding boxes drawn on the image that was sent for prediction.

![]()

In the image above, 4 cans of tomato paste were detected. In the results, a red square is overlaid for each object that was detected in the image, indicating its bounding box.

✅ Open the predictions in Custom Vision and check out the bounding boxes.

Bounding boxes are defined with 4 values - top, left, height and width. These values are on a scale of 0-1, representing the positions as a percentage of the size of the image. The origin (the 0,0 position) is the top left of the image, so the top value is the distance from the top, and the bottom of the bounding box is the top plus the height.

![]()

The above image is 600 pixels wide and 800 pixels tall. The bounding box starts 320 pixels down, giving a top coordinate of 0.4 (800 x 0.4 = 320). From the left, the bounding box starts 240 pixels across, giving a left coordinate of 0.4 (600 x 0.4 = 240). The height of the bounding box is 240 pixels, giving a height value of 0.3 (800 x 0.3 = 240). The width of the bounding box is 120 pixels, giving a width value of 0.2 (600 x 0.2 = 120).

| Coordinate | Value |
| ---------- | ----: |
| Top        |   0.4 |
| Left       |   0.4 |
| Height     |   0.3 |
| Width      |   0.2 |

Using percentage values from 0-1 means that no matter what size the image is scaled to, the bounding box starts 0.4 of the way along and down, and is 0.3 of the height and 0.2 of the width.
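The same arithmetic can go the other way: given the normalized values from a prediction, you can recover pixel coordinates for any image size. A minimal sketch of this conversion (the bounding box values are assumed to follow the Custom Vision convention described above):

```python
# Convert a normalized bounding box (top, left, height, width on a 0-1 scale)
# to pixel coordinates for a given image size.
def to_pixels(top, left, height, width, image_width, image_height):
    px_left = left * image_width
    px_top = top * image_height
    px_right = (left + width) * image_width
    px_bottom = (top + height) * image_height
    return px_left, px_top, px_right, px_bottom

# The worked example above: a 600 x 800 pixel image
print(to_pixels(top=0.4, left=0.4, height=0.3, width=0.2,
                image_width=600, image_height=800))
# (240.0, 320.0, 360.0, 560.0)
```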
You can use bounding boxes combined with probabilities to evaluate how accurate a detection is. For example, an object detector can detect multiple objects that overlap, such as one can detected inside another. Your code could look at the bounding boxes, understand that this is impossible, and ignore any objects that have a significant overlap with other objects.

![]()

In the example above, one bounding box indicated a predicted can of tomato paste at 78.3%. A second bounding box is slightly smaller, and sits inside the first bounding box with a probability of 64.3%. Your code can check the bounding boxes, see that they overlap completely, and ignore the lower probability, as one can cannot be inside another.
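One way to check for this in code, and the approach used in the stock-counting code later in this lesson, is to turn each bounding box into a polygon with the `shapely` library and compare the intersection area to the area of the smaller box. A minimal sketch, with made-up box values standing in for the two detections above:

```python
from shapely.geometry import Polygon

# Build a rectangle from a normalized bounding box (left, top, width, height)
def box_to_polygon(left, top, width, height):
    right = left + width
    bottom = top + height
    return Polygon([(left, top), (right, top), (right, bottom), (left, bottom)])

outer = box_to_polygon(0.40, 0.40, 0.20, 0.30)  # the 78.3% detection
inner = box_to_polygon(0.45, 0.45, 0.10, 0.15)  # the 64.3% detection, inside the first

overlap = outer.intersection(inner).area
smallest_area = min(outer.area, inner.area)

# If the overlap is a significant fraction of the smaller box,
# discard the lower probability detection
if overlap > 0.002 * smallest_area:
    print('Overlapping detections - ignore the lower probability one')
```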
✅ Can you think of a situation where it is valid to detect one object inside another?

## Retrain the model

Like with the image classifier, you can retrain your model using data captured by your IoT device. Using this real-world data will ensure your model works well when used from your IoT device.

Unlike with the image classifier, you can't just tag an image. Instead you need to review every bounding box detected by the model. If the box is around the wrong thing, it needs to be deleted; if it is in the wrong location, it needs to be adjusted.

### Task - retrain the model

1. Make sure you have captured a range of images using your IoT device.

1. From the **Predictions** tab, select an image. You will see red boxes indicating the bounding boxes of the detected objects.

1. Work through each bounding box. Select it first and you will see a pop-up showing the tag. Use the handles on the corners of the bounding box to adjust the size if necessary. If the tag is wrong, remove it with the **X** button and add the correct tag. If the bounding box doesn't contain an object, delete it with the trashcan button.

1. Close the editor when done and the image will move from the **Predictions** tab to the **Training Images** tab. Repeat the process for all the predictions.

1. Use the **Train** button to re-train your model. Once it has trained, publish the iteration and update your IoT device to use the URL of the new iteration.

1. Re-deploy your code and test your IoT device.

## Count stock

Using a combination of the number of objects detected and the bounding boxes, you can count the stock on a shelf.

### Task - count stock

Follow the relevant guide below to count stock using the results from the object detector from your IoT device:

* [Arduino - Wio Terminal](wio-terminal-count-stock.md)
* [Single-board computer - Raspberry Pi/Virtual device](single-board-computer-count-stock.md)

---

## 🚀 Challenge

Can you detect incorrect stock? Train your model on multiple objects, then update your app to alert you if the wrong stock is detected.

Maybe even take this further and detect stock side by side on the same shelf, and see if something has been put in the wrong place by defining limits on the bounding boxes.

## Post-lecture quiz

[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/40)

## Review & Self Study

* Learn more about how to architect an end-to-end stock detection system from the [Out of stock detection at the edge pattern guide on Microsoft Docs](https://docs.microsoft.com/hybrid/app-solutions/pattern-out-of-stock-at-edge?WT.mc_id=academic-17441-jabenn)
* Learn other ways to build end-to-end retail solutions combining a range of IoT and cloud services by watching this [Behind the scenes of a retail solution - Hands On! video on YouTube](https://www.youtube.com/watch?v=m3Pc300x2Mw).

## Assignment

[Use your object detector on the edge](assignment.md)
@ -1,9 +1,11 @@

# Use your object detector on the edge

## Instructions

In the last project, you deployed your image classifier to the edge. Do the same with your object detector: export it as a compact model, run it on the edge, and access the edge version from your IoT device.

## Rubric

| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Deploy your object detector to the edge | Was able to use the correct compact domain, export the object detector, and run it on the edge | Was able to use the correct compact domain and export the object detector, but was unable to run it on the edge | Was unable to use the correct compact domain, export the object detector, and run it on the edge |
@ -0,0 +1,92 @@

import io
import time
from picamera import PiCamera

from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

from PIL import Image, ImageDraw, ImageColor
from shapely.geometry import Polygon

camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0

time.sleep(2)

# Capture an image into an in-memory buffer and save it to disk
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)

with open('image.jpg', 'wb') as image_file:
    image_file.write(image.read())

prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'

# Extract the endpoint, project ID and iteration name from the prediction URL
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]

prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)

image.seek(0)
results = predictor.detect_image(project_id, iteration_name, image)

# Keep only the predictions above the probability threshold
threshold = 0.3

predictions = list(prediction for prediction in results.predictions if prediction.probability > threshold)

for prediction in predictions:
    print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')

overlap_threshold = 0.002

def create_polygon(prediction):
    scale_left = prediction.bounding_box.left
    scale_top = prediction.bounding_box.top
    scale_right = prediction.bounding_box.left + prediction.bounding_box.width
    scale_bottom = prediction.bounding_box.top + prediction.bounding_box.height

    return Polygon([(scale_left, scale_top), (scale_right, scale_top), (scale_right, scale_bottom), (scale_left, scale_bottom)])

# Remove predictions that significantly overlap another prediction
to_delete = []

for i in range(0, len(predictions)):
    polygon_1 = create_polygon(predictions[i])

    for j in range(i + 1, len(predictions)):
        polygon_2 = create_polygon(predictions[j])
        overlap = polygon_1.intersection(polygon_2).area

        smallest_area = min(polygon_1.area, polygon_2.area)

        if overlap > (overlap_threshold * smallest_area):
            to_delete.append(predictions[i])
            break

for d in to_delete:
    predictions.remove(d)

print(f'Counted {len(predictions)} stock items')

# Draw the bounding boxes on the saved image
with Image.open('image.jpg') as im:
    draw = ImageDraw.Draw(im)

    for prediction in predictions:
        scale_left = prediction.bounding_box.left
        scale_top = prediction.bounding_box.top
        scale_right = prediction.bounding_box.left + prediction.bounding_box.width
        scale_bottom = prediction.bounding_box.top + prediction.bounding_box.height

        left = scale_left * im.width
        top = scale_top * im.height
        right = scale_right * im.width
        bottom = scale_bottom * im.height

        draw.rectangle([left, top, right, bottom], outline=ImageColor.getrgb('red'), width=2)

    im.save('image.jpg')
@ -0,0 +1,92 @@

from counterfit_connection import CounterFitConnection
CounterFitConnection.init('127.0.0.1', 5000)

import io
from counterfit_shims_picamera import PiCamera

from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

from PIL import Image, ImageDraw, ImageColor
from shapely.geometry import Polygon

camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0

# Capture an image into an in-memory buffer and save it to disk
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)

with open('image.jpg', 'wb') as image_file:
    image_file.write(image.read())

prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'

# Extract the endpoint, project ID and iteration name from the prediction URL
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]

prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)

image.seek(0)
results = predictor.detect_image(project_id, iteration_name, image)

# Keep only the predictions above the probability threshold
threshold = 0.3

predictions = list(prediction for prediction in results.predictions if prediction.probability > threshold)

for prediction in predictions:
    print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')

overlap_threshold = 0.002

def create_polygon(prediction):
    scale_left = prediction.bounding_box.left
    scale_top = prediction.bounding_box.top
    scale_right = prediction.bounding_box.left + prediction.bounding_box.width
    scale_bottom = prediction.bounding_box.top + prediction.bounding_box.height

    return Polygon([(scale_left, scale_top), (scale_right, scale_top), (scale_right, scale_bottom), (scale_left, scale_bottom)])

# Remove predictions that significantly overlap another prediction
to_delete = []

for i in range(0, len(predictions)):
    polygon_1 = create_polygon(predictions[i])

    for j in range(i + 1, len(predictions)):
        polygon_2 = create_polygon(predictions[j])
        overlap = polygon_1.intersection(polygon_2).area

        smallest_area = min(polygon_1.area, polygon_2.area)

        if overlap > (overlap_threshold * smallest_area):
            to_delete.append(predictions[i])
            break

for d in to_delete:
    predictions.remove(d)

print(f'Counted {len(predictions)} stock items')

# Draw the bounding boxes on the saved image
with Image.open('image.jpg') as im:
    draw = ImageDraw.Draw(im)

    for prediction in predictions:
        scale_left = prediction.bounding_box.left
        scale_top = prediction.bounding_box.top
        scale_right = prediction.bounding_box.left + prediction.bounding_box.width
        scale_bottom = prediction.bounding_box.top + prediction.bounding_box.height

        left = scale_left * im.width
        top = scale_top * im.height
        right = scale_right * im.width
        bottom = scale_bottom * im.height

        draw.rectangle([left, top, right, bottom], outline=ImageColor.getrgb('red'), width=2)

    im.save('image.jpg')
@ -0,0 +1,5 @@

.pio
.vscode/.browse.c_cpp.db*
.vscode/c_cpp_properties.json
.vscode/launch.json
.vscode/ipch
@ -0,0 +1,7 @@

{
    // See http://go.microsoft.com/fwlink/?LinkId=827846
    // for the documentation about the extensions.json format
    "recommendations": [
        "platformio.platformio-ide"
    ]
}
@ -0,0 +1,39 @@

This directory is intended for project header files.

A header file is a file containing C declarations and macro definitions
to be shared between several project source files. You request the use of a
header file in your project source file (C, C++, etc) located in the `src` folder
by including it, with the C preprocessing directive `#include`.

```src/main.c

#include "header.h"

int main (void)
{
 ...
}
```

Including a header file produces the same results as copying the header file
into each source file that needs it. Such copying would be time-consuming
and error-prone. With a header file, the related declarations appear
in only one place. If they need to be changed, they can be changed in one
place, and programs that include the header file will automatically use the
new version when next recompiled. The header file eliminates the labor of
finding and changing all the copies as well as the risk that a failure to
find one copy will result in inconsistencies within a program.

In C, the usual convention is to give header files names that end with `.h`.
It is most portable to use only letters, digits, dashes, and underscores in
header file names, and at most one dot.

Read more about using header files in the official GCC documentation:

* Include Syntax
* Include Operation
* Once-Only Headers
* Computed Includes

https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html
@ -0,0 +1,46 @@

This directory is intended for project specific (private) libraries.
PlatformIO will compile them to static libraries and link them into the executable file.

The source code of each library should be placed in its own separate directory
("lib/your_library_name/[here are source files]").

For example, see the structure of the following two libraries `Foo` and `Bar`:

|--lib
|  |
|  |--Bar
|  |  |--docs
|  |  |--examples
|  |  |--src
|  |     |- Bar.c
|  |     |- Bar.h
|  |  |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
|  |
|  |--Foo
|  |  |- Foo.c
|  |  |- Foo.h
|  |
|  |- README --> THIS FILE
|
|- platformio.ini
|--src
   |- main.c

and the contents of `src/main.c`:

```
#include <Foo.h>
#include <Bar.h>

int main (void)
{
  ...
}

```

The PlatformIO Library Dependency Finder will automatically find dependent
libraries by scanning project source files.

More information about the PlatformIO Library Dependency Finder
- https://docs.platformio.org/page/librarymanager/ldf.html
@ -0,0 +1,26 @@

; PlatformIO Project Configuration File
;
;   Build options: build flags, source filter
;   Upload options: custom upload port, speed and extra flags
;   Library options: dependencies, extra library storages
;   Advanced options: extra scripting
;
; Please visit documentation for the other options and examples
; https://docs.platformio.org/page/projectconf.html

[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
    seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
    seeed-studio/Seeed Arduino FS @ 2.0.3
    seeed-studio/Seeed Arduino SFUD @ 2.0.1
    seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
    seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
    seeed-studio/Seeed Arduino RTC @ 2.0.0
    bblanchon/ArduinoJson @ 6.17.3
build_flags =
    -w
    -DARDUCAM_SHIELD_V2
    -DOV2640_CAM
@ -0,0 +1,160 @@

#pragma once

#include <ArduCAM.h>
#include <Wire.h>

class Camera
{
public:
    Camera(int format, int image_size) : _arducam(OV2640, PIN_SPI_SS)
    {
        _format = format;
        _image_size = image_size;
    }

    bool init()
    {
        // Reset the CPLD
        _arducam.write_reg(0x07, 0x80);
        delay(100);

        _arducam.write_reg(0x07, 0x00);
        delay(100);

        // Check that the ArduCAM SPI bus is OK
        _arducam.write_reg(ARDUCHIP_TEST1, 0x55);
        if (_arducam.read_reg(ARDUCHIP_TEST1) != 0x55)
        {
            return false;
        }

        // Change MCU mode
        _arducam.set_mode(MCU2LCD_MODE);

        uint8_t vid, pid;

        // Check that the camera module type is OV2640 - fail if the vendor ID
        // is wrong, or the product ID is neither of the OV2640 variants
        _arducam.wrSensorReg8_8(0xff, 0x01);
        _arducam.rdSensorReg8_8(OV2640_CHIPID_HIGH, &vid);
        _arducam.rdSensorReg8_8(OV2640_CHIPID_LOW, &pid);
        if ((vid != 0x26) || ((pid != 0x41) && (pid != 0x42)))
        {
            return false;
        }

        _arducam.set_format(_format);
        _arducam.InitCAM();
        _arducam.OV2640_set_JPEG_size(_image_size);
        _arducam.OV2640_set_Light_Mode(Auto);
        _arducam.OV2640_set_Special_effects(Normal);
        delay(1000);

        return true;
    }

    void startCapture()
    {
        _arducam.flush_fifo();
        _arducam.clear_fifo_flag();
        _arducam.start_capture();
    }

    bool captureReady()
    {
        return _arducam.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK);
    }

    bool readImageToBuffer(byte **buffer, uint32_t &buffer_length)
    {
        if (!captureReady()) return false;

        // Get the image file length
        uint32_t length = _arducam.read_fifo_length();
        buffer_length = length;

        if (length >= MAX_FIFO_SIZE)
        {
            return false;
        }
        if (length == 0)
        {
            return false;
        }

        // Create the buffer
        byte *buf = new byte[length];

        uint8_t temp = 0, temp_last = 0;
        int i = 0;
        uint32_t buffer_pos = 0;
        bool is_header = false;

        _arducam.CS_LOW();
        _arducam.set_fifo_burst();

        // Read the JPEG data from the FIFO
        while (length--)
        {
            temp_last = temp;
            temp = SPI.transfer(0x00);

            if ((temp == 0xD9) && (temp_last == 0xFF)) // The end of the JPEG has been found
            {
                buf[buffer_pos] = temp;

                buffer_pos++;
                i++;

                _arducam.CS_HIGH();
            }
            if (is_header == true)
            {
                // Write image data to the buffer if it is not full
                if (i < 256)
                {
                    buf[buffer_pos] = temp;
                    buffer_pos++;
                    i++;
                }
                else
                {
                    _arducam.CS_HIGH();

                    i = 0;
                    buf[buffer_pos] = temp;

                    buffer_pos++;
                    i++;

                    _arducam.CS_LOW();
                    _arducam.set_fifo_burst();
                }
            }
            else if ((temp == 0xD8) && (temp_last == 0xFF)) // The start of the JPEG has been found
            {
                is_header = true;

                buf[buffer_pos] = temp_last;
                buffer_pos++;
                i++;

                buf[buffer_pos] = temp;
                buffer_pos++;
                i++;
            }
        }

        _arducam.clear_fifo_flag();

        _arducam.set_format(_format);
        _arducam.InitCAM();
        _arducam.OV2640_set_JPEG_size(_image_size);

        // Return the buffer
        *buffer = buf;

        return true;
    }

private:
    ArduCAM _arducam;
    int _format;
    int _image_size;
};
@ -0,0 +1,49 @@
#pragma once

#include <string>

using namespace std;

// WiFi credentials
const char *SSID = "<SSID>";
const char *PASSWORD = "<PASSWORD>";

const char *PREDICTION_URL = "<PREDICTION_URL>";
const char *PREDICTION_KEY = "<PREDICTION_KEY>";

// Microsoft Azure DigiCert Global Root G2 global certificate
const char *CERTIFICATE =
    "-----BEGIN CERTIFICATE-----\r\n"
    "MIIF8zCCBNugAwIBAgIQAueRcfuAIek/4tmDg0xQwDANBgkqhkiG9w0BAQwFADBh\r\n"
    "MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\r\n"
    "d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH\r\n"
    "MjAeFw0yMDA3MjkxMjMwMDBaFw0yNDA2MjcyMzU5NTlaMFkxCzAJBgNVBAYTAlVT\r\n"
    "MR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKjAoBgNVBAMTIU1pY3Jv\r\n"
    "c29mdCBBenVyZSBUTFMgSXNzdWluZyBDQSAwNjCCAiIwDQYJKoZIhvcNAQEBBQAD\r\n"
    "ggIPADCCAgoCggIBALVGARl56bx3KBUSGuPc4H5uoNFkFH4e7pvTCxRi4j/+z+Xb\r\n"
    "wjEz+5CipDOqjx9/jWjskL5dk7PaQkzItidsAAnDCW1leZBOIi68Lff1bjTeZgMY\r\n"
    "iwdRd3Y39b/lcGpiuP2d23W95YHkMMT8IlWosYIX0f4kYb62rphyfnAjYb/4Od99\r\n"
    "ThnhlAxGtfvSbXcBVIKCYfZgqRvV+5lReUnd1aNjRYVzPOoifgSx2fRyy1+pO1Uz\r\n"
    "aMMNnIOE71bVYW0A1hr19w7kOb0KkJXoALTDDj1ukUEDqQuBfBxReL5mXiu1O7WG\r\n"
    "0vltg0VZ/SZzctBsdBlx1BkmWYBW261KZgBivrql5ELTKKd8qgtHcLQA5fl6JB0Q\r\n"
    "gs5XDaWehN86Gps5JW8ArjGtjcWAIP+X8CQaWfaCnuRm6Bk/03PQWhgdi84qwA0s\r\n"
    "sRfFJwHUPTNSnE8EiGVk2frt0u8PG1pwSQsFuNJfcYIHEv1vOzP7uEOuDydsmCjh\r\n"
    "lxuoK2n5/2aVR3BMTu+p4+gl8alXoBycyLmj3J/PUgqD8SL5fTCUegGsdia/Sa60\r\n"
    "N2oV7vQ17wjMN+LXa2rjj/b4ZlZgXVojDmAjDwIRdDUujQu0RVsJqFLMzSIHpp2C\r\n"
    "Zp7mIoLrySay2YYBu7SiNwL95X6He2kS8eefBBHjzwW/9FxGqry57i71c2cDAgMB\r\n"
    "AAGjggGtMIIBqTAdBgNVHQ4EFgQU1cFnOsKjnfR3UltZEjgp5lVou6UwHwYDVR0j\r\n"
    "BBgwFoAUTiJUIBiV5uNu5g/6+rkS7QYXjzkwDgYDVR0PAQH/BAQDAgGGMB0GA1Ud\r\n"
    "JQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMHYG\r\n"
    "CCsGAQUFBwEBBGowaDAkBggrBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQu\r\n"
    "Y29tMEAGCCsGAQUFBzAChjRodHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGln\r\n"
    "aUNlcnRHbG9iYWxSb290RzIuY3J0MHsGA1UdHwR0MHIwN6A1oDOGMWh0dHA6Ly9j\r\n"
    "cmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5jcmwwN6A1oDOG\r\n"
    "MWh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5j\r\n"
    "cmwwHQYDVR0gBBYwFDAIBgZngQwBAgEwCAYGZ4EMAQICMBAGCSsGAQQBgjcVAQQD\r\n"
    "AgEAMA0GCSqGSIb3DQEBDAUAA4IBAQB2oWc93fB8esci/8esixj++N22meiGDjgF\r\n"
    "+rA2LUK5IOQOgcUSTGKSqF9lYfAxPjrqPjDCUPHCURv+26ad5P/BYtXtbmtxJWu+\r\n"
    "cS5BhMDPPeG3oPZwXRHBJFAkY4O4AF7RIAAUW6EzDflUoDHKv83zOiPfYGcpHc9s\r\n"
    "kxAInCedk7QSgXvMARjjOqdakor21DTmNIUotxo8kHv5hwRlGhBJwps6fEVi1Bt0\r\n"
    "trpM/3wYxlr473WSPUFZPgP1j519kLpWOJ8z09wxay+Br29irPcBYv0GMXlHqThy\r\n"
    "8y4m/HyTQeI2IMvMrQnwqPpY+rLIXyviI2vLoI+4xKE4Rn38ZZ8m\r\n"
    "-----END CERTIFICATE-----\r\n";
@ -0,0 +1,223 @@
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <rpcWiFi.h>
#include "SD/Seeed_SD.h"
#include <Seeed_FS.h>
#include <SPI.h>
#include <vector>
#include <WiFiClientSecure.h>

#include "config.h"
#include "camera.h"

Camera camera = Camera(JPEG, OV2640_640x480);

WiFiClientSecure client;

void setupCamera()
{
    pinMode(PIN_SPI_SS, OUTPUT);
    digitalWrite(PIN_SPI_SS, HIGH);

    Wire.begin();
    SPI.begin();

    if (!camera.init())
    {
        Serial.println("Error setting up the camera!");
    }
}

void connectWiFi()
{
    while (WiFi.status() != WL_CONNECTED)
    {
        Serial.println("Connecting to WiFi..");
        WiFi.begin(SSID, PASSWORD);
        delay(500);
    }

    client.setCACert(CERTIFICATE);
    Serial.println("Connected!");
}

void setup()
{
    Serial.begin(9600);

    while (!Serial)
        ; // Wait for Serial to be ready

    delay(1000);

    connectWiFi();

    setupCamera();

    pinMode(WIO_KEY_C, INPUT_PULLUP);
}

const float threshold = 0.0f;
const float overlap_threshold = 0.20f;

struct Point {
    float x, y;
};

struct Rect {
    Point topLeft, bottomRight;
};

float area(Rect rect)
{
    return abs(rect.bottomRight.x - rect.topLeft.x) * abs(rect.bottomRight.y - rect.topLeft.y);
}

float overlappingArea(Rect rect1, Rect rect2)
{
    float left = max(rect1.topLeft.x, rect2.topLeft.x);
    float right = min(rect1.bottomRight.x, rect2.bottomRight.x);
    float top = max(rect1.topLeft.y, rect2.topLeft.y);
    float bottom = min(rect1.bottomRight.y, rect2.bottomRight.y);

    if (right > left && bottom > top)
    {
        return (right - left) * (bottom - top);
    }

    return 0.0f;
}

Rect rectFromBoundingBox(JsonVariant prediction)
{
    JsonObject bounding_box = prediction["boundingBox"].as<JsonObject>();

    float left = bounding_box["left"].as<float>();
    float top = bounding_box["top"].as<float>();
    float width = bounding_box["width"].as<float>();
    float height = bounding_box["height"].as<float>();

    Point topLeft = {left, top};
    Point bottomRight = {left + width, top + height};

    return {topLeft, bottomRight};
}

void processPredictions(std::vector<JsonVariant> &predictions)
{
    std::vector<JsonVariant> passed_predictions;

    for (int i = 0; i < predictions.size(); ++i)
    {
        Rect prediction_1_rect = rectFromBoundingBox(predictions[i]);
        float prediction_1_area = area(prediction_1_rect);
        bool passed = true;

        for (int j = i + 1; j < predictions.size(); ++j)
        {
            Rect prediction_2_rect = rectFromBoundingBox(predictions[j]);
            float prediction_2_area = area(prediction_2_rect);

            float overlap = overlappingArea(prediction_1_rect, prediction_2_rect);
            float smallest_area = min(prediction_1_area, prediction_2_area);

            // Ignore this prediction if it overlaps another prediction by more
            // than the threshold percentage of the smallest bounding box
            if (overlap > (overlap_threshold * smallest_area))
            {
                passed = false;
                break;
            }
        }

        if (passed)
        {
            passed_predictions.push_back(predictions[i]);
        }
    }

    for (JsonVariant prediction : passed_predictions)
    {
        String boundingBox = prediction["boundingBox"].as<String>();
        String tag = prediction["tagName"].as<String>();
        float probability = prediction["probability"].as<float>();

        // 128 bytes leaves room for the tag, the probability and the bounding box JSON
        char buff[128];
        sprintf(buff, "%s:\t%.2f%%\t%s", tag.c_str(), probability * 100.0, boundingBox.c_str());
        Serial.println(buff);
    }

    Serial.print("Counted ");
    Serial.print(passed_predictions.size());
    Serial.println(" stock items.");
}

void detectStock(byte *buffer, uint32_t length)
{
    HTTPClient httpClient;
    httpClient.begin(client, PREDICTION_URL);
    httpClient.addHeader("Content-Type", "application/octet-stream");
    httpClient.addHeader("Prediction-Key", PREDICTION_KEY);

    int httpResponseCode = httpClient.POST(buffer, length);

    if (httpResponseCode == 200)
    {
        String result = httpClient.getString();

        DynamicJsonDocument doc(1024);
        deserializeJson(doc, result.c_str());

        JsonObject obj = doc.as<JsonObject>();
        JsonArray predictions = obj["predictions"].as<JsonArray>();

        std::vector<JsonVariant> passed_predictions;

        // Keep only the predictions with a probability above the threshold
        for (JsonVariant prediction : predictions)
        {
            float probability = prediction["probability"].as<float>();
            if (probability > threshold)
            {
                passed_predictions.push_back(prediction);
            }
        }

        processPredictions(passed_predictions);
    }

    httpClient.end();
}

void buttonPressed()
{
    camera.startCapture();

    while (!camera.captureReady())
        delay(100);

    Serial.println("Image captured");

    byte *buffer;
    uint32_t length;

    if (camera.readImageToBuffer(&buffer, length))
    {
        Serial.print("Image read to buffer with length ");
        Serial.println(length);

        detectStock(buffer, length);

        delete[] buffer;
    }
}

void loop()
{
    if (digitalRead(WIO_KEY_C) == LOW)
    {
        buttonPressed();
        delay(2000);
    }

    delay(200);
}
@ -0,0 +1,11 @@
This directory is intended for PlatformIO Unit Testing and project tests.

Unit Testing is a software testing method by which individual units of
source code, sets of one or more MCU program modules together with associated
control data, usage procedures, and operating procedures, are tested to
determine whether they are fit for use. Unit testing finds problems early
in the development cycle.

More information about PlatformIO Unit Testing:
- https://docs.platformio.org/page/plus/unit-testing.html
@ -0,0 +1,40 @@
import io
import time
from picamera import PiCamera

from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

# Set up the camera
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0

# Give the camera time to warm up
time.sleep(2)

# Capture an image into an in-memory stream
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)

# Save a copy of the image to disk
with open('image.jpg', 'wb') as image_file:
    image_file.write(image.read())

prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'

# Extract the endpoint, project ID and iteration name from the prediction URL
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]

prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)

# Send the image to the object detector
image.seek(0)
results = predictor.detect_image(project_id, iteration_name, image)

# Filter out the low-probability predictions
threshold = 0.3

predictions = list(prediction for prediction in results.predictions if prediction.probability > threshold)

for prediction in predictions:
    print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
@ -0,0 +1,40 @@
# Connect to CounterFit to use the virtual camera
from counterfit_connection import CounterFitConnection
CounterFitConnection.init('127.0.0.1', 5000)

import io
from counterfit_shims_picamera import PiCamera

from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

# Set up the camera
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0

# Capture an image into an in-memory stream
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)

# Save a copy of the image to disk
with open('image.jpg', 'wb') as image_file:
    image_file.write(image.read())

prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'

# Extract the endpoint, project ID and iteration name from the prediction URL
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]

prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)

# Send the image to the object detector
image.seek(0)
results = predictor.detect_image(project_id, iteration_name, image)

# Filter out the low-probability predictions
threshold = 0.3

predictions = list(prediction for prediction in results.predictions if prediction.probability > threshold)

for prediction in predictions:
    print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
@ -0,0 +1,5 @@
.pio
.vscode/.browse.c_cpp.db*
.vscode/c_cpp_properties.json
.vscode/launch.json
.vscode/ipch
@ -0,0 +1,7 @@
{
    // See http://go.microsoft.com/fwlink/?LinkId=827846
    // for the documentation about the extensions.json format
    "recommendations": [
        "platformio.platformio-ide"
    ]
}
@ -0,0 +1,39 @@
This directory is intended for project header files.

A header file is a file containing C declarations and macro definitions
to be shared between several project source files. You request the use of a
header file in your project source file (C, C++, etc) located in `src` folder
by including it, with the C preprocessing directive `#include'.

```src/main.c

#include "header.h"

int main (void)
{
 ...
}
```

Including a header file produces the same results as copying the header file
into each source file that needs it. Such copying would be time-consuming
and error-prone. With a header file, the related declarations appear
in only one place. If they need to be changed, they can be changed in one
place, and programs that include the header file will automatically use the
new version when next recompiled. The header file eliminates the labor of
finding and changing all the copies as well as the risk that a failure to
find one copy will result in inconsistencies within a program.

In C, the usual convention is to give header files names that end with `.h'.
It is most portable to use only letters, digits, dashes, and underscores in
header file names, and at most one dot.

Read more about using header files in official GCC documentation:

* Include Syntax
* Include Operation
* Once-Only Headers
* Computed Includes

https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html
@ -0,0 +1,46 @@
This directory is intended for project specific (private) libraries.
PlatformIO will compile them to static libraries and link them into the executable file.

The source code of each library should be placed in its own separate directory
("lib/your_library_name/[here are source files]").

For example, see a structure of the following two libraries `Foo` and `Bar`:

|--lib
|  |
|  |--Bar
|  |  |--docs
|  |  |--examples
|  |  |--src
|  |     |- Bar.c
|  |     |- Bar.h
|  |  |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
|  |
|  |--Foo
|  |  |- Foo.c
|  |  |- Foo.h
|  |
|  |- README --> THIS FILE
|
|- platformio.ini
|--src
   |- main.c

and a contents of `src/main.c`:
```
#include <Foo.h>
#include <Bar.h>

int main (void)
{
  ...
}

```

PlatformIO Library Dependency Finder will automatically find dependent
libraries by scanning project source files.

More information about PlatformIO Library Dependency Finder
- https://docs.platformio.org/page/librarymanager/ldf.html
@ -0,0 +1,26 @@
; PlatformIO Project Configuration File
;
;   Build options: build flags, source filter
;   Upload options: custom upload port, speed and extra flags
;   Library options: dependencies, extra library storages
;   Advanced options: extra scripting
;
; Please visit documentation for the other options and examples
; https://docs.platformio.org/page/projectconf.html

[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
    seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
    seeed-studio/Seeed Arduino FS @ 2.0.3
    seeed-studio/Seeed Arduino SFUD @ 2.0.1
    seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
    seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
    seeed-studio/Seeed Arduino RTC @ 2.0.0
    bblanchon/ArduinoJson @ 6.17.3
build_flags =
    -w
    -DARDUCAM_SHIELD_V2
    -DOV2640_CAM
@ -0,0 +1,160 @@
#pragma once

#include <ArduCAM.h>
#include <Wire.h>

class Camera
{
public:
    Camera(int format, int image_size) : _arducam(OV2640, PIN_SPI_SS)
    {
        _format = format;
        _image_size = image_size;
    }

    bool init()
    {
        // Reset the CPLD
        _arducam.write_reg(0x07, 0x80);
        delay(100);

        _arducam.write_reg(0x07, 0x00);
        delay(100);

        // Check if the ArduCAM SPI bus is OK
        _arducam.write_reg(ARDUCHIP_TEST1, 0x55);
        if (_arducam.read_reg(ARDUCHIP_TEST1) != 0x55)
        {
            return false;
        }

        // Change MCU mode
        _arducam.set_mode(MCU2LCD_MODE);

        uint8_t vid, pid;

        // Check if the camera module type is OV2640 - a valid module reports
        // a vendor ID of 0x26 and a product ID of 0x41 or 0x42
        _arducam.wrSensorReg8_8(0xff, 0x01);
        _arducam.rdSensorReg8_8(OV2640_CHIPID_HIGH, &vid);
        _arducam.rdSensorReg8_8(OV2640_CHIPID_LOW, &pid);
        if ((vid != 0x26) || ((pid != 0x41) && (pid != 0x42)))
        {
            return false;
        }

        _arducam.set_format(_format);
        _arducam.InitCAM();
        _arducam.OV2640_set_JPEG_size(_image_size);
        _arducam.OV2640_set_Light_Mode(Auto);
        _arducam.OV2640_set_Special_effects(Normal);
        delay(1000);

        return true;
    }

    void startCapture()
    {
        _arducam.flush_fifo();
        _arducam.clear_fifo_flag();
        _arducam.start_capture();
    }

    bool captureReady()
    {
        return _arducam.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK);
    }

    bool readImageToBuffer(byte **buffer, uint32_t &buffer_length)
    {
        if (!captureReady()) return false;

        // Get the image file length
        uint32_t length = _arducam.read_fifo_length();
        buffer_length = length;

        if (length >= MAX_FIFO_SIZE)
        {
            return false;
        }
        if (length == 0)
        {
            return false;
        }

        // Create the buffer
        byte *buf = new byte[length];

        uint8_t temp = 0, temp_last = 0;
        int i = 0;
        uint32_t buffer_pos = 0;
        bool is_header = false;

        _arducam.CS_LOW();
        _arducam.set_fifo_burst();

        while (length--)
        {
            temp_last = temp;
            temp = SPI.transfer(0x00);

            // Read JPEG data from the FIFO
            if ((temp == 0xD9) && (temp_last == 0xFF)) // The JPEG end marker - stop reading
            {
                buf[buffer_pos] = temp;

                buffer_pos++;
                i++;

                _arducam.CS_HIGH();
                break;
            }
            if (is_header == true)
            {
                // Write image data to the buffer, reading in bursts of 256 bytes
                if (i < 256)
                {
                    buf[buffer_pos] = temp;
                    buffer_pos++;
                    i++;
                }
                else
                {
                    _arducam.CS_HIGH();

                    i = 0;
                    buf[buffer_pos] = temp;

                    buffer_pos++;
                    i++;

                    _arducam.CS_LOW();
                    _arducam.set_fifo_burst();
                }
            }
            else if ((temp == 0xD8) && (temp_last == 0xFF)) // The JPEG start marker
            {
                is_header = true;

                buf[buffer_pos] = temp_last;
                buffer_pos++;
                i++;

                buf[buffer_pos] = temp;
                buffer_pos++;
                i++;
            }
        }

        _arducam.clear_fifo_flag();

        _arducam.set_format(_format);
        _arducam.InitCAM();
        _arducam.OV2640_set_JPEG_size(_image_size);

        // Return the buffer
        *buffer = buf;

        return true;
    }

private:
    ArduCAM _arducam;
    int _format;
    int _image_size;
};
@ -0,0 +1,49 @@
#pragma once

#include <string>

using namespace std;

// WiFi credentials
const char *SSID = "<SSID>";
const char *PASSWORD = "<PASSWORD>";

const char *PREDICTION_URL = "<PREDICTION_URL>";
const char *PREDICTION_KEY = "<PREDICTION_KEY>";

// Microsoft Azure DigiCert Global Root G2 global certificate
const char *CERTIFICATE =
    "-----BEGIN CERTIFICATE-----\r\n"
    "MIIF8zCCBNugAwIBAgIQAueRcfuAIek/4tmDg0xQwDANBgkqhkiG9w0BAQwFADBh\r\n"
    "MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\r\n"
    "d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH\r\n"
    "MjAeFw0yMDA3MjkxMjMwMDBaFw0yNDA2MjcyMzU5NTlaMFkxCzAJBgNVBAYTAlVT\r\n"
    "MR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKjAoBgNVBAMTIU1pY3Jv\r\n"
    "c29mdCBBenVyZSBUTFMgSXNzdWluZyBDQSAwNjCCAiIwDQYJKoZIhvcNAQEBBQAD\r\n"
    "ggIPADCCAgoCggIBALVGARl56bx3KBUSGuPc4H5uoNFkFH4e7pvTCxRi4j/+z+Xb\r\n"
    "wjEz+5CipDOqjx9/jWjskL5dk7PaQkzItidsAAnDCW1leZBOIi68Lff1bjTeZgMY\r\n"
    "iwdRd3Y39b/lcGpiuP2d23W95YHkMMT8IlWosYIX0f4kYb62rphyfnAjYb/4Od99\r\n"
    "ThnhlAxGtfvSbXcBVIKCYfZgqRvV+5lReUnd1aNjRYVzPOoifgSx2fRyy1+pO1Uz\r\n"
    "aMMNnIOE71bVYW0A1hr19w7kOb0KkJXoALTDDj1ukUEDqQuBfBxReL5mXiu1O7WG\r\n"
    "0vltg0VZ/SZzctBsdBlx1BkmWYBW261KZgBivrql5ELTKKd8qgtHcLQA5fl6JB0Q\r\n"
    "gs5XDaWehN86Gps5JW8ArjGtjcWAIP+X8CQaWfaCnuRm6Bk/03PQWhgdi84qwA0s\r\n"
    "sRfFJwHUPTNSnE8EiGVk2frt0u8PG1pwSQsFuNJfcYIHEv1vOzP7uEOuDydsmCjh\r\n"
    "lxuoK2n5/2aVR3BMTu+p4+gl8alXoBycyLmj3J/PUgqD8SL5fTCUegGsdia/Sa60\r\n"
    "N2oV7vQ17wjMN+LXa2rjj/b4ZlZgXVojDmAjDwIRdDUujQu0RVsJqFLMzSIHpp2C\r\n"
    "Zp7mIoLrySay2YYBu7SiNwL95X6He2kS8eefBBHjzwW/9FxGqry57i71c2cDAgMB\r\n"
    "AAGjggGtMIIBqTAdBgNVHQ4EFgQU1cFnOsKjnfR3UltZEjgp5lVou6UwHwYDVR0j\r\n"
    "BBgwFoAUTiJUIBiV5uNu5g/6+rkS7QYXjzkwDgYDVR0PAQH/BAQDAgGGMB0GA1Ud\r\n"
    "JQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMHYG\r\n"
    "CCsGAQUFBwEBBGowaDAkBggrBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQu\r\n"
    "Y29tMEAGCCsGAQUFBzAChjRodHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGln\r\n"
    "aUNlcnRHbG9iYWxSb290RzIuY3J0MHsGA1UdHwR0MHIwN6A1oDOGMWh0dHA6Ly9j\r\n"
    "cmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5jcmwwN6A1oDOG\r\n"
    "MWh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5j\r\n"
    "cmwwHQYDVR0gBBYwFDAIBgZngQwBAgEwCAYGZ4EMAQICMBAGCSsGAQQBgjcVAQQD\r\n"
    "AgEAMA0GCSqGSIb3DQEBDAUAA4IBAQB2oWc93fB8esci/8esixj++N22meiGDjgF\r\n"
    "+rA2LUK5IOQOgcUSTGKSqF9lYfAxPjrqPjDCUPHCURv+26ad5P/BYtXtbmtxJWu+\r\n"
    "cS5BhMDPPeG3oPZwXRHBJFAkY4O4AF7RIAAUW6EzDflUoDHKv83zOiPfYGcpHc9s\r\n"
    "kxAInCedk7QSgXvMARjjOqdakor21DTmNIUotxo8kHv5hwRlGhBJwps6fEVi1Bt0\r\n"
    "trpM/3wYxlr473WSPUFZPgP1j519kLpWOJ8z09wxay+Br29irPcBYv0GMXlHqThy\r\n"
    "8y4m/HyTQeI2IMvMrQnwqPpY+rLIXyviI2vLoI+4xKE4Rn38ZZ8m\r\n"
    "-----END CERTIFICATE-----\r\n";
@ -0,0 +1,145 @@
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <list>
#include <rpcWiFi.h>
#include "SD/Seeed_SD.h"
#include <Seeed_FS.h>
#include <SPI.h>
#include <vector>
#include <WiFiClientSecure.h>

#include "config.h"
#include "camera.h"

Camera camera = Camera(JPEG, OV2640_640x480);

WiFiClientSecure client;

void setupCamera()
{
    pinMode(PIN_SPI_SS, OUTPUT);
    digitalWrite(PIN_SPI_SS, HIGH);

    Wire.begin();
    SPI.begin();

    if (!camera.init())
    {
        Serial.println("Error setting up the camera!");
    }
}

void connectWiFi()
{
    while (WiFi.status() != WL_CONNECTED)
    {
        Serial.println("Connecting to WiFi..");
        WiFi.begin(SSID, PASSWORD);
        delay(500);
    }

    client.setCACert(CERTIFICATE);
    Serial.println("Connected!");
}

void setup()
{
    Serial.begin(9600);

    while (!Serial)
        ; // Wait for Serial to be ready

    delay(1000);

    connectWiFi();

    setupCamera();

    pinMode(WIO_KEY_C, INPUT_PULLUP);
}

const float threshold = 0.3f;

void processPredictions(std::vector<JsonVariant> &predictions)
{
    // Print the tag and probability of each prediction to the serial monitor
    for (JsonVariant prediction : predictions)
    {
        String tag = prediction["tagName"].as<String>();
        float probability = prediction["probability"].as<float>();

        char buff[32];
        sprintf(buff, "%s:\t%.2f%%", tag.c_str(), probability * 100.0);
        Serial.println(buff);
    }
}

void detectStock(byte *buffer, uint32_t length)
{
    HTTPClient httpClient;
    httpClient.begin(client, PREDICTION_URL);
    httpClient.addHeader("Content-Type", "application/octet-stream");
    httpClient.addHeader("Prediction-Key", PREDICTION_KEY);

    int httpResponseCode = httpClient.POST(buffer, length);

    if (httpResponseCode == 200)
    {
        String result = httpClient.getString();

        DynamicJsonDocument doc(1024);
        deserializeJson(doc, result.c_str());

        JsonObject obj = doc.as<JsonObject>();
        JsonArray predictions = obj["predictions"].as<JsonArray>();

        std::vector<JsonVariant> passed_predictions;

        // Keep only the predictions with a probability above the threshold
        for (JsonVariant prediction : predictions)
        {
            float probability = prediction["probability"].as<float>();
            if (probability > threshold)
            {
                passed_predictions.push_back(prediction);
            }
        }

        processPredictions(passed_predictions);
    }

    httpClient.end();
}

void buttonPressed()
{
    camera.startCapture();

    while (!camera.captureReady())
        delay(100);

    Serial.println("Image captured");

    byte *buffer;
    uint32_t length;

    if (camera.readImageToBuffer(&buffer, length))
    {
        Serial.print("Image read to buffer with length ");
        Serial.println(length);

        detectStock(buffer, length);

        delete[] buffer;
    }
}

void loop()
{
    if (digitalRead(WIO_KEY_C) == LOW)
    {
        buttonPressed();
        delay(2000);
    }

    delay(200);
}
@ -0,0 +1,11 @@
This directory is intended for PlatformIO Unit Testing and project tests.

Unit Testing is a software testing method by which individual units of
source code, sets of one or more MCU program modules together with associated
control data, usage procedures, and operating procedures, are tested to
determine whether they are fit for use. Unit testing finds problems early
in the development cycle.

More information about PlatformIO Unit Testing:
- https://docs.platformio.org/page/plus/unit-testing.html
@ -0,0 +1,163 @@
# Count stock from your IoT device - Virtual IoT Hardware and Raspberry Pi

A combination of the predictions and their bounding boxes can be used to count stock in an image.

## Show bounding boxes

As a helpful debugging step, you can not only print out the bounding boxes, but also draw them on the image that was written to disk when an image was captured.

### Task - print the bounding boxes

1. Ensure the `stock-counter` project is open in VS Code, and the virtual environment is activated if you are using a virtual IoT device.

1. Change the `print` statement in the `for` loop to the following to print the bounding boxes to the console:

    ```python
    print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%\t{prediction.bounding_box}')
    ```

1. Run the app with the camera pointing at some stock on a shelf. The bounding boxes will be printed to the console, with left, top, width and height values from 0-1.

    ```output
    pi@raspberrypi:~/stock-counter $ python3 app.py
    tomato paste: 33.42% {'additional_properties': {}, 'left': 0.3455171, 'top': 0.09916268, 'width': 0.14175442, 'height': 0.29405564}
    tomato paste: 34.41% {'additional_properties': {}, 'left': 0.48283678, 'top': 0.10242918, 'width': 0.11782813, 'height': 0.27467814}
    tomato paste: 31.25% {'additional_properties': {}, 'left': 0.4923783, 'top': 0.35007596, 'width': 0.13668466, 'height': 0.28304994}
    tomato paste: 31.05% {'additional_properties': {}, 'left': 0.36416405, 'top': 0.37494493, 'width': 0.14024884, 'height': 0.26880276}
    ```

### Task - draw bounding boxes on the image

1. The Pip package [Pillow](https://pypi.org/project/Pillow/) can be used to draw on images. Install this with the following command:

    ```sh
    pip3 install pillow
    ```

    If you are using a virtual IoT device, make sure to run this from inside the activated virtual environment.

1. Add the following import statement to the top of the `app.py` file:

    ```python
    from PIL import Image, ImageDraw, ImageColor
    ```

    This imports code needed to edit the image.

1. Add the following code to the end of the `app.py` file:

    ```python
    with Image.open('image.jpg') as im:
        draw = ImageDraw.Draw(im)

        for prediction in predictions:
            scale_left = prediction.bounding_box.left
            scale_top = prediction.bounding_box.top
            scale_right = prediction.bounding_box.left + prediction.bounding_box.width
            scale_bottom = prediction.bounding_box.top + prediction.bounding_box.height

            left = scale_left * im.width
            top = scale_top * im.height
            right = scale_right * im.width
            bottom = scale_bottom * im.height

            draw.rectangle([left, top, right, bottom], outline=ImageColor.getrgb('red'), width=2)

        im.save('image.jpg')
    ```

    This code opens the image that was saved earlier for editing. It then loops through the predictions getting the bounding boxes, and calculates the bottom right coordinate using the bounding box values from 0-1. These are then converted to image coordinates by multiplying by the relevant dimension of the image. For example, if the left value was 0.5 on an image that was 600 pixels wide, this would convert it to 300 (0.5 x 600 = 300).

    Each bounding box is drawn on the image using a red line. Finally the edited image is saved, overwriting the original image.

1. Run the app with the camera pointing at some stock on a shelf. You will see the `image.jpg` file in the VS Code explorer, and you will be able to select it to see the bounding boxes.

    ![Bounding boxes around 4 cans of tomato paste](../../../images/custom-vision-object-detector-bounding-boxes.jpg)

## Count stock

In the image shown above, the bounding boxes have a small overlap. If this overlap was much larger, then the bounding boxes may indicate the same object. To count the objects correctly, you need to ignore boxes with a significant overlap.
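
This check can be sketched in a few lines. The sketch below, assuming two hypothetical bounding boxes given as normalized left, top, width and height values, uses the Shapely package that is installed in the task that follows:

```python
from shapely.geometry import Polygon

def to_polygon(left, top, width, height):
    # Build a rectangle polygon from the normalized bounding box values
    return Polygon([(left, top), (left + width, top),
                    (left + width, top + height), (left, top + height)])

# Two hypothetical, mostly-overlapping bounding boxes
box_1 = to_polygon(0.35, 0.10, 0.14, 0.29)
box_2 = to_polygon(0.36, 0.12, 0.14, 0.29)

overlap = box_1.intersection(box_2).area
smallest_area = min(box_1.area, box_2.area)

# Treat the boxes as the same object if the overlap is more than
# 20% of the smaller box
print('Same object:', overlap > 0.20 * smallest_area)
```

Running this prints `Same object: True` - the two boxes cover almost the same region, so they should only be counted once. The steps below build this same check into your app.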

### Task - count stock ignoring overlap

1. The Pip package [Shapely](https://pypi.org/project/Shapely/) can be used to calculate the intersection. If you are using a Raspberry Pi, you will need to install a library dependency first:

    ```sh
    sudo apt install libgeos-dev
    ```

1. Install the Shapely Pip package:

    ```sh
    pip3 install shapely
    ```

    If you are using a virtual IoT device, make sure to run this from inside the activated virtual environment.

1. Add the following import statement to the top of the `app.py` file:

    ```python
    from shapely.geometry import Polygon
    ```

    This imports code needed to create polygons to calculate overlap.

1. Above the code that draws the bounding boxes, add the following code:

    ```python
    overlap_threshold = 0.20
    ```

    This defines the percentage overlap allowed before the bounding boxes are considered to be the same object. 0.20 defines a 20% overlap.

1. To calculate overlap using Shapely, the bounding boxes need to be converted into Shapely polygons. Add the following function to do this:

    ```python
    def create_polygon(prediction):
        scale_left = prediction.bounding_box.left
        scale_top = prediction.bounding_box.top
        scale_right = prediction.bounding_box.left + prediction.bounding_box.width
        scale_bottom = prediction.bounding_box.top + prediction.bounding_box.height

        return Polygon([(scale_left, scale_top), (scale_right, scale_top), (scale_right, scale_bottom), (scale_left, scale_bottom)])
    ```

    This creates a polygon using the bounding box of a prediction.

1. The logic for removing overlapping objects involves comparing all bounding boxes and, if any pair of predictions has bounding boxes that overlap more than the threshold, deleting one of the predictions. To compare all the predictions, you compare prediction 1 with 2, 3, 4, etc., then 2 with 3, 4, etc. The following code does this:

    ```python
    to_delete = []

    for i in range(0, len(predictions)):
        polygon_1 = create_polygon(predictions[i])

        for j in range(i+1, len(predictions)):
            polygon_2 = create_polygon(predictions[j])
            overlap = polygon_1.intersection(polygon_2).area

            smallest_area = min(polygon_1.area, polygon_2.area)

            if overlap > (overlap_threshold * smallest_area):
                to_delete.append(predictions[i])
                break

    for d in to_delete:
        predictions.remove(d)

    print(f'Counted {len(predictions)} stock items')
    ```

    The overlap is calculated using the Shapely `Polygon.intersection` method, which returns a polygon covering the overlapping region; the area is then calculated from this polygon. The overlap threshold is not an absolute value, but a percentage of a bounding box, so the smallest bounding box is found and the overlap threshold is used to calculate the largest area the overlap can cover without exceeding the percentage overlap threshold of that smallest bounding box. If the overlap exceeds this, the prediction is marked for deletion.

    Once a prediction has been marked for deletion it doesn't need to be checked again, so the inner loop breaks out to check the next prediction. You can't delete items from a list whilst iterating through it, so the bounding boxes that overlap more than the threshold are added to the `to_delete` list, then deleted at the end.

    Finally the stock count is printed to the console. This could then be sent to an IoT service to alert if the stock levels are low. All of this code runs before the bounding boxes are drawn, so you will see the stock predictions without overlaps on the generated images.

    > 💁 This is a very simplistic way to remove overlaps, just removing the first one in an overlapping pair. For production code, you would want to put more logic in here, such as considering the overlaps between multiple objects, or whether one bounding box is contained by another.

1. Run the app with the camera pointing at some stock on a shelf. The output will indicate the number of bounding boxes without overlaps that exceed the threshold. Try adjusting the `overlap_threshold` value to see predictions being ignored.

> 💁 You can find this code in the [code-count/pi](code-count/pi) or [code-count/virtual-iot-device](code-count/virtual-iot-device) folder.

😀 Your stock counter program was a success!
@ -0,0 +1,74 @@
# Call your object detector from your IoT device - Virtual IoT Hardware and Raspberry Pi

Once your object detector has been published, it can be used from your IoT device.
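
All the connection details the Custom Vision SDK needs are embedded in the prediction URL that you copy when the model is published. The sketch below, using a hypothetical prediction URL in place of your real one, shows how the endpoint, project ID and iteration name are split out of it. This is the same splitting that the code you copy in this lesson performs:

```python
# A hypothetical prediction URL - use the one from your published iteration
prediction_url = 'https://myproject.cognitiveservices.azure.com/customvision/v3.0/Prediction/00000000-0000-0000-0000-000000000000/detect/iterations/Iteration1/image'

parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]    # https://myproject.cognitiveservices.azure.com
project_id = parts[6]               # the project GUID
iteration_name = parts[9]           # the published iteration name

print(endpoint, project_id, iteration_name)
```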

## Copy the image classifier project

The majority of your stock detector is the same as the image classifier you created in a previous lesson.

### Task - copy the image classifier project

1. Create a folder called `stock-counter` either on your computer if you are using a virtual IoT device, or on your Raspberry Pi. If you are using a virtual IoT device, make sure you set up a virtual environment.

1. Set up the camera hardware.

    * If you are using a Raspberry Pi you will need to fit the PiCamera. You might also want to fix the camera in a single position, for example, by hanging the cable over a box or can, or fixing the camera to a box with double-sided tape.
    * If you are using a virtual IoT device then you will need to install CounterFit and the CounterFit PyCamera shim. If you are going to use still images, then capture some images that your object detector hasn't seen yet. If you are going to use your web cam, make sure it is positioned so that it can see the stock you are detecting.

1. Replicate the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/README.md#task---capture-an-image-using-an-iot-device) to capture images from the camera.

1. Replicate the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/README.md#task---classify-images-from-your-iot-device) to call the image classifier. The majority of this code will be re-used to detect objects.

## Change the code from a classifier to an image detector

The code you used to classify images is very similar to the code to detect objects. The main difference is the method called on the Custom Vision SDK, and the results of the call.
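
The sketch below contrasts the two SDK calls. It uses hypothetical placeholder values for the endpoint, project ID, iteration name and key; the real values come from your prediction URL, as in the previous lesson:

```python
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

# Hypothetical placeholder values - extract the real ones from your prediction URL
endpoint = 'https://<your endpoint>'
project_id = '<project id>'
iteration_name = '<iteration name>'
prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": '<prediction key>'})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)

with open('image.jpg', 'rb') as image:
    # An image classifier returns one prediction per tag:
    # results = predictor.classify_image(project_id, iteration_name, image)

    # An object detector returns many predictions, each with a bounding box:
    results = predictor.detect_image(project_id, iteration_name, image)

for prediction in results.predictions:
    print(prediction.tag_name, prediction.probability, prediction.bounding_box)
```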

### Task - change the code from a classifier to an image detector

1. Delete the three lines of code that classify the image and process the predictions:

    ```python
    results = predictor.classify_image(project_id, iteration_name, image)

    for prediction in results.predictions:
        print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
    ```

    Remove these three lines.

1. Add the following code to detect objects in the image:

    ```python
    results = predictor.detect_image(project_id, iteration_name, image)

    threshold = 0.3

    predictions = list(prediction for prediction in results.predictions if prediction.probability > threshold)

    for prediction in predictions:
        print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
    ```

    This code calls the `detect_image` method on the predictor to run the object detector. It then gathers all the predictions with a probability above a threshold, printing them to the console.

    Unlike an image classifier that only returns one result per tag, the object detector will return multiple results, so any with a low probability need to be filtered out.

1. Run this code. It will capture an image, send it to the object detector, and print out the detected objects. If you are using a virtual IoT device, ensure you have an appropriate image set in CounterFit, or your web cam is selected. If you are using a Raspberry Pi, make sure your camera is pointing at objects on a shelf.

    ```output
    pi@raspberrypi:~/stock-counter $ python3 app.py
    tomato paste: 34.13%
    tomato paste: 33.95%
    tomato paste: 35.05%
    tomato paste: 32.80%
    ```

    > 💁 You may need to adjust the `threshold` to an appropriate value for your images.

    You will be able to see the image that was taken, and these values, in the **Predictions** tab in Custom Vision.

    ![4 cans of tomato paste on a shelf with predictions for the 4 detections of 35.8%, 33.5%, 25.7% and 16.6%](../../../images/custom-vision-object-detector-4-tomato-paste.png)

> 💁 You can find this code in the [code-detect/pi](code-detect/pi) or [code-detect/virtual-iot-device](code-detect/virtual-iot-device) folder.

😀 Your stock counter program was a success!
@ -0,0 +1,167 @@
# Count stock from your IoT device - Wio Terminal

A combination of the predictions and their bounding boxes can be used to count stock in an image.

## Count stock

![4 cans of tomato paste with bounding boxes around each can](../../../images/custom-vision-object-detector-bounding-boxes.jpg)

In the image shown above, the bounding boxes have a small overlap. If this overlap was much larger, then the bounding boxes may indicate the same object. To count the objects correctly, you need to ignore boxes with a significant overlap.

### Task - count stock ignoring overlap

1. Open your `stock-counter` project if it is not already open.

1. Above the `processPredictions` function, add the following code:

    ```cpp
    const float overlap_threshold = 0.20f;
    ```

    This defines the percentage overlap allowed before the bounding boxes are considered to be the same object. 0.20 defines a 20% overlap.

1. Below this, and above the `processPredictions` function, add the following code to calculate the overlap between two rectangles:

    ```cpp
    struct Point {
        float x, y;
    };

    struct Rect {
        Point topLeft, bottomRight;
    };

    float area(Rect rect)
    {
        return abs(rect.bottomRight.x - rect.topLeft.x) * abs(rect.bottomRight.y - rect.topLeft.y);
    }

    float overlappingArea(Rect rect1, Rect rect2)
    {
        float left = max(rect1.topLeft.x, rect2.topLeft.x);
        float right = min(rect1.bottomRight.x, rect2.bottomRight.x);
        float top = max(rect1.topLeft.y, rect2.topLeft.y);
        float bottom = min(rect1.bottomRight.y, rect2.bottomRight.y);

        if (right > left && bottom > top)
        {
            return (right - left) * (bottom - top);
        }

        return 0.0f;
    }
    ```

    This code defines a `Point` struct to store points on the image, and a `Rect` struct to define a rectangle using a top left and a bottom right coordinate. It then defines an `area` function that calculates the area of a rectangle from those coordinates.

    Next it defines an `overlappingArea` function that calculates the overlapping area of 2 rectangles. If they don't overlap, it returns 0.

1. Below the `overlappingArea` function, declare a function to convert a bounding box to a `Rect`:

    ```cpp
    Rect rectFromBoundingBox(JsonVariant prediction)
    {
        JsonObject bounding_box = prediction["boundingBox"].as<JsonObject>();

        float left = bounding_box["left"].as<float>();
        float top = bounding_box["top"].as<float>();
        float width = bounding_box["width"].as<float>();
        float height = bounding_box["height"].as<float>();

        Point topLeft = {left, top};
        Point bottomRight = {left + width, top + height};

        return {topLeft, bottomRight};
    }
    ```

    This takes a prediction from the object detector, extracts the bounding box and uses the values on the bounding box to define a rectangle. The right side is calculated from the left plus the width. The bottom is calculated as the top plus the height.

1. The predictions need to be compared to each other, and if 2 predictions have an overlap of more than the threshold, one of them needs to be deleted. The overlap threshold is a percentage, so it needs to be multiplied by the size of the smallest bounding box to check that the overlap exceeds the given percentage of the bounding box, not the given percentage of the whole image. Start by deleting the content of the `processPredictions` function.

1. Add the following to the empty `processPredictions` function:

    ```cpp
    std::vector<JsonVariant> passed_predictions;

    for (int i = 0; i < predictions.size(); ++i)
    {
        Rect prediction_1_rect = rectFromBoundingBox(predictions[i]);
        float prediction_1_area = area(prediction_1_rect);
        bool passed = true;

        for (int j = i + 1; j < predictions.size(); ++j)
        {
            Rect prediction_2_rect = rectFromBoundingBox(predictions[j]);
            float prediction_2_area = area(prediction_2_rect);

            float overlap = overlappingArea(prediction_1_rect, prediction_2_rect);
            float smallest_area = min(prediction_1_area, prediction_2_area);

            if (overlap > (overlap_threshold * smallest_area))
            {
                passed = false;
                break;
            }
        }

        if (passed)
        {
            passed_predictions.push_back(predictions[i]);
        }
    }
    ```

    This code declares a vector to store the predictions that don't overlap. It then loops through all the predictions, creating a `Rect` from the bounding box.

    Next this code loops through the remaining predictions, starting at the one after the current prediction. This stops predictions being compared more than once - once 1 and 2 have been compared, there's no need to compare 2 with 1, only with 3, 4, etc.

    For each pair of predictions the overlapping area is calculated. This is then compared to the area of the smallest bounding box - if the overlap exceeds the threshold percentage of the smallest bounding box, the prediction is marked as not passed. If, after comparing all the overlaps, the prediction passes the checks, it is added to the `passed_predictions` collection.

    > 💁 This is a very simplistic way to remove overlaps, just removing the first one in an overlapping pair. For production code, you would want to put more logic in here, such as considering the overlaps between multiple objects, or whether one bounding box is contained by another.

1. After this, add the following code to send details of the passed predictions to the serial monitor:

    ```cpp
    for(JsonVariant prediction : passed_predictions)
    {
        String boundingBox = prediction["boundingBox"].as<String>();
        String tag = prediction["tagName"].as<String>();
        float probability = prediction["probability"].as<float>();

        char buff[128];
        sprintf(buff, "%s:\t%.2f%%\t%s", tag.c_str(), probability * 100.0, boundingBox.c_str());
        Serial.println(buff);
    }
    ```

    This code loops through the passed predictions and prints their details to the serial monitor. The buffer is sized at 128 bytes so the tag, the probability and the bounding box JSON all fit.

1. Below this, add code to print the number of counted items to the serial monitor:

    ```cpp
    Serial.print("Counted ");
    Serial.print(passed_predictions.size());
    Serial.println(" stock items.");
    ```

    This could then be sent to an IoT service to alert if the stock levels are low.

1. Upload and run your code. Point the camera at objects on a shelf and press the C button. Try adjusting the `overlap_threshold` value to see predictions being ignored.

    ```output
    Connecting to WiFi..
    Connected!
    Image captured
    Image read to buffer with length 17416
    tomato paste: 35.84% {"left":0.395631,"top":0.215897,"width":0.180768,"height":0.359364}
    tomato paste: 35.87% {"left":0.378554,"top":0.583012,"width":0.14824,"height":0.359382}
    tomato paste: 34.11% {"left":0.699024,"top":0.592617,"width":0.124411,"height":0.350456}
    tomato paste: 35.16% {"left":0.513006,"top":0.647853,"width":0.187472,"height":0.325817}
    Counted 4 stock items.
    ```

> 💁 You can find this code in the [code-count/wio-terminal](code-count/wio-terminal) folder.

😀 Your stock counter program was a success!
@ -0,0 +1,102 @@
|
|||||||
|
# Call your object detector from your IoT device - Wio Terminal
|
||||||
|
|
||||||
|
Once your object detector has been published, it can be used from your IoT device.
|
||||||
|
|
||||||
|
## Copy the image classifier project
|
||||||
|
|
||||||
|
The majority of your stock detector is the same as the image classifier you created in a previous lesson.
|
||||||
|
|
||||||
|
### Task - copy the image classifier project
|
||||||
|
|
||||||
|
1. Connect your ArduCam your Wio Terminal, following the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/wio-terminal-camera.md#task---connect-the-camera).
|
||||||
|
|
||||||
|
You might also want to fix the camera in a single position, for example, by hanging the cable over a box or can, or fixing the camera to a box with double-sided tape.
|
||||||
|
|
||||||
|
1. Create a brand new Wio Terminal project using PlatformIO. Call this project `stock-counter`.
|
||||||
|
|
||||||
|
1. Replicate the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/README.md#task---capture-an-image-using-an-iot-device) to capture images from the camera.
|
||||||
|
|
||||||
|
1. Replicate the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/README.md#task---classify-images-from-your-iot-device) to call the image classifier. The majority of this code will be re-used to detect objects.
|
||||||
|
|
||||||
|
## Change the code from a classifier to an image detector
|
||||||
|
|
||||||
|
The code you used to classify images is very similar to the code to detect objects. The main difference is the URL that is called that you obtained from Custom Vision, and the results of the call.
|
||||||
|
|
||||||
|
### Task - change the code from a classifier to an image detector
|
||||||
|
|
||||||
|
1. Add the following include directive to the top of the `main.cpp` file:
|
||||||
|
|
||||||
|
```cpp
|
||||||
|
#include <vector>
|
||||||
|
```
|
||||||
|
|
||||||
|
1. Rename the `classifyImage` function to `detectStock`, both the name of the function and the call in the `buttonPressed` function.
|
||||||
|
|
||||||
|
1. Above the `detectStock` function, declare a threshold to filter out any detections that have a low probability:
|
||||||
|
|
||||||
|
```cpp
|
||||||
|
const float threshold = 0.3f;
|
||||||
|
```
|
||||||
|
|
||||||
|
Unlike an image classifier that only returns one result per tag, the object detector will return multiple results, so any with a low probability need to be filtered out.
|
||||||
|
|
||||||
|
1. Above the `detectStock` function, declare a function to process the predictions:

    ```cpp
    void processPredictions(std::vector<JsonVariant> &predictions)
    {
        for(JsonVariant prediction : predictions)
        {
            // Each prediction has a tag name and a probability
            String tag = prediction["tagName"].as<String>();
            float probability = prediction["probability"].as<float>();

            // Format the prediction as "tag: XX.XX%" and log it
            char buff[32];
            sprintf(buff, "%s:\t%.2f%%", tag.c_str(), probability * 100.0);
            Serial.println(buff);
        }
    }
    ```

    This takes a list of predictions and prints them to the serial monitor.
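
    Each prediction from an object detector also includes a `boundingBox` with coordinates normalized to the image size. If you want to see where each item was detected as well, here is a possible variation of the function, assuming that response shape:

    ```cpp
    // Sketch only - also print where in the image each item was detected.
    void processPredictions(std::vector<JsonVariant> &predictions)
    {
        for(JsonVariant prediction : predictions)
        {
            String tag = prediction["tagName"].as<String>();
            float probability = prediction["probability"].as<float>();
            JsonVariant boundingBox = prediction["boundingBox"];

            // Bounding box values are fractions of the image size (0.0 to 1.0)
            char buff[128];
            sprintf(buff, "%s:\t%.2f%%\tleft=%.2f top=%.2f width=%.2f height=%.2f",
                    tag.c_str(), probability * 100.0,
                    boundingBox["left"].as<float>(), boundingBox["top"].as<float>(),
                    boundingBox["width"].as<float>(), boundingBox["height"].as<float>());
            Serial.println(buff);
        }
    }
    ```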

1. In the `detectStock` function, replace the contents of the `for` loop that loops through the predictions with the following:

    ```cpp
    std::vector<JsonVariant> passed_predictions;

    for(JsonVariant prediction : predictions)
    {
        // Keep only the predictions that meet the probability threshold
        float probability = prediction["probability"].as<float>();
        if (probability > threshold)
        {
            passed_predictions.push_back(prediction);
        }
    }

    processPredictions(passed_predictions);
    ```

    This loops through the predictions, comparing the probability of each one to the threshold. All predictions with a probability higher than the threshold are added to a `vector` and passed to the `processPredictions` function.
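
    As a side note on style, the same filter can be written with the standard library's `std::copy_if`. This is a sketch, assuming the global `threshold` declared earlier and that your JSON library's array iterators work with standard algorithms; the behavior is identical to the explicit loop:

    ```cpp
    #include <algorithm>   // these includes belong at the top of main.cpp
    #include <iterator>

    // Sketch only - copy across just the predictions above the threshold.
    std::vector<JsonVariant> passed_predictions;
    std::copy_if(predictions.begin(), predictions.end(),
                 std::back_inserter(passed_predictions),
                 [](JsonVariant prediction) {
                     return prediction["probability"].as<float>() > threshold;
                 });

    processPredictions(passed_predictions);
    ```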

1. Upload and run your code. Point the camera at objects on a shelf and press the C button. You will see the output in the serial monitor:

    ```output
    Connecting to WiFi..
    Connected!
    Image captured
    Image read to buffer with length 17416
    tomato paste: 35.84%
    tomato paste: 35.87%
    tomato paste: 34.11%
    tomato paste: 35.16%
    ```

    > 💁 You may need to adjust the `threshold` to an appropriate value for your images.

You will be able to see the image that was taken, and these values, in the **Predictions** tab in Custom Vision.



> 💁 You can find this code in the [code-detect/wio-terminal](code-detect/wio-terminal) folder.

😀 Your stock counter program was a success!

@ -0,0 +1,40 @@
# Image attributions

* Bananas by abderraouf omara from the [Noun Project](https://thenounproject.com)
* Brain by Icon Market from the [Noun Project](https://thenounproject.com)
* Broadcast by RomStu from the [Noun Project](https://thenounproject.com)
* Button by Dan Hetteix from the [Noun Project](https://thenounproject.com)
* C451B small-diaphragm condenser microphone by AKG Acoustics. [Harumphy](https://en.wikipedia.org/wiki/User:Harumphy) at [en.wikipedia](https://en.wikipedia.org/) / [Creative Commons Attribution-Share Alike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/deed.en)
* Calendar by Alice-vector from the [Noun Project](https://thenounproject.com)
* Certificate by alimasykurm from the [Noun Project](https://thenounproject.com)
* chip by Astatine Lab from the [Noun Project](https://thenounproject.com)
* Cloud by Debi Alpa Nugraha from the [Noun Project](https://thenounproject.com)
* container by ProSymbols from the [Noun Project](https://thenounproject.com)
* CPU by Icon Lauk from the [Noun Project](https://thenounproject.com)
* database by Icons Bazaar from the [Noun Project](https://thenounproject.com)
* dial by Jamie Dickinson from the [Noun Project](https://thenounproject.com)
* GPS by mim studio from the [Noun Project](https://thenounproject.com)
* heater by Pascal Heß from the [Noun Project](https://thenounproject.com)
* Idea by Pause08 from the [Noun Project](https://thenounproject.com)
* IoT by Adrien Coquet from the [Noun Project](https://thenounproject.com)
* LED by abderraouf omara from the [Noun Project](https://thenounproject.com)
* ldr by Eucalyp from the [Noun Project](https://thenounproject.com)
* lightbulb by Maxim Kulikov from the [Noun Project](https://thenounproject.com)
* Microcontroller by Template from the [Noun Project](https://thenounproject.com)
* mobile phone by Alice-vector from the [Noun Project](https://thenounproject.com)
* motor by Bakunetsu Kaito from the [Noun Project](https://thenounproject.com)
* Patti Smith singing into a Shure SM58 (dynamic cardioid type) microphone. Beni Köhler / [Creative Commons Attribution-Share Alike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/deed.en)
* Plant by Alex Muravev from the [Noun Project](https://thenounproject.com)
* Plant Cell by Léa Lortal from the [Noun Project](https://thenounproject.com)
* probe by Adnen Kadri from the [Noun Project](https://thenounproject.com)
* ram by Atif Arshad from the [Noun Project](https://thenounproject.com)
* Raspberry Pi 4. Michael Henzler / [Wikimedia Commons](https://commons.wikimedia.org/wiki/Main_Page) / [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
* recording by Aybige Speaker from the [Noun Project](https://thenounproject.com)
* Satellite by Noura Mbarki from the [Noun Project](https://thenounproject.com)
* smart sensor by Andrei Yushchenko from the [Noun Project](https://thenounproject.com)
* Speaker by Gregor Cresnar from the [Noun Project](https://thenounproject.com)
* switch by Chattapat from the [Noun Project](https://thenounproject.com)
* Temperature by Vectors Market from the [Noun Project](https://thenounproject.com)
* tomato by parkjisun from the [Noun Project](https://thenounproject.com)
* Watering Can by Daria Moskvina from the [Noun Project](https://thenounproject.com)
* weather by Adrien Coquet from the [Noun Project](https://thenounproject.com)