Merge branch 'microsoft:main' into BN-Farm

pull/151/head
Mohammad Iftekher (Iftu) Ebne Jalal 4 years ago committed by GitHub
commit 0498748dde

@ -0,0 +1,16 @@
# Getting started with IoT
In this section, you will be introduced to the Internet of Things, and learn the basic concepts, including building your first 'Hello World' IoT project connected to the cloud. This project is a nightlight that lights up as light levels measured by a sensor drop.
![An LED connected to a WIO turning on and off as the light level changes](../../images/wio-running-assignment-1-1.gif)
## Topics
1. [Introduction to IoT](lessons/1-introduction-to-iot/README.md)
2. [A deeper dive into IoT](lessons/2-deeper-dive/README.md)
3. [Interact with the world with sensors and actuators](lessons/3-sensors-and-actuators/README.md)
4. [Connect your device to the Internet](lessons/4-connect-internet/README.md)
## Credits
All lessons were written with ♥️ by [Jim Bennett](https://GitHub.com/JimBobBennett)

@ -0,0 +1,99 @@
# Introduction to IoT
![A sketchnote overview of this lesson](../../../../sketchnotes/lesson-1.png)
> Sketchnote by [Nitya Narasimhan](https://github.com/nitya). Click the image for a larger version.
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/1)
## Introduction
This lesson covers some of the introductory topics around the Internet of Things, and gets you set up with your hardware.
In this lesson we'll cover:
* [What is the 'Internet of Things'?](#what-is-the-internet-of-things)
* [IoT devices](#iot-devices)
* [Set up your device](#set-up-your-device)
* [Applications of IoT](#applications-of-iot)
* [Examples of IoT devices you may have around you](#examples-of-iot-devices-you-may-have-around-you)
## What is the 'Internet of Things'?
The term 'Internet of Things' was coined by [Kevin Ashton](https://wikipedia.org/wiki/Kevin_Ashton) in 1999, to refer to connecting the Internet to the physical world via sensors. Since then, the term has been used to describe any device that interacts with the physical world around it, either by gathering data from sensors or providing real-world interactions via actuators (devices that do something, such as turning on a switch or lighting an LED), and that is connected to other devices or the Internet.
> **Sensors** gather information from the world, such as measuring speed, temperature, or location.
>
> **Actuators** convert electrical signals into real-world interactions such as triggering a switch, turning on lights, making sounds, or sending control signals to other hardware, for example to turn on a power socket.
IoT as a technology area is more than just devices. It includes cloud-based services that can process sensor data, or send requests to actuators connected to IoT devices. It also covers devices that don't have or don't need Internet connectivity, often referred to as *edge devices* - devices that can process and respond to sensor data themselves, usually using AI models trained in the cloud.
IoT is a fast-growing technology field. It is estimated that by the end of 2020, 30 billion IoT devices were deployed and connected to the Internet. Looking to the future, it is estimated that by 2025 IoT devices will be gathering almost 80 zettabytes of data, or 80 trillion gigabytes. That's a lot of data!
![A graph showing active IoT devices over time, with an upward trend from under 5 billion in 2015 to over 30 billion in 2025](../../../../images/connected-iot-devices.svg)
✅ Do a little research: How much of the data generated by IoT devices is actually used, and how much is wasted? Why is so much data ignored?
This data is the key to the success of IoT. To be a successful IoT developer, you need to understand the data you need to gather, how to gather it, how to make decisions based on it, and how to use those decisions to interact with the physical world if needed.
## IoT devices
The **T** in IoT stands for **Things** - devices that interact with the physical world around them either by gathering data from sensors or providing real-world interactions via actuators.
Devices for production or commercial use, such as consumer fitness trackers or industrial machine controllers, are usually custom-made. They use custom circuit boards, maybe even custom processors, designed to meet the needs of a particular task, whether that's being small enough to fit on a wrist, or rugged enough to work in a high-temperature, high-stress, or high-vibration factory environment.
As a developer who is learning about IoT or prototyping a device, you should start with a developer kit. These are general-purpose IoT devices designed for developers to use, often with features you wouldn't have on a production device, such as a set of external pins to connect sensors or actuators, hardware to support debugging, or additional resources that would add unnecessary cost in a manufacturing run.
These developer kits usually fall into two categories - microcontrollers and single-board computers. They are introduced here, and we'll go into more detail in the next lesson.
> 💁 Your phone can also be considered a general-purpose IoT device, with sensors and actuators built in, with different apps using the sensors and actuators in different ways with different cloud services. You can even find some IoT tutorials that use a phone app as an IoT device.
### Microcontrollers
A microcontroller (also referred to as an MCU, short for microcontroller unit) is a small computer consisting of:
🧠 One or more central processing units (CPUs) - the 'brain' of the microcontroller that runs your program
💾 Memory (RAM and program memory) - where your program, data, and variables are stored
🔌 Programmable input/output (I/O) connections - to talk to external peripherals (connected devices) such as sensors and actuators
Microcontrollers are usually low-cost computing devices, with average prices for those used in custom hardware dropping to around US$0.50, and some devices as cheap as US$0.03. Developer kits can start as low as US$4, with costs rising as you add more features. The [Wio Terminal](https://www.seeedstudio.com/Wio-Terminal-p-4509.html), a microcontroller developer kit from [Seeed studios](https://www.seeedstudio.com) with sensors, actuators, WiFi, and a screen, costs around US$30.
![A Wio Terminal](../../../../images/wio-terminal.png)
> 💁 When searching the Internet for microcontrollers, be careful with the term **MCU**, as this will return a lot of results for the Marvel Cinematic Universe, not microcontrollers.
Microcontrollers are designed to be programmed to do a limited number of very specific tasks, rather than being general-purpose computers like PCs or Macs. Except for very specific scenarios, you can't connect a monitor, keyboard, and mouse and use them for general-purpose tasks.
Microcontroller developer kits usually come with additional sensors and actuators on board. Most boards have one or more LEDs you can program, along with other devices such as standard plugs to add more sensors or actuators using various manufacturer ecosystems, or built-in sensors (usually the most popular ones such as temperature sensors). Some microcontrollers have built-in wireless connectivity such as Bluetooth or WiFi, or have additional microcontrollers on the board to add this connectivity.
> 💁 Microcontrollers are usually programmed in C/C++.
### Single-board computers
A single-board computer is a small computing device that has all the elements of a complete computer contained on a single small board. These are devices with specifications close to a desktop or laptop PC or Mac, running a full operating system, but small, using less power, and substantially cheaper.
![A Raspberry Pi 4](../../../images/raspberry-pi-4.jpg)
***Raspberry Pi 4. Michael Henzler / [Wikimedia Commons](https://commons.wikimedia.org/wiki/Main_Page) / [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)***
The Raspberry Pi is one of the most popular single-board computers.
Like a microcontroller, a single-board computer has a CPU, memory, and input/output pins, but it has additional features such as a graphics chip so you can connect monitors, audio outputs, and USB ports for keyboards, mice, and other standard USB devices such as webcams or external storage. Programs are stored on an SD card or hard drive along with an operating system, instead of on a memory chip built into the board.
> 🎓 You can think of a single-board computer as a smaller, cheaper version of a PC or Mac, with the addition of GPIO (general-purpose input/output) pins to interact with sensors and actuators.
Single-board computers are full-featured computers, so they can be programmed in any language. IoT devices are usually programmed in Python.
### Hardware choices for the rest of the lessons
All the subsequent lessons include assignments using an IoT device to interact with the physical world and communicate with the cloud. Each lesson supports 3 device choices - Arduino (using a Seeed Studios Wio Terminal), or a single-board computer, either a physical device (a Raspberry Pi 4) or a virtual single-board computer running on your PC or Mac.
You can read about the hardware needed to complete all the assignments in the [hardware guide](../../../hardware.md).
> 💁 You don't need to buy any IoT hardware to complete the assignments; you can do everything using a virtual single-board computer.
Which hardware you choose is up to you - it depends on what you have available at home or at school, and which programming language you know or plan to learn. Both hardware variants use the same sensor ecosystem, so if you start down one path you can easily switch to the other without having to replace most of your kit. The virtual single-board computer is equivalent to learning on a Raspberry Pi, with most of the code transferable to the Pi if you eventually get a device and sensors.

@ -0,0 +1,201 @@
# Wio Terminal
The [Wio Terminal from Seeed Studios](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) is an Arduino-compatible microcontroller with WiFi connectivity and some built-in sensors and actuators. It also has ports to connect additional sensors and actuators, built around a hardware ecosystem called
[Grove](https://www.seeedstudio.com/category/Grove-c-1003.html).
![A Seeed studios Wio Terminal](../../../images/wio-terminal.png)
## Setup
To use the Wio Terminal, we need to install some free software on our computer. We must also update the Wio Terminal firmware before connecting it to WiFi.
### Task - setup
First, install the required software and update the firmware.
1. Install Visual Studio Code (VS Code). This is the editor we will use to write our device code in C/C++. See the [VS Code documentation](https://code.visualstudio.com?WT.mc_id=academic-17441-jabenn) for details.
> 💁 Another good IDE for Arduino development is the [Arduino IDE](https://www.arduino.cc/en/software). If you already have experience with that IDE, you can use it instead of VS Code and PlatformIO; however, these lessons are based on VS Code.
2. Install the PlatformIO extension for VS Code. See the [PlatformIO extension documentation](https://marketplace.visualstudio.com/items?itemName=platformio.platformio-ide&WT.mc_id=academic-17441-jabenn) for instructions on installing it. This is a VS Code extension that supports programming microcontrollers in C/C++. It relies on the Microsoft C/C++ extension to work with C or C++ code; note that the C/C++ extension is installed automatically when you install PlatformIO.
3. Now connect the Wio Terminal to the computer. It has a USB-C port on the bottom, which we connect to a USB port on our computer. The Wio Terminal comes with a USB-C to USB-A cable. If our computer only has USB-C ports, we will need either a USB-C cable or a USB-A to USB-C adapter.
4. Follow the instructions in the [Wio Terminal Wiki WiFi Overview documentation](https://wiki.seeedstudio.com/Wio-Terminal-Network-Overview/) to set up the Wio Terminal and update the firmware.
## Hello World
Traditionally, when starting out with a new programming language or technology, we write a "Hello World" application - a small application that outputs the text `"Hello World"` - to show that all our tools are working correctly.
The Hello World app for our Wio Terminal will ensure that Visual Studio Code is installed correctly with PlatformIO and is ready for microcontroller development.
### Create a PlatformIO project
Our first task is to create a new project using PlatformIO, configured for the Wio Terminal.
#### Task - create a PlatformIO project
Create a PlatformIO project.
1. Connect the Wio Terminal to the computer.
2. Launch VS Code.
3. The PlatformIO icon will be visible in the side menu bar:
![The Platform IO menu option](../../../images/vscode-platformio-menu.png)
Select this menu item, then select *PIO Home -> Open*
![The Platform IO open option](../../../images/vscode-platformio-home-open.png)
4. From the Welcome screen, click the **+ New Project** button.
![The new project button](../../../images/vscode-platformio-welcome-new-button.png)
5. Configure the project in the *Project Wizard*:
1. Name the project `nightlight`.
1. From the *Board* dropdown, type `WIO` to filter the boards, then select *Seeeduino Wio Terminal*.
1. Leave the *Framework* as *Arduino*.
1. Either leave *Use default location* checked, or uncheck it and select a location for the project.
1. Click the **Finish** button.
![The completed project wizard](../../../images/vscode-platformio-nightlight-project-wizard.png)
PlatformIO will now download the components it needs to compile code for the Wio Terminal and create the project. This can take a few minutes.
### Investigate the PlatformIO project
The VS Code explorer will show a number of files and folders created by the PlatformIO wizard.
#### Folders
* `.pio` - this folder contains temporary data that PlatformIO may need, such as libraries or compiled code. It is re-created automatically if deleted, and it doesn't need to be added to source code control if you share the project on a site such as GitHub.
* `.vscode` - this folder contains the configuration used by VS Code and PlatformIO. It is re-created automatically if deleted, and it doesn't need to be added to source code control if you share the project on a site such as GitHub.
* `include` - this folder is for external header files needed when adding additional libraries to our code. We won't use this folder in these assignments.
* `lib` - this folder is for external libraries that we will call from our code. We won't use this folder in these assignments.
* `src` - this folder contains our main source code, initially a single file - main.cpp
* `test` - this folder is where we will put the unit tests for our code.
#### Files
* `main.cpp` - this file in the src folder is the entry point for our application. Open the file, and it will contain:
```cpp
#include <Arduino.h>
void setup() {
// put your setup code here, to run once:
}
void loop() {
// put your main code here, to run repeatedly:
}
```
When the device starts up, the Arduino framework runs the `setup` function once, then runs the `loop` function repeatedly until the device is powered off.
* `.gitignore` - this file lists the files and directories to ignore when adding our code to git source code control, such as when uploading to a GitHub repository.
* `platformio.ini` - this file contains the configuration for our device and app. Open it and you will see:
```ini
[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino
```
The `[env:seeed_wio_terminal]` section has the configuration for the Wio Terminal. We can have more than one `env` section so that our code can be compiled for more than one board, as shown in the example after the list below.
The other values match the configuration from the project wizard:
* `platform = atmelsam` defines the hardware the Wio Terminal uses (an ATSAMD51-based microcontroller)
* `board = seeed_wio_terminal` defines the type of microcontroller (the Wio Terminal)
* `framework = arduino` defines that our project uses the Arduino framework.
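As an illustration of multiple environments, a hypothetical `platformio.ini` with a second board added might look like the following; the `esp32dev` environment is only an example and is not needed for this project:
```ini
; Existing Wio Terminal environment created by the project wizard
[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino

; Hypothetical second environment, letting the same code build for an ESP32 dev board
[env:esp32dev]
platform = espressif32
board = esp32dev
framework = arduino
```
With more than one environment defined, PlatformIO can build the project once per `env` section, or you can pick a single environment when uploading.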
### Write the Hello World app
Now we are ready to write the Hello World app.
#### Task - write the Hello World app
Write the Hello World app.
1. Open the `main.cpp` file in VS Code.
1. Change the code so that it matches the following:
```cpp
#include <Arduino.h>
void setup()
{
Serial.begin(9600);
while (!Serial)
; // Wait for Serial to be ready
delay(1000);
}
void loop()
{
Serial.println("Hello World");
delay(5000);
}
```
The `setup` function initializes a connection to the serial port - the USB port that connects our computer to the Wio Terminal. The `9600` parameter is the [baud rate](https://wikipedia.org/wiki/Symbol_rate) (also known as the symbol rate), the speed at which data passes over the serial port in bits per second. This setting means 9,600 bits (0s and 1s) of data are sent each second. It then waits for the serial port to be ready.
The `loop` function sends the characters of the line `Hello World`, plus a new line character, to the serial port. It then sleeps for 5,000 milliseconds. Once the loop ends, it runs again, and keeps running for as long as the microcontroller is powered on.
1. Build the code and upload it to the Wio Terminal
1. Open the VS Code command palette.
1. Type `PlatformIO Upload` to find the upload option, then select *PlatformIO: Upload*.
![The PlatformIO upload option in the command palette](../../../images/vscode-platformio-upload-command-palette.png)
If needed, PlatformIO will automatically build the code before uploading it.
1. The code will be compiled and uploaded to the Wio Terminal.
> 💁 If we are using macOS, we will see a *DISK NOT EJECTED PROPERLY* notification. This appears because the Wio Terminal gets mounted as a drive as part of the flashing process, and it is disconnected when the compiled code is written to the device. We can ignore this notification.
⚠️ If we get an error that the upload port is unavailable, first check that the Wio Terminal is connected to the computer and that the switch on the left side of the screen is turned on. The green light at the bottom should be on. If the error still appears, pull the on/off switch down twice in quick succession to put the Wio Terminal into bootloader mode, then try the upload again.
The Wio Terminal has a serial monitor that can see the data sent from the Wio Terminal over the USB port. We can monitor the data sent by the `Serial.println("Hello World");` command.
1. Open the VS Code command palette.
1. Type `PlatformIO Serial` to find the serial monitor option, then select *PlatformIO: Serial Monitor*.
![The PlatformIO Serial Monitor option in the command palette](../../../images/vscode-platformio-serial-monitor-command-palette.png)
A new terminal will open, and any data sent over the serial port will be shown in it:
```output
> Executing task: platformio device monitor <
--- Available filters and text transformations: colorize, debug, default, direct, hexlify, log2file, nocontrol, printable, send_on_enter, time
--- More details at http://bit.ly/pio-monitor-filters
--- Miniterm on /dev/cu.usbmodem101 9600,8,N,1 ---
--- Quit: Ctrl+C | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
Hello World
Hello World
```
`Hello World` will print to the serial monitor every 5 seconds.
> 💁 You can find this code in the [code/wio-terminal](code/wio-terminal) folder.
😀 Our 'Hello World' was a success!

@ -1,8 +1,8 @@
# Interact with the physical world with sensors and actuators
Add a sketchnote if possible/appropriate
![A sketchnote overview of this lesson](../../../sketchnotes/lesson-3.png)
![Embed a video here if available](video-url)
> Sketchnote by [Nitya Narasimhan](https://github.com/nitya). Click the image for a larger version.
## Pre-lecture quiz

@ -0,0 +1,227 @@
# Interact with the physical world with sensors and actuators
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/5)
## Introduction
This lesson introduces two of the important concepts for your IoT device - sensors and actuators. You will also get hands-on with both, adding a light sensor to your IoT project, then adding an LED controlled by light levels, effectively building a nightlight.
In this lesson we'll cover:
* [What are sensors?](#what-are-sensors)
* [Use a sensor](#use-a-sensor)
* [Sensor types](#sensor-types)
* [What are actuators?](#what-are-actuators)
* [Use an actuator](#use-an-actuator)
* [Actuator types](#actuator-types)
## What are sensors?
Sensors are hardware devices that sense the physical world - that is, they measure one or more properties around them and send the information to an IoT device. Sensors cover a huge range of devices, as there are so many things that can be measured, from natural properties such as air temperature to physical interactions such as movement.
Some common sensors include:
* Temperature sensors - these sense the air temperature, or the temperature of whatever they are immersed in. For hobbyists and developers, these are often combined with air pressure and humidity in a single sensor.
* Buttons - these sense when they have been pressed.
* Light sensors - these detect light levels and can sense specific colors, UV light, IR light, or general visible light.
* Cameras - these sense a visual representation of the world by taking a photograph or streaming video.
* Accelerometers - these sense movement in multiple directions.
* Microphones - these sense sound, either general sound levels or directional sound.
✅ Do some research. What sensors does your phone have?
All sensors have one thing in common - they convert whatever they sense into an electrical signal that can be interpreted by an IoT device. How this electrical signal is interpreted depends on the sensor, as well as the communication protocol used to talk to the IoT device.
## Use a sensor
Follow the relevant guide below to add a sensor to your IoT device:
* [Arduino - Wio Terminal](wio-terminal-sensor.md)
* [Single-board computer - Raspberry Pi](pi-sensor.md)
* [Single-board computer - Virtual device](virtual-device-sensor.md)
## Sensor types
Sensors are either analog or digital.
### Analog sensors
Some of the most basic sensors are analog sensors. These sensors receive a voltage from the IoT device, the sensor components adjust this voltage, and the voltage returned from the sensor is measured to give the sensor value.
> 🎓 Voltage is a measure of how much push there is to move electricity from one place to another, such as from the positive terminal of a battery to the negative terminal. For example, a standard AA battery is 1.5V (V is the symbol for volts) and can push electricity with a force of 1.5V from its positive terminal to its negative terminal. Different electrical hardware requires different voltages to work; for example, an LED can light with between 2V and 3V, but a 100W filament lightbulb needs 240V. You can read more about voltage on the [voltage page on Wikipedia](https://wikipedia.org/wiki/Voltage).
One example of an analog sensor is a potentiometer. This is a dial that you can rotate between two positions, and the sensor measures the rotation.
![A potentiometer set to a mid point being sent 5 volts returning 3.8 volts](../../../../images/potentiometer.png)
***A potentiometer. Microcontroller by Template / dial by Jamie Dickinson - all from the [Noun Project](https://thenounproject.com)***
The IoT device will send an electrical signal to the potentiometer at a voltage, such as 5 volts (5V). As the potentiometer is adjusted, it changes the voltage that comes out of the other side. Imagine you have a potentiometer labelled as a dial that goes from 0 to [11](https://wikipedia.org/wiki/Up_to_eleven), such as a volume knob on an amplifier. When the potentiometer is in the full off position (0), 0V (0 volts) will come out. When it is in the full on position (11), 5V (5 volts) will come out.
> 🎓 This is an oversimplification, and you can read more about potentiometers and variable resistors on the [potentiometer Wikipedia page](https://wikipedia.org/wiki/Potentiometer).
The voltage that comes out of the sensor is then read by the IoT device, and the device can respond to it. Depending on the sensor, this voltage can be an arbitrary value or can map to a standard unit. For example, an analog temperature sensor based on a [thermistor](https://wikipedia.org/wiki/Thermistor) changes its resistance depending on the temperature. The output voltage can then be converted to a temperature in Kelvin, and from there to °C or °F, by calculations in code.
✅ What do you think would happen if the sensor returned a higher voltage than was sent (for example, coming from an external power supply)? ⛔️ DO NOT test this out.
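To make the "calculations in code" idea concrete, here is a minimal Arduino-style sketch of how such a conversion could look, assuming a hypothetical 10 kΩ NTC thermistor (B = 3950) wired in a voltage divider with a 10 kΩ series resistor and read through a 10-bit ADC; the pin name and constants are illustrative, not part of this lesson's hardware.
```cpp
#include <Arduino.h>
#include <math.h>

// All values below are assumptions for illustration - check your sensor's datasheet.
const int THERMISTOR_PIN = A0;              // analog pin the divider is wired to (hypothetical)
const float SERIES_RESISTOR = 10000.0;      // fixed resistor between VCC and the sensing node, in ohms
const float NOMINAL_RESISTANCE = 10000.0;   // thermistor resistance at 25 °C, in ohms
const float NOMINAL_TEMPERATURE_K = 298.15; // 25 °C expressed in Kelvin
const float B_COEFFICIENT = 3950.0;         // B (beta) value from the thermistor datasheet

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(THERMISTOR_PIN); // 0-1023 on a 10-bit ADC

  // Work out the thermistor resistance from the voltage divider reading
  float resistance = SERIES_RESISTOR * reading / (1023.0 - reading);

  // B-parameter equation: 1/T = 1/T0 + (1/B) * ln(R/R0)
  float temperatureK = 1.0 / (1.0 / NOMINAL_TEMPERATURE_K +
                              log(resistance / NOMINAL_RESISTANCE) / B_COEFFICIENT);
  float temperatureC = temperatureK - 273.15; // Kelvin to Celsius

  Serial.println(temperatureC);
  delay(1000);
}
```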
#### Analog to digital conversion
IoT devices are digital - they can't work with analog values, they only work with 0s and 1s. This means that analog sensor values need to be converted to a digital signal before they can be processed. Many IoT devices have analog-to-digital converters (ADCs) to convert analog inputs to digital representations of their value. Sensors can also work with ADCs via a connector board. For example, in the Seeed Grove ecosystem with a Raspberry Pi, analog sensors connect to specific ports on a 'hat' that sits on the Pi and connects to the Pi's GPIO pins, and this hat has an ADC to convert the voltage into a digital signal that can be sent over the Pi's GPIO pins.
Imagine you have an analog light sensor connected to an IoT device that uses 3.3V, and it returns a value of 1V. This 1V doesn't mean anything in the digital world, so it needs to be converted. The voltage will be converted to an analog value using a scale that depends on the device and the sensor. One example is the Seeed Grove light sensor, which outputs values from 0 to 1,023. For this sensor running at 3.3V, a 1V output would be a value of around 300. The IoT device can't handle 300 as an analog value, so the value would be converted to `0000000100101100`, the binary representation of 300, by the Grove hat. This would then be processed by the IoT device.
✅ If you don't know binary, then do a small amount of research to learn how numbers are represented by 0s and 1s. The [BBC Bitesize introduction to binary lesson](https://www.bbc.co.uk/bitesize/guides/zwsbwmn/revision/1) is a great place to start.
From a coding perspective, all of this is usually handled by the libraries that come with the sensors, so you don't need to worry about the conversion yourself. For the Grove light sensor you would use the Python library and call the `light` property, or use the Arduino library and call `analogRead` to get a value of 300.
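As a rough sketch of what that looks like on the Arduino side, the snippet below reads an analog light value and also converts it back to an approximate voltage; `WIO_LIGHT` is assumed here to be the Wio Terminal's built-in light sensor pin, and the 10-bit range and 3.3V reference are assumptions that may differ on other boards or sensors.
```cpp
#include <Arduino.h>

void setup() {
  Serial.begin(9600);
  while (!Serial)
    ; // Wait for Serial to be ready
  pinMode(WIO_LIGHT, INPUT); // WIO_LIGHT - built-in light sensor pin (assumed board definition)
}

void loop() {
  int raw = analogRead(WIO_LIGHT);   // raw ADC reading, roughly 0-1023 on a 10-bit ADC
  float volts = raw * 3.3 / 1023.0;  // convert back to an approximate voltage for reference
  Serial.print("Raw value: ");
  Serial.print(raw);
  Serial.print(" (~");
  Serial.print(volts);
  Serial.println(" V)");
  delay(1000);
}
```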
### Digital sensors
Digital sensors, like analog sensors, detect the world around them using changes in electrical voltage. The difference is that they output a digital signal, either by measuring only two states or by using a built-in ADC. Digital sensors are becoming more and more common to avoid the need for an ADC either on a connector board or on the IoT device itself.
The simplest digital sensor is a button or switch. This is a sensor with two states, on or off.
![A button is sent 5 volts. When not pressed it returns 0 volts, when pressed it returns 5 volts](../../../../images/button.png)
***A button. Microcontroller by Template / Button by Dan Hetteix - all from the [Noun Project](https://thenounproject.com)***
Pins on IoT devices, such as GPIO pins, can measure this signal directly as a 0 or a 1. If the voltage sent is the same as the voltage returned, the value read is 1; otherwise the value read is 0. There is no need to convert the signal - it can only be 1 or 0 (see the sketch after the two cases below).
> 💁 Voltages are never exact, especially since the components in a sensor will have some resistance, so there is usually a tolerance. For example, the GPIO pins on a Raspberry Pi work at 3.3V, and read a return signal above 1.8V as a 1, and below 1.8V as a 0.
* 3.3V goes into the button. The button is off, so 0V comes out, giving a value of 0
* 3.3V goes into the button. The button is on, so 3.3V comes out, giving a value of 1
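A minimal sketch of reading such a two-state sensor on an Arduino-style board is shown below; the pin name and pull-up wiring are assumptions (the Wio Terminal's built-in buttons are used purely as an example), not part of this lesson's assignment.
```cpp
#include <Arduino.h>

// BUTTON_PIN is a placeholder - use the pin your button is wired to.
// On the Wio Terminal the built-in buttons have names like WIO_KEY_A (assumed here).
const int BUTTON_PIN = WIO_KEY_A;

void setup() {
  Serial.begin(9600);
  while (!Serial)
    ; // Wait for Serial to be ready
  pinMode(BUTTON_PIN, INPUT_PULLUP); // assumes the button pulls the pin low when pressed
}

void loop() {
  int state = digitalRead(BUTTON_PIN); // reads either HIGH (1) or LOW (0) - nothing in between
  Serial.println(state == LOW ? "Button pressed" : "Button released");
  delay(200);
}
```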
More advanced digital sensors read analog values, then convert them using built-in ADCs to digital signals. For example, a digital temperature sensor will still use a thermocouple in the same way as an analog sensor, and will still measure the change in voltage caused by the resistance of the thermocouple at the current temperature. Instead of returning an analog value and relying on the device or connector board to convert it, an ADC built into the sensor converts the value and sends it as a series of 0s and 1s to the IoT device. These 0s and 1s are sent in the same way as the digital signal for a button, with 1 being full voltage and 0 being 0V.
![A digital temperature sensor converting an analog reading to binary data with 0 as 0 volts and 1 as 5 volts before sending it to an IoT device](../../../../images/temperature-as-digital.png)
***A digital temperature sensor. Temperature by Vectors Market / Microcontroller by Template - all from the [Noun Project](https://thenounproject.com)***
Sending digital data allows sensors to become more sophisticated and send more detailed data, even encrypted data for secure sensors. One example is a camera. This is a sensor that captures an image and sends it as digital data containing that image, usually in a compressed format such as JPEG, to be read by the IoT device. It can even stream video by capturing images and sending either the complete image frame by frame or a compressed video stream.
## What are actuators?
Actuators are the opposite of sensors - they convert an electrical signal from your IoT device into an interaction with the physical world, such as emitting light or sound, or moving a motor.
Some common actuators include:
* LED - this emits light when turned on
* Speaker - this makes sound based on the signal sent to it, from a basic buzzer to an audio speaker that can play music
* Stepper motor - this converts a signal into a defined amount of rotation, such as turning a dial by 90°
* Relay - these are switches that can be turned on or off by an electrical signal. They allow a small voltage from an IoT device to switch larger voltages.
* Screens - these are more complex actuators that show information on a multi-segment display. They vary from simple LED displays to high-resolution video monitors.
✅ Do some research. What actuators does your phone have?
## Use an actuator
Follow the relevant guide below to add an actuator to your IoT device, controlled by the sensor, to build an IoT nightlight. It will gather light levels from the light sensor, and use an actuator in the form of an LED to emit light when the detected light level is too low.
![A flow chart of the assignment showing light levels being read and checked, and the LED begin controlled](../../../../images/assignment-1-flow.png)
***A flow chart of the assignment showing light levels being read and checked, and the LED begin controlled. ldr by Eucalyp / LED by abderraouf omara - all from the [Noun Project](https://thenounproject.com)***
* [Arduino - Wio Terminal](wio-terminal-actuator.md)
* [Single-board computer - Raspberry Pi](pi-actuator.md)
* [Single-board computer - Virtual device](virtual-device-actuator.md)
## Actuator types
Like sensors, actuators are either analog or digital.
### Analog actuators
Analog actuators take an analog signal and convert it into some kind of interaction, where the interaction changes based on the voltage supplied.
One example is a dimmable light, like the ones you might have in your house. The amount of voltage supplied to the light determines how bright it is.
![A light dimmed at a low voltage and brighter at a higher voltage](../../../../images/dimmable-light.png)
***A light controlled by the voltage output of an IoT device. Idea by Pause08 / Microcontroller by Template - all from the [Noun Project](https://thenounproject.com)***
Like sensors, the actual IoT device works on digital signals, not analog. This means that to send an analog signal, the IoT device needs a digital-to-analog converter (DAC), either on the IoT device directly or on a connector board. This converts the 0s and 1s from the IoT device into an analog voltage that the actuator can use.
✅ What do you think would happen if the IoT device sent a higher voltage than the actuator can handle? ⛔️ DO NOT test this out.
#### Pulse width modulation
Another option for converting digital signals from an IoT device into an analog signal is pulse width modulation (PWM). This involves sending lots of short digital pulses that act as if they were an analog signal.
For example, you can use PWM to control the speed of a motor.
Imagine you are controlling a motor with a 5V supply. You send a short pulse to your motor, switching the voltage high (5V) for two hundredths of a second (0.02s). In that time your motor can rotate a tenth of a rotation, or 36°. The signal then pauses for two hundredths of a second (0.02s), sending a low signal (0V). Each on/off cycle lasts 0.04s. The cycle then repeats.
![Pule width modulation rotation of a motor at 150 RPM](../../../../images/pwm-motor-150rpm.png)
***PWM rotation of a motor at 150RPM. motor by Bakunetsu Kaito / Microcontroller by Template - all from the [Noun Project](https://thenounproject.com)***
This means in one second you have 25 pulses of 5V, each lasting 0.02s, that rotate the motor, each followed by a 0.02s pause at 0V that doesn't rotate the motor. Each pulse rotates the motor one tenth of a rotation, meaning the motor completes 2.5 rotations per second. You've used a digital signal to rotate the motor at 2.5 rotations per second, or 150 [rpm](https://wikipedia.org/wiki/Revolutions_per_minute), a non-standard measure of rotational speed.
```output
25 pulses per second x 0.1 rotations per pulse = 2.5 rotations per second
2.5 rotations per second x 60 seconds in a minute = 150 rpm
```
> 🎓 When a PWM signal is on for half the time and off for half the time, this is referred to as a [50% duty cycle](https://wikipedia.org/wiki/Duty_cycle). Duty cycles are measured as the percentage of time the signal is in the on state compared to the off state.
![Pule width modulation rotation of a motor at 75 RPM](../../../../images/pwm-motor-75rpm.png)
***PWM rotation of a motor at 75RPM. motor by Bakunetsu Kaito / Microcontroller by Template - all from the [Noun Project](https://thenounproject.com)***
You can change the speed of the motor by changing the size of the pulses. For example, with the same motor you can keep the same cycle time of 0.04s, with the on pulse halved to 0.01s and the off pulse increased to 0.03s. You have the same number of pulses per second (25), but each on pulse is half the length. A half-length pulse only rotates the motor one twentieth of a rotation, and at 25 pulses a second it will complete 1.25 rotations per second, or 75rpm. By changing the pulse length of a digital signal, you've halved the speed of the analog motor.
```output
25 pulses per second x 0.05 rotations per pulse = 1.25 rotations per second
1.25 rotations per second x 60 seconds in a minute = 75 rpm
```
✅ How would you keep the motor rotation smooth, especially at low speeds? Would you use a small number of long pulses with long pauses, or lots of very short pulses with very short pauses?
> 💁 Some sensors also use PWM to convert analog signals to digital signals.
> 🎓 You can read more about pulse width modulation on the [pulse width modulation Wikipedia page](https://wikipedia.org/wiki/Pulse-width_modulation).
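For a sense of how this looks in code, the Arduino core exposes PWM through `analogWrite`, where the value sets the duty cycle. The sketch below is only an illustration: `MOTOR_PIN` is a hypothetical PWM-capable pin assumed to drive a motor through a suitable driver circuit, never directly from a GPIO pin.
```cpp
#include <Arduino.h>

const int MOTOR_PIN = 3; // hypothetical PWM-capable pin wired to a motor driver

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  // analogWrite takes a duty cycle from 0 (always off) to 255 (always on)
  analogWrite(MOTOR_PIN, 255); // 100% duty cycle - full speed
  delay(2000);
  analogWrite(MOTOR_PIN, 128); // ~50% duty cycle - roughly half speed
  delay(2000);
  analogWrite(MOTOR_PIN, 64);  // ~25% duty cycle - slower still
  delay(2000);
}
```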
### Digital actuators
Digital actuators, like digital sensors, either have two states controlled by a high or low voltage, or have a DAC built in so they can convert a digital signal to an analog one.
One simple digital actuator is an LED. When a device sends a digital signal of 1, a high voltage is sent that lights the LED. When a digital signal of 0 is sent, the voltage drops to 0V and the LED turns off.
![A LED is off at 0 volts and on at 5V](../../../../images/led.png)
***An LED turning on and off depending on voltage. LED by abderraouf omara / Microcontroller by Template - all from the [Noun Project](https://thenounproject.com)***
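A two-state actuator like this maps directly onto `digitalWrite` in Arduino code. In the sketch below, `LED_BUILTIN` (defined on most Arduino-compatible boards) is assumed to refer to an on-board LED; it is shown purely to illustrate the on/off behavior described above.
```cpp
#include <Arduino.h>

void setup() {
  pinMode(LED_BUILTIN, OUTPUT); // LED_BUILTIN - on-board LED pin defined by most Arduino boards
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH); // send a 1 - high voltage, LED on
  delay(1000);
  digitalWrite(LED_BUILTIN, LOW);  // send a 0 - 0V, LED off
  delay(1000);
}
```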
✅ What other simple 2-state actuators can you think of? One example is a solenoid, an electromagnet that can be activated to do things like move a door bolt to lock or unlock a door.
More advanced digital actuators, such as screens, require the digital data to be sent in certain formats. They usually come with libraries that make it easier to send the correct data to control them.
---
## 🚀 Challenge
The challenge in the last two lessons was to list as many IoT devices as you can that are in your home, school, or workplace, and decide whether they are built around microcontrollers or single-board computers, or even a mixture of both.
For every device you listed, what sensors and actuators are they connected to? What is the purpose of each sensor and actuator connected to these devices?
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/6)
## Review & Self Study
* Read about electricity and circuits on [ThingLearn](http://www.thinglearn.com/essentials/).
* Read about the different types of temperature sensors in the [Seeed studios temperature sensors guide](https://www.seeedstudio.com/blog/2019/10/14/temperature-sensors-for-arduino-projects/).
* Read about LEDs on the [LED Wikipedia page](https://wikipedia.org/wiki/Light-emitting_diode).
## Assignment
[Research sensors and actuators](assignment.md)

@ -0,0 +1,21 @@
# Research sensors and actuators
## Instructions
This lesson covered sensors and actuators. Research and describe one sensor and one actuator that can be used with an IoT developer kit, including:
* What it does
* The electronics/hardware used inside
* Whether it is analog or digital
* What the units and range of its inputs or measurements are
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Describe a sensor | Described a sensor including details for all 4 of the sections listed above | Described a sensor, but was only able to provide 2-3 of the sections above | Described a sensor, but was only able to provide 1 of the sections above |
| Describe an actuator | Described an actuator including details for all 4 of the sections listed above | Described an actuator, but was only able to provide 2-3 of the sections above | Described an actuator, but was only able to provide 1 of the sections above |

@ -0,0 +1,17 @@
# Research sensors and actuators
## Instructions
This lesson covered sensors and actuators. Describe one sensor and one actuator that could be used with an IoT developer kit, including:
* What it does
* The electronics/hardware used inside
* Whether it is analog or digital
* What the units and range of its inputs or measurements are
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Describe a sensor | Described a sensor including details for all 4 of the sections listed above | Described a sensor, but was only able to provide 2-3 of the sections above | Described a sensor, but was only able to provide 1 of the sections above |
| Describe an actuator | Described an actuator including details for all 4 of the sections listed above | Described an actuator, but was only able to provide 2-3 of the sections above | Described an actuator, but was only able to provide 1 of the sections above |

@ -1,8 +1,8 @@
# Connect your device to the Internet
Add a sketchnote if possible/appropriate
![A sketchnote overview of this lesson](../../../sketchnotes/lesson-4.png)
![Embed a video here if available](video-url)
> Sketchnote by [Nitya Narasimhan](https://github.com/nitya). Click the image for a larger version.
## Pre-lecture quiz

@ -1,9 +1,5 @@
# Predict plant growth with IoT
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/9)

@ -1,9 +1,5 @@
# Detect soil moisture
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/11)

@ -1,9 +1,5 @@
# Automated plant watering
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/13)

@ -1,9 +1,5 @@
# Migrate your plant to the cloud
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/15)

@ -1,9 +1,5 @@
# Migrate your application logic to the cloud
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/17)

@ -1,9 +1,5 @@
# Keep your plant secure
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/19)

@ -1,9 +1,5 @@
# Location tracking
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/21)

@ -1,9 +1,5 @@
# Store location data
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/23)
@ -18,6 +14,7 @@ In this lesson we'll cover:
* [Structured and unstructured data](#structured-and-unstructured-data)
* [Send GPS data to an IoT Hub](#send-gps-data-to-an-iot-hub)
* [Hot, warm, and cold paths](#hot-warm-and-cold-paths)
* [Handle GPS events using serverless code](#handle-gps-events-using-serverless-code)
* [Azure Storage Accounts](#azure-storage-accounts)
* [Connect your serverless code to storage](#connect-your-serverless-code-to-storage)
@ -44,6 +41,8 @@ Imagine you were adding IoT devices to a fleet of vehicles for a large commercia
This data can change constantly. For example, if the IoT device is in a truck cab, then the data it sends may change as the trailer changes, for example only sending temperature data when a refrigerated trailer is used.
✅ What other IoT data might be captured? Think about the kinds of loads trucks can carry, as well as maintenance data.
This data varies from vehicle to vehicle, but it all gets sent to the same IoT service for processing. The IoT service needs to be able to process this unstructured data, storing it in a way that allows it to be searched or analyzed, even though the structure of the data varies.
### SQL vs NoSQL storage
@ -58,10 +57,14 @@ The first databases were Relational Database Management Systems (RDBMS), or rela
For example, if you stored a user's personal details in a table, you would have some kind of internal unique ID per user that is used in a row in a table that contains the user's name and address. If you then wanted to store other details about that user, such as their purchases, in another table, you would have one column in the new table for that user's ID. When you look up a user, you can use their ID to get their personal details from one table, and their purchases from another.
SQL databases are ideal for storing structured data, and for when you want to ensure the data matches your schema. Some well known SQL databases are Microsoft SQL Server, MySQL, and PostgreSQL.
SQL databases are ideal for storing structured data, and for when you want to ensure the data matches your schema.
✅ If you haven't used SQL before, take a moment to read up on it on the [SQL page on Wikipedia](https://wikipedia.org/wiki/SQL).
Some well known SQL databases are Microsoft SQL Server, MySQL, and PostgreSQL.
✅ Do some research: Read up on some of these SQL databases and their capabilities.
#### NoSQL database
NoSQL databases are called NoSQL because they don't have the same rigid structure of SQL databases. They are also known as document databases as they can store unstructured data such as documents.
@ -74,6 +77,8 @@ NoSQL database do not have a pre-defined schema that limits how data is stored,
Some well known NoSQL databases include Azure CosmosDB, MongoDB, and CouchDB.
✅ Do some research: Read up on some of these NoSQL databases and their capabilities.
In this lesson, you will be using NoSQL storage to store IoT data.
## Send GPS data to an IoT Hub
@ -136,9 +141,33 @@ message = Message(json.dumps(message_json))
Run your device code and ensure messages are flowing into IoT Hub using the `az iot hub monitor-events` CLI command.
## Hot, warm, and cold paths
Data that flows from an IoT device to the cloud is not always processed in real time. Some data needs real time processing, other data can be processed a short time later, and other data can be processed much later. The flow of data to different services that process the data at different times is referred to as hot, warm, and cold paths.
### Hot path
The hot path refers to data that needs to be processed in real time or near real time. You would use hot path data for alerts, such as getting alerts that a vehicle is approaching a depot, or that the temperature in a refrigerated truck is too high.
To use hot path data, your code would respond to events as soon as they are received by your cloud services.
### Warm path
The warm path refers to data that can be processed a short while after being received, for example for reporting or short term analytics. You would use warm path data for daily reports on vehicle mileage, using data gathered the previous day.
Warm path data is stored once it is received by the cloud service inside some kind of storage that can be quickly accessed.
### Cold path
The cold path refers to historic data, storing data for the long term to be processed whenever needed. For example, you could use the cold path to get annual mileage reports for vehicles, or run analytics on routes to find the most optimal route to reduce fuel costs.
Cold path data is stored in data warehouses - databases designed for storing large amounts of data that will never change and can be queried quickly and easily. You would normally have a regular job in your cloud application that would run at a regular time each day, week, or month to move data from warm path storage into the data warehouse.
✅ Think about the data you have captured so far in these lessons. Is it hot, warm or cold path data?
## Handle GPS events using serverless code
Once data is flowing into your IoT Hub, you can write some serverless code to listen for events published to the Event-Hub compatible endpoint.
Once data is flowing into your IoT Hub, you can write some serverless code to listen for events published to the Event-Hub compatible endpoint. This is the warm path - this data will be stored and used in the next lesson for reporting on the journey.
![Sending GPS telemetry from an IoT device to IoT Hub, then to Azure Functions via an event hub trigger](../../../images/gps-telemetry-iot-hub-functions.png)

@ -1,8 +1,6 @@
# Visualize location data
Add a sketchnote if possible/appropriate
This video gives an overview of OAzure Maps with IoT, a service that will be covered in this lesson.
This video gives an overview of Azure Maps with IoT, a service that will be covered in this lesson.
[![Azure Maps - The Microsoft Azure Enterprise Location Platform](https://img.youtube.com/vi/P5i2GFTtb2s/0.jpg)](https://www.youtube.com/watch?v=P5i2GFTtb2s)

@ -1,7 +1,5 @@
# Geofences
Add a sketchnote if possible/appropriate
This video gives an overview of geofences and how to use them in Azure Maps, topics that will be covered in this lesson:
[![Geofencing with Azure Maps from the Microsoft Developer IoT show](https://img.youtube.com/vi/nsrgYhaYNVY/0.jpg)](https://www.youtube.com/watch?v=nsrgYhaYNVY)
@ -41,7 +39,7 @@ There are many reasons why you would want to know that a vehicle is inside or ou
* Preparation for unloading - getting a notification that a vehicle has arrived on-site allows a crew to be prepared to unload the vehicle, reducing vehicle waiting time. This can allow a driver to make more deliveries in a day with less waiting time.
* Tax compliance - some countries, such as New Zealand, charge road taxes for diesel vehicles based on the vehicle weight when driving on public roads only. Using geofences allows you to track the mileage driven on public roads as opposed to private roads on sites such as farms or logging areas.
* Monitoring theft - if a vehicle should only remain in a certain area such as on a farm, and it leaves the geofence, it might be being stolen.
* Monitoring theft - if a vehicle should only remain in a certain area such as on a farm, and it leaves the geofence, it might have been stolen.
* Location compliance - some parts of a work site, farm or factory may be off-limits to certain vehicles, such as keeping vehicles that carry artificial fertilizers and pesticides away from fields growing organic produce. If a geofence is entered, then a vehicle is outside of compliance and the driver can be notified.
✅ Can you think of other uses for geofences?
@ -212,7 +210,7 @@ For example, imagine GPS readings showing a vehicle was driving along a road tha
![A GPS trail showing a vehicle passing the Microsoft campus on the 520, with GPS readings along the road except for one on the campus, inside a geofence](../../../images/geofence-crossing-inaccurate-gps.png)
In the above image, there is a geofence over part of the Microsoft campus. The red line shows a truck driving along the 520, with circles to show the GPS readings. Most of these are accurate and along the 520, with one inaccurate reading inside the geofence. The is no way that reading can be correct - there are no roads for the truck to suddenly divert from the 520 onto campus, then back onto the 520. The code that checks this geofence will need to take the previous readings into consideration before acting on the results of the geofence test.
In the above image, there is a geofence over part of the Microsoft campus. The red line shows a truck driving along the 520, with circles to show the GPS readings. Most of these are accurate and along the 520, with one inaccurate reading inside the geofence. There is no way that reading can be correct - there are no roads for the truck to suddenly divert from the 520 onto campus, then back onto the 520. The code that checks this geofence will need to take the previous readings into consideration before acting on the results of the geofence test.
✅ What additional data would you need to check to see if a GPS reading could be considered correct?
@ -237,7 +235,7 @@ In the above image, there is a geofence over part of the Microsoft campus. The r
1. Use curl to make a GET request to this URL:
```sh
curl --request GET <URL>
curl --request GET '<URL>'
```
> 💁 If you get a response code of `BadRequest`, with an error of:
@ -255,7 +253,7 @@ In the above image, there is a geofence over part of the Microsoft campus. The r
"geometries": [
{
"deviceId": "gps-sensor",
"udId": "1ffb2047-6757-8c29-2c3d-da44cec55ff9",
"udId": "7c3776eb-da87-4c52-ae83-caadf980323a",
"geometryId": "1",
"distance": 999.0,
"nearestLat": 47.645875,

@ -1,6 +1,6 @@
# Manufacturing and processing - using IoT to improve the processing of food
Once food reaches a central hub or processing plant, it isn't always just shipped out to supermarkets. A lot of the time the food goes through a number of processing steps, such as sorting by quality. This is a process that used to be manual - it would start in the field when pickers would only pick ripe fruit, then at the factory the fruit would be ride a conveyer belt and employees would manually remove any bruised or rotten fruit. Having picked and sorted strawberries myself as a summer job during school, I can testify that this isn't a fun job.
Once food reaches a central hub or processing plant, it isn't always just shipped out to supermarkets. A lot of the time the food goes through a number of processing steps, such as sorting by quality. This is a process that used to be manual - it would start in the field when pickers would only pick ripe fruit, then at the factory the fruit would ride a conveyer belt and employees would manually remove any bruised or rotten fruit. Having picked and sorted strawberries myself as a summer job during school, I can testify that this isn't a fun job.
More modern setups rely on IoT for sorting. Some of the earliest devices like the sorters from [Weco](https://wecotek.com) use optical sensors to detect the quality of produce, rejecting green tomatoes for example. These can be deployed in harvesters on the farm itself, or in processing plants.
@ -10,7 +10,7 @@ As advances happen in Artificial Intelligence (AI) and Machine Learning (ML), th
In these 4 lessons you'll learn how to train image-based AI models to detect fruit quality, how to use these from an IoT device, and how to run these on the edge - that is on an IoT device rather than in the cloud.
> 💁 These lessons will use some cloud resources. If you don't complete all the lessons in this project, make sure you [Clean up your project](../clean-up.md).
> 💁 These lessons will use some cloud resources. If you don't complete all the lessons in this project, make sure you [clean up your project](../clean-up.md).
## Topics

@ -1,7 +1,5 @@
# Train a fruit quality detector
Add a sketchnote if possible/appropriate
This video gives an overview of the Azure Custom Vision service, a service that will be covered in this lesson.
[![Custom Vision Machine Learning Made Easy | The Xamarin Show](https://img.youtube.com/vi/TETcDLJlWR4/0.jpg)](https://www.youtube.com/watch?v=TETcDLJlWR4)
@ -60,11 +58,11 @@ Traditional programming is where you take data, apply an algorithm to the data,
![Traditional development takes input and an algorithm and gives output. Machine learning uses input and output data to train a model, and this model can take new input data to generate new output](../../../images/traditional-vs-ml.png)
Machine learning turns this around - you start with data and known outputs, and the machine learning tools work out the algorithm. You can then take that algorithm, called a *machine learning model*, and input new data and get new output.
Machine learning turns this around - you start with data and known outputs, and the machine learning algorithm learns from the data. You can then take that trained algorithm, called a *machine learning model* or *model*, and input new data and get new output.
> 🎓 The process of a machine learning tool generating a model is called *training*. The inputs and known outputs are called *training data*.
> 🎓 The process of a machine learning algorithm learning from the data is called *training*. The inputs and known outputs are called *training data*.
For example, you could give a model millions of pictures of unripe bananas as input training data, with the training output set as `unripe`, and millions of ripe banana pictures as training data with the output set as `ripe`. The ML tools will then generate a model. You then give this model a new picture of a banana and it will predict if the new picture is a ripe or an unripe banana.
For example, you could give a model millions of pictures of unripe bananas as input training data, with the training output set as `unripe`, and millions of ripe banana pictures as training data with the output set as `ripe`. The ML algorithm will then create a model based off this data. You then give this model a new picture of a banana and it will predict if the new picture is a ripe or an unripe banana.
> 🎓 The results of ML models are called *predictions*
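This lesson trains the model through the Custom Vision portal, but purely as an illustration of the idea, training an image classifier with the Custom Vision Python SDK would look roughly like the sketch below. The keys, file names and tag names are placeholders, not part of this lesson.

```python
# Illustrative sketch only - the lesson trains the model through the Custom Vision
# portal rather than the SDK. Keys, names and file paths here are placeholders.
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

credentials = ApiKeyCredentials(in_headers={"Training-key": "<training key>"})
trainer = CustomVisionTrainingClient("<endpoint>", credentials)

project = trainer.create_project("fruit-quality-detector")
ripe_tag = trainer.create_tag(project.id, "ripe")
unripe_tag = trainer.create_tag(project.id, "unripe")

# Upload labelled training images - the known outputs are the tags
with open("ripe-banana-1.jpg", "rb") as image:
    trainer.create_images_from_data(project.id, image.read(), [ripe_tag.id])
with open("unripe-banana-1.jpg", "rb") as image:
    trainer.create_images_from_data(project.id, image.read(), [unripe_tag.id])

# Training works out the 'algorithm' - the resulting iteration is the model
iteration = trainer.train_project(project.id)
```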
@ -74,6 +72,8 @@ ML models don't give a binary answer, instead they give probabilities. For examp
The ML model used to detect images like this is called an *image classifier* - it is given labelled images, and then classifies new images based off these labels.
> 💁 This is an over-simplification, and there are many other ways to train models that don't always need labelled outputs, such as unsupervised learning. If you want to learn more about ML, check out [ML for beginners, a 24 lesson curriculum on Machine Learning](https://aka.ms/ML-beginners).
## Train an image classifier
To successfully train an image classifier you need millions of images. As it turns out, once you have an image classifier trained on millions or billions of assorted images, you can re-use it and re-train it using a small set of images and get great results, using a process called *transfer learning*.
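Custom Vision does this transfer learning for you behind the scenes. Purely as an illustration of the idea (and not something this lesson asks you to do), re-using a network pre-trained on millions of images with a general-purpose framework such as PyTorch looks roughly like this sketch:

```python
# Illustrative transfer learning sketch - not part of this lesson
import torch.nn as nn
from torch.optim import Adam
from torchvision import models

# Start from a model pre-trained on millions of assorted images
model = models.resnet18(weights="DEFAULT")

# Freeze the pre-trained layers so their learned features are re-used as-is
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one that predicts just two labels: ripe and unripe
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new final layer is trained, so a small set of images is enough
optimizer = Adam(model.fc.parameters(), lr=0.001)
```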
@ -122,6 +122,8 @@ To use Custom Vision, you first need to create two cognitive services resources
Replace `<location>` with the location you used when creating the Resource Group.
This will create a Custom Vision training resource in your Resource Group. It will be called `fruit-quality-detector-training` and use the `F0` sku, which is the free tier. The `--yes` option means you agree to the terms and conditions of the cognitive services.
> 💁 Use the `S0` sku if you already have a free account using any of the Cognitive Services.
1. Use the following command to create a free Custom Vision prediction resource:

@ -1,7 +1,5 @@
# Check fruit quality from an IoT device
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
@ -66,7 +64,7 @@ When you are happy with an iteration, you can publish it to make it available to
Iterations are published from the Custom Vision portal.
1. Launch the Custom Vision portal at [CustomVision.ai](https://customvision.ai) and sign in if you don't have it open already. Then open your `fruit-detector` project.
1. Launch the Custom Vision portal at [CustomVision.ai](https://customvision.ai) and sign in if you don't have it open already. Then open your `fruit-quality-detector` project.
1. Select the **Performance** tab from the options at the top

@ -2,8 +2,6 @@
<!-- This lesson is still under development -->
Add a sketchnote if possible/appropriate
This video gives an overview of running image classifiers on IoT devices, the topic that is covered in this lesson.
[![Custom Vison AI on Azure IoT Edge](https://img.youtube.com/vi/_K5fqGLO8us/0.jpg)](https://www.youtube.com/watch?v=_K5fqGLO8us)

@ -1,9 +1,5 @@
# Trigger fruit quality detection from a sensor
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/35)

@ -1,7 +1,5 @@
# Train a stock detector
Add a sketchnote if possible/appropriate
This video gives an overview of object detection with the Azure Custom Vision service, a service that will be covered in this lesson.
[![Custom Vision 2 - Object Detection Made Easy | The Xamarin Show](https://img.youtube.com/vi/wtTYSyBUpFc/0.jpg)](https://www.youtube.com/watch?v=wtTYSyBUpFc)
@ -56,7 +54,7 @@ Object detection involves training a model to recognize objects. Instead of givi
When you then use it to predict images, instead of getting back a list of tags and percentages, you get back a list of detected objects, with their bounding box and the probability that the object matches the assigned tag.
> 🎓 *Bounding boxes* are the boxes around an object. They are given using coordinates relative to the image as a whole on a scale of 0-1. For example, if the image is 800 pixels wide, by 600 tall and the object it detected between 400 and 600 pixels along, and 150 and 300 pixels down, the bounding box would have a top/left coordinate of 0.5,0.25, with a width of 0.25 and a height of 0.25. That way no matter what size the image is scaled to, the bounding box starts half way along, and a quarter of the way down, and is a quarter of the width and the height.
> 🎓 *Bounding boxes* are the boxes around an object.
![Object detection of cashew nuts and tomato paste](../../../images/object-detector-cashews-tomato.png)

@ -1,33 +1,169 @@
# Check stock from an IoT device
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/39)
## Introduction
In the previous lesson you learned about the different uses of object detection in retail. You also learned how to train an object detector to identify stock. In this lesson you will learn how to use your object detector from your IoT device to count stock.
In this lesson we'll cover:
* [Stock counting](#stock-counting)
* [Call your object detector from your IoT device](#call-your-object-detector-from-your-iot-device)
* [Bounding boxes](#bounding-boxes)
* [Retrain the model](#retrain-the-model)
* [Count stock](#count-stock)
## Stock counting
Object detectors can be used for stock checking, either counting stock or ensuring stock is where it should be. IoT devices with cameras can be deployed all around the store to monitor stock, starting with hot spots where having items restocked is important, such as areas where small numbers of high value items are stocked.
For example, if a camera is pointing at a set of shelves that can hold 8 cans of tomato paste, and an object detector only detects 7 cans, then one is missing and needs to be restocked.
![7 cans of tomato paste on a shelf, 4 on the top row, 3 on the bottom](../../../images/stock-7-cans-tomato-paste.png)
In the above image, an object detector has detected 7 cans of tomato paste on a shelf that can hold 8 cans. Not only can the IoT device send a notification of the need to restock, but it can even give an indication of the location of the missing item, important data if you are using robots to restock shelves.
> 💁 Depending on the store and popularity of the item, restocking probably wouldn't happen if only 1 can was missing. You would need to build an algorithm that determines when to restock based on your produce, customers and other criteria.
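As a very rough sketch of such a decision (the capacity and threshold below are made up for illustration, not from this lesson):

```python
# Hypothetical restocking check - the capacity and threshold are illustrative only
SHELF_CAPACITY = 8          # how many cans fit on the shelf being monitored
RESTOCK_THRESHOLD = 0.75    # restock once less than 75% of capacity is detected

detected = len(predictions)  # predictions returned by the object detector

if detected < SHELF_CAPACITY * RESTOCK_THRESHOLD:
    print(f'Restock needed: only {detected} of {SHELF_CAPACITY} items detected')
```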
✅ In what other scenarios could you combine object detection and robots?
Sometimes the wrong stock can be on the shelves. This could be human error when restocking, or customers changing their mind on a purchase and putting an item back in the first available space. When this is a non-perishable item such as canned goods, it is an annoyance. If it is a perishable item such as frozen or chilled goods, it can mean the product can no longer be sold, as it might be impossible to tell how long the item was out of the freezer.
Object detection can be used to detect unexpected items, again alerting a human or robot to return the item as soon as it is detected.
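A sketch of this check might compare the detected tags against the tags expected on that shelf (the tag name below is just an example):

```python
# Hypothetical check for unexpected items - the expected tag is illustrative only
expected_tags = {'tomato paste'}

rogue_items = [p for p in predictions if p.tag_name not in expected_tags]

for item in rogue_items:
    print(f'Unexpected item detected: {item.tag_name} ({item.probability * 100:.2f}%)')
```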
![A rogue can of baby corn on the tomato paste shelf](../../../images/stock-rogue-corn.png)
In the above image, a can of baby corn has been put on the shelf next to the tomato paste. The object detector has detected this, allowing the IoT device to notify a human or robot to return the can to its correct location.
## Call your object detector from your IoT device
The object detector you trained in the last lesson can be called from your IoT device.
### Task - publish an iteration of your object detector
Iterations are published from the Custom Vision portal.
1. Launch the Custom Vision portal at [CustomVision.ai](https://customvision.ai) and sign in if you don't have it open already. Then open your `stock-detector` project.
1. Select the **Performance** tab from the options at the top
1. Select the latest iteration from the *Iterations* list on the side
1. Select the **Publish** button for the iteration
![The publish button](../../../images/custom-vision-object-detector-publish-button.png)
1. In the *Publish Model* dialog, set the *Prediction resource* to the `stock-detector-prediction` resource you created in the last lesson. Leave the name as `Iteration2`, and select the **Publish** button.
1. Once published, select the **Prediction URL** button. This will show details of the prediction API, and you will need these to call the model from your IoT device. The lower section is labelled *If you have an image file*, and these are the details you want. Take a copy of the URL that is shown, which will be something like:
```output
https://<location>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<id>/detect/iterations/Iteration2/image
```
Where `<location>` will be the location you used when creating your custom vision resource, and `<id>` will be a long ID made up of letters and numbers.
Also take a copy of the *Prediction-Key* value. This is a secure key that you have to pass when you call the model. Only applications that pass this key are allowed to use the model; any other applications are rejected.
![The prediction API dialog showing the URL and key](../../../images/custom-vision-prediction-key-endpoint.png)
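The prediction endpoint can also be called directly over HTTP by posting the image bytes with the key in a `Prediction-Key` header. A minimal sketch using the `requests` package, with the URL and key placeholders from above:

```python
# Minimal sketch of calling the prediction API directly over HTTP
import requests

prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'

with open('image.jpg', 'rb') as image_file:
    response = requests.post(prediction_url,
                             headers={'Prediction-Key': prediction_key,
                                      'Content-Type': 'application/octet-stream'},
                             data=image_file.read())

# Each prediction has a tag name, a probability and a bounding box
for prediction in response.json()['predictions']:
    print(prediction['tagName'], prediction['probability'])
```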
✅ When a new iteration is published, it will have a different name. How do you think you would change the iteration an IoT device is using?
### Task - call your object detector from your IoT device
Follow the relevant guide below to use the object detector from your IoT device:
* [Arduino - Wio Terminal](wio-terminal-object-detector.md)
* [Single-board computer - Raspberry Pi/Virtual device](single-board-computer-object-detector.md)
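Broadly, the single-board computer guide uses the Custom Vision Python SDK to send a captured image to the published iteration. Condensed, the idea looks roughly like this sketch, with the URL and key placeholders yours to fill in:

```python
# Condensed sketch of what the single-board computer guide walks through
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'

# The endpoint, project ID and iteration name are all parts of the prediction URL
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]

credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, credentials)

with open('image.jpg', 'rb') as image:
    results = predictor.detect_image(project_id, iteration_name, image)

for prediction in results.predictions:
    print(f'{prediction.tag_name}: {prediction.probability * 100:.2f}%')
```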
## Bounding boxes
When you use the object detector, you not only get back the detected objects with their tags and probabilities, but you also get the bounding boxes of the objects. These define where the object detector detected the object with the given probability.
> 💁 A bounding box is a box that defines the area that contains the object detected, a box that defines the boundary for the object.
The results of a prediction in the **Predictions** tab in Custom Vision have the bounding boxes drawn on the image that was sent for prediction.
![4 cans of tomato paste on a shelf with predictions for the 4 detections of 35.8%, 33.5%, 25.7% and 16.6%](../../../images/custom-vision-stock-prediction.png)
In the image above, 4 cans of tomato paste were detected. In the results, a red square is overlaid for each object that was detected in the image, indicating the bounding box for that object.
✅ Open the predictions in Custom Vision and check out the bounding boxes.
Bounding boxes are defined with 4 values - top, left, height and width. These values are on a scale of 0-1, representing the positions as a percentage of the size of the image. The origin (the 0,0 position) is the top left of the image, so the top value is the distance from the top, and the bottom of the bounding box is the top plus the height.
![A bounding box around a can of tomato paste](../../../images/bounding-box.png)
The above image is 600 pixels wide and 800 pixels tall. The bounding box starts at 320 pixels down, giving a top coordinate of 0.4 (800 x 0.4 = 320). From the left, the bounding box starts at 240 pixels across, giving a left coordinate of 0.4 (600 x 0.4 = 240). The height of the bounding box is 240 pixels, giving a height value of 0.3 (800 x 0.3 = 240). The width of the bounding box is 120 pixels, giving a width value of 0.2 (600 x 0.2 = 120).
| Coordinate | Value |
| ---------- | ----: |
| Top | 0.4 |
| Left | 0.4 |
| Height | 0.3 |
| Width | 0.2 |
Using percentage values from 0-1 means no matter what size the image is scaled to, the bounding box starts 0.4 of the way along and down, and is 0.3 of the height and 0.2 of the width.
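A quick sketch of that conversion, using the values from the table and image above:

```python
# Convert the normalized bounding box above to pixel coordinates
# for the 600 x 800 pixel image
image_width, image_height = 600, 800
top, left, height, width = 0.4, 0.4, 0.3, 0.2

top_px = top * image_height                 # 320 pixels from the top
left_px = left * image_width                # 240 pixels from the left
bottom_px = (top + height) * image_height   # 560 pixels - top plus height
right_px = (left + width) * image_width     # 360 pixels - left plus width

print(left_px, top_px, right_px, bottom_px)
```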
You can use bounding boxes combined with probabilities to evaluate how accurate a detection is. For example, an object detector might detect multiple objects that overlap, such as one can detected inside another. Your code could look at the bounding boxes, understand that this is impossible, and ignore any objects that have a significant overlap with other objects.
![Two bounding boxes overlapping a can of tomato paste](../../../images/overlap-object-detection.png)
In the example above, one bounding box indicated a predicted can of tomato paste at 78.3%. A second bounding box is slightly smaller, and is inside the first bounding box with a probability of 64.3%. Your code can check the bounding boxes, see that they overlap almost completely, and ignore the lower probability prediction as there is no way one can could be inside another.
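Later in this lesson you will use the Shapely library to calculate overlaps. As a rough sketch of the idea in plain Python (the boxes below are made-up values, not real predictions):

```python
# Rough sketch of checking whether two normalized bounding boxes overlap
# significantly - the boxes are illustrative values, not real predictions
def overlap_area(box_1, box_2):
    # Each box is (left, top, width, height) on a 0-1 scale
    left = max(box_1[0], box_2[0])
    right = min(box_1[0] + box_1[2], box_2[0] + box_2[2])
    top = max(box_1[1], box_2[1])
    bottom = min(box_1[1] + box_1[3], box_2[1] + box_2[3])
    if right > left and bottom > top:
        return (right - left) * (bottom - top)
    return 0.0

box_a = (0.36, 0.10, 0.14, 0.29)
box_b = (0.38, 0.12, 0.12, 0.25)   # sits almost entirely inside box_a

smallest_area = min(box_a[2] * box_a[3], box_b[2] * box_b[3])
if overlap_area(box_a, box_b) > 0.20 * smallest_area:
    print('Significant overlap - probably the same object')
```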
✅ Can you think of a situation where it is valid to detect one object inside another?
## Retrain the model
Like with the image classifier, you can retrain your model using data captured by your IoT device. Using this real-world data will ensure your model works well when used from your IoT device.
Unlike with the image classifier, you can't just tag an image. Instead you need to review every bounding box detected by the model. If the box is around the wrong thing then it needs to be deleted; if it is in the wrong location, it needs to be adjusted.
### Task - retrain the model
1. Make sure you have captured a range of images using your IoT device.
1. From the **Predictions** tab, select an image. You will see red boxes indicating the bounding boxes of the detected objects.
1. Work through each bounding box. Select it first and you will see a pop-up showing the tag. Use the handles on the corners of the bounding box to adjust the size if necessary. If the tag is wrong, remove it with the **X** button and add the correct tag. If the bounding box doesn't contain an object, delete it with the trashcan button.
1. Close the editor when done and the image will move from the **Predictions** tab to the **Training Images** tab. Repeat the process for all the predictions.
1. Use the **Train** button to re-train your model. Once it has trained, publish the iteration and update your IoT device to use the URL of the new iteration.
1. Re-deploy your code and test your IoT device.
## Count stock
Using a combination of the number of objects detected and the bounding boxes, you can count the stock on a shelf.
### Task - count stock
Follow the relevant guide below to count stock using the results from the object detector from your IoT device:
* [Arduino - Wio Terminal](wio-terminal-count-stock.md)
* [Single-board computer - Raspberry Pi/Virtual device](single-board-computer-count-stock.md)
---
## 🚀 Challenge
Can you detect incorrect stock? Train your model on multiple objects, then update your app to alert you if the wrong stock is detected.
Maybe even take this further and detect stock side by side on the same shelf, and see if something has been put in the wrong place by defining limits on the bounding boxes.
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/40)
## Review & Self Study
* Learn more about how to architect an end-to-end stock detection system from the [Out of stock detection at the edge pattern guide on Microsoft Docs](https://docs.microsoft.com/hybrid/app-solutions/pattern-out-of-stock-at-edge?WT.mc_id=academic-17441-jabenn)
* Learn other ways to build end-to-end retail solutions combining a range of IoT and cloud services by watching this [Behind the scenes of a retail solution - Hands On! video on YouTube](https://www.youtube.com/watch?v=m3Pc300x2Mw).
## Assignment
[Use your object detector on the edge](assignment.md)

@ -1,9 +1,11 @@
# Use your object detector on the edge
## Instructions
In the last project, you deployed your image classifier to the edge. Do the same with your object detector, exporting it as a compact model and running it on the edge, accessing the edge version from your IoT device.
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Deploy your object detector to the edge | Was able to use the correct compact domain, export the object detector and run it on the edge | Was able to use the correct compact domain, and export the object detector, but was unable to run it on the edge | Was unable to use the correct compact domain, export the object detector, and run it on the edge |

@ -0,0 +1,92 @@
import io
import time
from picamera import PiCamera
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials
from PIL import Image, ImageDraw, ImageColor
from shapely.geometry import Polygon
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0
time.sleep(2)
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
with open('image.jpg', 'wb') as image_file:
image_file.write(image.read())
prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]
prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)
image.seek(0)
results = predictor.detect_image(project_id, iteration_name, image)
threshold = 0.3
predictions = list(prediction for prediction in results.predictions if prediction.probability > threshold)
for prediction in predictions:
print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
overlap_threshold = 0.002
def create_polygon(prediction):
scale_left = prediction.bounding_box.left
scale_top = prediction.bounding_box.top
scale_right = prediction.bounding_box.left + prediction.bounding_box.width
scale_bottom = prediction.bounding_box.top + prediction.bounding_box.height
return Polygon([(scale_left, scale_top), (scale_right, scale_top), (scale_right, scale_bottom), (scale_left, scale_bottom)])
to_delete = []
for i in range(0, len(predictions)):
polygon_1 = create_polygon(predictions[i])
for j in range(i+1, len(predictions)):
polygon_2 = create_polygon(predictions[j])
overlap = polygon_1.intersection(polygon_2).area
smallest_area = min(polygon_1.area, polygon_2.area)
if overlap > (overlap_threshold * smallest_area):
to_delete.append(predictions[i])
break
for d in to_delete:
predictions.remove(d)
print(f'Counted {len(predictions)} stock items')
with Image.open('image.jpg') as im:
draw = ImageDraw.Draw(im)
for prediction in predictions:
scale_left = prediction.bounding_box.left
scale_top = prediction.bounding_box.top
scale_right = prediction.bounding_box.left + prediction.bounding_box.width
scale_bottom = prediction.bounding_box.top + prediction.bounding_box.height
left = scale_left * im.width
top = scale_top * im.height
right = scale_right * im.width
bottom = scale_bottom * im.height
draw.rectangle([left, top, right, bottom], outline=ImageColor.getrgb('red'), width=2)
im.save('image.jpg')

@ -0,0 +1,92 @@
from counterfit_connection import CounterFitConnection
CounterFitConnection.init('127.0.0.1', 5000)
import io
from counterfit_shims_picamera import PiCamera
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials
from PIL import Image, ImageDraw, ImageColor
from shapely.geometry import Polygon
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
with open('image.jpg', 'wb') as image_file:
image_file.write(image.read())
prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]
prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)
image.seek(0)
results = predictor.detect_image(project_id, iteration_name, image)
threshold = 0.3
predictions = list(prediction for prediction in results.predictions if prediction.probability > threshold)
for prediction in predictions:
print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
overlap_threshold = 0.002
def create_polygon(prediction):
scale_left = prediction.bounding_box.left
scale_top = prediction.bounding_box.top
scale_right = prediction.bounding_box.left + prediction.bounding_box.width
scale_bottom = prediction.bounding_box.top + prediction.bounding_box.height
return Polygon([(scale_left, scale_top), (scale_right, scale_top), (scale_right, scale_bottom), (scale_left, scale_bottom)])
to_delete = []
for i in range(0, len(predictions)):
polygon_1 = create_polygon(predictions[i])
for j in range(i+1, len(predictions)):
polygon_2 = create_polygon(predictions[j])
overlap = polygon_1.intersection(polygon_2).area
smallest_area = min(polygon_1.area, polygon_2.area)
if overlap > (overlap_threshold * smallest_area):
to_delete.append(predictions[i])
break
for d in to_delete:
predictions.remove(d)
print(f'Counted {len(predictions)} stock items')
with Image.open('image.jpg') as im:
draw = ImageDraw.Draw(im)
for prediction in predictions:
scale_left = prediction.bounding_box.left
scale_top = prediction.bounding_box.top
scale_right = prediction.bounding_box.left + prediction.bounding_box.width
scale_bottom = prediction.bounding_box.top + prediction.bounding_box.height
left = scale_left * im.width
top = scale_top * im.height
right = scale_right * im.width
bottom = scale_bottom * im.height
draw.rectangle([left, top, right, bottom], outline=ImageColor.getrgb('red'), width=2)
im.save('image.jpg')

@ -0,0 +1,5 @@
.pio
.vscode/.browse.c_cpp.db*
.vscode/c_cpp_properties.json
.vscode/launch.json
.vscode/ipch

@ -0,0 +1,7 @@
{
// See http://go.microsoft.com/fwlink/?LinkId=827846
// for the documentation about the extensions.json format
"recommendations": [
"platformio.platformio-ide"
]
}

@ -0,0 +1,39 @@
This directory is intended for project header files.
A header file is a file containing C declarations and macro definitions
to be shared between several project source files. You request the use of a
header file in your project source file (C, C++, etc) located in `src` folder
by including it, with the C preprocessing directive `#include'.
```src/main.c
#include "header.h"
int main (void)
{
...
}
```
Including a header file produces the same results as copying the header file
into each source file that needs it. Such copying would be time-consuming
and error-prone. With a header file, the related declarations appear
in only one place. If they need to be changed, they can be changed in one
place, and programs that include the header file will automatically use the
new version when next recompiled. The header file eliminates the labor of
finding and changing all the copies as well as the risk that a failure to
find one copy will result in inconsistencies within a program.
In C, the usual convention is to give header files names that end with `.h'.
It is most portable to use only letters, digits, dashes, and underscores in
header file names, and at most one dot.
Read more about using header files in official GCC documentation:
* Include Syntax
* Include Operation
* Once-Only Headers
* Computed Includes
https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html

@ -0,0 +1,46 @@
This directory is intended for project specific (private) libraries.
PlatformIO will compile them to static libraries and link into executable file.
The source code of each library should be placed in its own separate directory
("lib/your_library_name/[here are source files]").
For example, see a structure of the following two libraries `Foo` and `Bar`:
|--lib
| |
| |--Bar
| | |--docs
| | |--examples
| | |--src
| | |- Bar.c
| | |- Bar.h
| | |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
| |
| |--Foo
| | |- Foo.c
| | |- Foo.h
| |
| |- README --> THIS FILE
|
|- platformio.ini
|--src
|- main.c
and a contents of `src/main.c`:
```
#include <Foo.h>
#include <Bar.h>
int main (void)
{
...
}
```
PlatformIO Library Dependency Finder will find automatically dependent
libraries scanning project source files.
More information about PlatformIO Library Dependency Finder
- https://docs.platformio.org/page/librarymanager/ldf.html

@ -0,0 +1,26 @@
; PlatformIO Project Configuration File
;
; Build options: build flags, source filter
; Upload options: custom upload port, speed and extra flags
; Library options: dependencies, extra library storages
; Advanced options: extra scripting
;
; Please visit documentation for the other options and examples
; https://docs.platformio.org/page/projectconf.html
[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0
bblanchon/ArduinoJson @ 6.17.3
build_flags =
-w
-DARDUCAM_SHIELD_V2
-DOV2640_CAM

@ -0,0 +1,160 @@
#pragma once
#include <ArduCAM.h>
#include <Wire.h>
class Camera
{
public:
Camera(int format, int image_size) : _arducam(OV2640, PIN_SPI_SS)
{
_format = format;
_image_size = image_size;
}
bool init()
{
// Reset the CPLD
_arducam.write_reg(0x07, 0x80);
delay(100);
_arducam.write_reg(0x07, 0x00);
delay(100);
// Check if the ArduCAM SPI bus is OK
_arducam.write_reg(ARDUCHIP_TEST1, 0x55);
if (_arducam.read_reg(ARDUCHIP_TEST1) != 0x55)
{
return false;
}
// Change MCU mode
_arducam.set_mode(MCU2LCD_MODE);
uint8_t vid, pid;
// Check if the camera module type is OV2640
_arducam.wrSensorReg8_8(0xff, 0x01);
_arducam.rdSensorReg8_8(OV2640_CHIPID_HIGH, &vid);
_arducam.rdSensorReg8_8(OV2640_CHIPID_LOW, &pid);
if ((vid != 0x26) && ((pid != 0x41) || (pid != 0x42)))
{
return false;
}
_arducam.set_format(_format);
_arducam.InitCAM();
_arducam.OV2640_set_JPEG_size(_image_size);
_arducam.OV2640_set_Light_Mode(Auto);
_arducam.OV2640_set_Special_effects(Normal);
delay(1000);
return true;
}
void startCapture()
{
_arducam.flush_fifo();
_arducam.clear_fifo_flag();
_arducam.start_capture();
}
bool captureReady()
{
return _arducam.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK);
}
bool readImageToBuffer(byte **buffer, uint32_t &buffer_length)
{
if (!captureReady()) return false;
// Get the image file length
uint32_t length = _arducam.read_fifo_length();
buffer_length = length;
if (length >= MAX_FIFO_SIZE)
{
return false;
}
if (length == 0)
{
return false;
}
// create the buffer
byte *buf = new byte[length];
uint8_t temp = 0, temp_last = 0;
int i = 0;
uint32_t buffer_pos = 0;
bool is_header = false;
_arducam.CS_LOW();
_arducam.set_fifo_burst();
while (length--)
{
temp_last = temp;
temp = SPI.transfer(0x00);
//Read JPEG data from FIFO
if ((temp == 0xD9) && (temp_last == 0xFF)) //If find the end ,break while,
{
buf[buffer_pos] = temp;
buffer_pos++;
i++;
_arducam.CS_HIGH();
}
if (is_header == true)
{
//Write image data to buffer if not full
if (i < 256)
{
buf[buffer_pos] = temp;
buffer_pos++;
i++;
}
else
{
_arducam.CS_HIGH();
i = 0;
buf[buffer_pos] = temp;
buffer_pos++;
i++;
_arducam.CS_LOW();
_arducam.set_fifo_burst();
}
}
else if ((temp == 0xD8) & (temp_last == 0xFF))
{
is_header = true;
buf[buffer_pos] = temp_last;
buffer_pos++;
i++;
buf[buffer_pos] = temp;
buffer_pos++;
i++;
}
}
_arducam.clear_fifo_flag();
_arducam.set_format(_format);
_arducam.InitCAM();
_arducam.OV2640_set_JPEG_size(_image_size);
// return the buffer
*buffer = buf;

// Indicate success so the caller knows the buffer is valid
return true;
}
private:
ArduCAM _arducam;
int _format;
int _image_size;
};

@ -0,0 +1,49 @@
#pragma once
#include <string>
using namespace std;
// WiFi credentials
const char *SSID = "<SSID>";
const char *PASSWORD = "<PASSWORD>";
const char *PREDICTION_URL = "<PREDICTION_URL>";
const char *PREDICTION_KEY = "<PREDICTION_KEY>";
// Microsoft Azure DigiCert Global Root G2 global certificate
const char *CERTIFICATE =
"-----BEGIN CERTIFICATE-----\r\n"
"MIIF8zCCBNugAwIBAgIQAueRcfuAIek/4tmDg0xQwDANBgkqhkiG9w0BAQwFADBh\r\n"
"MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\r\n"
"d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH\r\n"
"MjAeFw0yMDA3MjkxMjMwMDBaFw0yNDA2MjcyMzU5NTlaMFkxCzAJBgNVBAYTAlVT\r\n"
"MR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKjAoBgNVBAMTIU1pY3Jv\r\n"
"c29mdCBBenVyZSBUTFMgSXNzdWluZyBDQSAwNjCCAiIwDQYJKoZIhvcNAQEBBQAD\r\n"
"ggIPADCCAgoCggIBALVGARl56bx3KBUSGuPc4H5uoNFkFH4e7pvTCxRi4j/+z+Xb\r\n"
"wjEz+5CipDOqjx9/jWjskL5dk7PaQkzItidsAAnDCW1leZBOIi68Lff1bjTeZgMY\r\n"
"iwdRd3Y39b/lcGpiuP2d23W95YHkMMT8IlWosYIX0f4kYb62rphyfnAjYb/4Od99\r\n"
"ThnhlAxGtfvSbXcBVIKCYfZgqRvV+5lReUnd1aNjRYVzPOoifgSx2fRyy1+pO1Uz\r\n"
"aMMNnIOE71bVYW0A1hr19w7kOb0KkJXoALTDDj1ukUEDqQuBfBxReL5mXiu1O7WG\r\n"
"0vltg0VZ/SZzctBsdBlx1BkmWYBW261KZgBivrql5ELTKKd8qgtHcLQA5fl6JB0Q\r\n"
"gs5XDaWehN86Gps5JW8ArjGtjcWAIP+X8CQaWfaCnuRm6Bk/03PQWhgdi84qwA0s\r\n"
"sRfFJwHUPTNSnE8EiGVk2frt0u8PG1pwSQsFuNJfcYIHEv1vOzP7uEOuDydsmCjh\r\n"
"lxuoK2n5/2aVR3BMTu+p4+gl8alXoBycyLmj3J/PUgqD8SL5fTCUegGsdia/Sa60\r\n"
"N2oV7vQ17wjMN+LXa2rjj/b4ZlZgXVojDmAjDwIRdDUujQu0RVsJqFLMzSIHpp2C\r\n"
"Zp7mIoLrySay2YYBu7SiNwL95X6He2kS8eefBBHjzwW/9FxGqry57i71c2cDAgMB\r\n"
"AAGjggGtMIIBqTAdBgNVHQ4EFgQU1cFnOsKjnfR3UltZEjgp5lVou6UwHwYDVR0j\r\n"
"BBgwFoAUTiJUIBiV5uNu5g/6+rkS7QYXjzkwDgYDVR0PAQH/BAQDAgGGMB0GA1Ud\r\n"
"JQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMHYG\r\n"
"CCsGAQUFBwEBBGowaDAkBggrBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQu\r\n"
"Y29tMEAGCCsGAQUFBzAChjRodHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGln\r\n"
"aUNlcnRHbG9iYWxSb290RzIuY3J0MHsGA1UdHwR0MHIwN6A1oDOGMWh0dHA6Ly9j\r\n"
"cmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5jcmwwN6A1oDOG\r\n"
"MWh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5j\r\n"
"cmwwHQYDVR0gBBYwFDAIBgZngQwBAgEwCAYGZ4EMAQICMBAGCSsGAQQBgjcVAQQD\r\n"
"AgEAMA0GCSqGSIb3DQEBDAUAA4IBAQB2oWc93fB8esci/8esixj++N22meiGDjgF\r\n"
"+rA2LUK5IOQOgcUSTGKSqF9lYfAxPjrqPjDCUPHCURv+26ad5P/BYtXtbmtxJWu+\r\n"
"cS5BhMDPPeG3oPZwXRHBJFAkY4O4AF7RIAAUW6EzDflUoDHKv83zOiPfYGcpHc9s\r\n"
"kxAInCedk7QSgXvMARjjOqdakor21DTmNIUotxo8kHv5hwRlGhBJwps6fEVi1Bt0\r\n"
"trpM/3wYxlr473WSPUFZPgP1j519kLpWOJ8z09wxay+Br29irPcBYv0GMXlHqThy\r\n"
"8y4m/HyTQeI2IMvMrQnwqPpY+rLIXyviI2vLoI+4xKE4Rn38ZZ8m\r\n"
"-----END CERTIFICATE-----\r\n";

@ -0,0 +1,223 @@
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <rpcWiFi.h>
#include "SD/Seeed_SD.h"
#include <Seeed_FS.h>
#include <SPI.h>
#include <vector>
#include <WiFiClientSecure.h>
#include "config.h"
#include "camera.h"
Camera camera = Camera(JPEG, OV2640_640x480);
WiFiClientSecure client;
void setupCamera()
{
pinMode(PIN_SPI_SS, OUTPUT);
digitalWrite(PIN_SPI_SS, HIGH);
Wire.begin();
SPI.begin();
if (!camera.init())
{
Serial.println("Error setting up the camera!");
}
}
void connectWiFi()
{
while (WiFi.status() != WL_CONNECTED)
{
Serial.println("Connecting to WiFi..");
WiFi.begin(SSID, PASSWORD);
delay(500);
}
client.setCACert(CERTIFICATE);
Serial.println("Connected!");
}
void setup()
{
Serial.begin(9600);
while (!Serial)
; // Wait for Serial to be ready
delay(1000);
connectWiFi();
setupCamera();
pinMode(WIO_KEY_C, INPUT_PULLUP);
}
const float threshold = 0.0f;
const float overlap_threshold = 0.20f;
struct Point {
float x, y;
};
struct Rect {
Point topLeft, bottomRight;
};
float area(Rect rect)
{
return abs(rect.bottomRight.x - rect.topLeft.x) * abs(rect.bottomRight.y - rect.topLeft.y);
}
float overlappingArea(Rect rect1, Rect rect2)
{
float left = max(rect1.topLeft.x, rect2.topLeft.x);
float right = min(rect1.bottomRight.x, rect2.bottomRight.x);
float top = max(rect1.topLeft.y, rect2.topLeft.y);
float bottom = min(rect1.bottomRight.y, rect2.bottomRight.y);
if ( right > left && bottom > top )
{
return (right-left)*(bottom-top);
}
return 0.0f;
}
Rect rectFromBoundingBox(JsonVariant prediction)
{
JsonObject bounding_box = prediction["boundingBox"].as<JsonObject>();
float left = bounding_box["left"].as<float>();
float top = bounding_box["top"].as<float>();
float width = bounding_box["width"].as<float>();
float height = bounding_box["height"].as<float>();
Point topLeft = {left, top};
Point bottomRight = {left + width, top + height};
return {topLeft, bottomRight};
}
void processPredictions(std::vector<JsonVariant> &predictions)
{
std::vector<JsonVariant> passed_predictions;
for (int i = 0; i < predictions.size(); ++i)
{
Rect prediction_1_rect = rectFromBoundingBox(predictions[i]);
float prediction_1_area = area(prediction_1_rect);
bool passed = true;
for (int j = i + 1; j < predictions.size(); ++j)
{
Rect prediction_2_rect = rectFromBoundingBox(predictions[j]);
float prediction_2_area = area(prediction_2_rect);
float overlap = overlappingArea(prediction_1_rect, prediction_2_rect);
float smallest_area = min(prediction_1_area, prediction_2_area);
if (overlap > (overlap_threshold * smallest_area))
{
passed = false;
break;
}
}
if (passed)
{
passed_predictions.push_back(predictions[i]);
}
}
for(JsonVariant prediction : passed_predictions)
{
String boundingBox = prediction["boundingBox"].as<String>();
String tag = prediction["tagName"].as<String>();
float probability = prediction["probability"].as<float>();
char buff[32];
sprintf(buff, "%s:\t%.2f%%\t%s", tag.c_str(), probability * 100.0, boundingBox.c_str());
Serial.println(buff);
}
Serial.print("Counted ");
Serial.print(passed_predictions.size());
Serial.println(" stock items.");
}
void detectStock(byte *buffer, uint32_t length)
{
HTTPClient httpClient;
httpClient.begin(client, PREDICTION_URL);
httpClient.addHeader("Content-Type", "application/octet-stream");
httpClient.addHeader("Prediction-Key", PREDICTION_KEY);
int httpResponseCode = httpClient.POST(buffer, length);
if (httpResponseCode == 200)
{
String result = httpClient.getString();
DynamicJsonDocument doc(1024);
deserializeJson(doc, result.c_str());
JsonObject obj = doc.as<JsonObject>();
JsonArray predictions = obj["predictions"].as<JsonArray>();
std::vector<JsonVariant> passed_predictions;
for(JsonVariant prediction : predictions)
{
float probability = prediction["probability"].as<float>();
if (probability > threshold)
{
passed_predictions.push_back(prediction);
}
}
processPredictions(passed_predictions);
}
httpClient.end();
}
void buttonPressed()
{
camera.startCapture();
while (!camera.captureReady())
delay(100);
Serial.println("Image captured");
byte *buffer;
uint32_t length;
if (camera.readImageToBuffer(&buffer, length))
{
Serial.print("Image read to buffer with length ");
Serial.println(length);
detectStock(buffer, length);
delete (buffer);
}
}
void loop()
{
if (digitalRead(WIO_KEY_C) == LOW)
{
buttonPressed();
delay(2000);
}
delay(200);
}

@ -0,0 +1,11 @@
This directory is intended for PlatformIO Unit Testing and project tests.
Unit Testing is a software testing method by which individual units of
source code, sets of one or more MCU program modules together with associated
control data, usage procedures, and operating procedures, are tested to
determine whether they are fit for use. Unit testing finds problems early
in the development cycle.
More information about PlatformIO Unit Testing:
- https://docs.platformio.org/page/plus/unit-testing.html

@ -0,0 +1,40 @@
import io
import time
from picamera import PiCamera
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0
time.sleep(2)
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
with open('image.jpg', 'wb') as image_file:
image_file.write(image.read())
prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]
prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)
image.seek(0)
results = predictor.detect_image(project_id, iteration_name, image)
threshold = 0.3
predictions = list(prediction for prediction in results.predictions if prediction.probability > threshold)
for prediction in predictions:
print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')

@ -0,0 +1,40 @@
from counterfit_connection import CounterFitConnection
CounterFitConnection.init('127.0.0.1', 5000)
import io
from counterfit_shims_picamera import PiCamera
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
with open('image.jpg', 'wb') as image_file:
image_file.write(image.read())
prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]
prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)
image.seek(0)
results = predictor.detect_image(project_id, iteration_name, image)
threshold = 0.3
predictions = list(prediction for prediction in results.predictions if prediction.probability > threshold)
for prediction in predictions:
print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')

@ -0,0 +1,5 @@
.pio
.vscode/.browse.c_cpp.db*
.vscode/c_cpp_properties.json
.vscode/launch.json
.vscode/ipch

@ -0,0 +1,7 @@
{
// See http://go.microsoft.com/fwlink/?LinkId=827846
// for the documentation about the extensions.json format
"recommendations": [
"platformio.platformio-ide"
]
}

@ -0,0 +1,39 @@
This directory is intended for project header files.
A header file is a file containing C declarations and macro definitions
to be shared between several project source files. You request the use of a
header file in your project source file (C, C++, etc) located in `src` folder
by including it, with the C preprocessing directive `#include'.
```src/main.c
#include "header.h"
int main (void)
{
...
}
```
Including a header file produces the same results as copying the header file
into each source file that needs it. Such copying would be time-consuming
and error-prone. With a header file, the related declarations appear
in only one place. If they need to be changed, they can be changed in one
place, and programs that include the header file will automatically use the
new version when next recompiled. The header file eliminates the labor of
finding and changing all the copies as well as the risk that a failure to
find one copy will result in inconsistencies within a program.
In C, the usual convention is to give header files names that end with `.h'.
It is most portable to use only letters, digits, dashes, and underscores in
header file names, and at most one dot.
Read more about using header files in official GCC documentation:
* Include Syntax
* Include Operation
* Once-Only Headers
* Computed Includes
https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html

@ -0,0 +1,46 @@
This directory is intended for project specific (private) libraries.
PlatformIO will compile them to static libraries and link into executable file.
The source code of each library should be placed in its own separate directory
("lib/your_library_name/[here are source files]").
For example, see a structure of the following two libraries `Foo` and `Bar`:
|--lib
| |
| |--Bar
| | |--docs
| | |--examples
| | |--src
| | |- Bar.c
| | |- Bar.h
| | |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
| |
| |--Foo
| | |- Foo.c
| | |- Foo.h
| |
| |- README --> THIS FILE
|
|- platformio.ini
|--src
|- main.c
and a contents of `src/main.c`:
```
#include <Foo.h>
#include <Bar.h>
int main (void)
{
...
}
```
PlatformIO Library Dependency Finder will find automatically dependent
libraries scanning project source files.
More information about PlatformIO Library Dependency Finder
- https://docs.platformio.org/page/librarymanager/ldf.html

@ -0,0 +1,26 @@
; PlatformIO Project Configuration File
;
; Build options: build flags, source filter
; Upload options: custom upload port, speed and extra flags
; Library options: dependencies, extra library storages
; Advanced options: extra scripting
;
; Please visit documentation for the other options and examples
; https://docs.platformio.org/page/projectconf.html
[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0
bblanchon/ArduinoJson @ 6.17.3
build_flags =
-w
-DARDUCAM_SHIELD_V2
-DOV2640_CAM

@ -0,0 +1,160 @@
#pragma once
#include <ArduCAM.h>
#include <Wire.h>
class Camera
{
public:
Camera(int format, int image_size) : _arducam(OV2640, PIN_SPI_SS)
{
_format = format;
_image_size = image_size;
}
bool init()
{
// Reset the CPLD
_arducam.write_reg(0x07, 0x80);
delay(100);
_arducam.write_reg(0x07, 0x00);
delay(100);
// Check if the ArduCAM SPI bus is OK
_arducam.write_reg(ARDUCHIP_TEST1, 0x55);
if (_arducam.read_reg(ARDUCHIP_TEST1) != 0x55)
{
return false;
}
// Change MCU mode
_arducam.set_mode(MCU2LCD_MODE);
uint8_t vid, pid;
// Check if the camera module type is OV2640
_arducam.wrSensorReg8_8(0xff, 0x01);
_arducam.rdSensorReg8_8(OV2640_CHIPID_HIGH, &vid);
_arducam.rdSensorReg8_8(OV2640_CHIPID_LOW, &pid);
if ((vid != 0x26) && ((pid != 0x41) || (pid != 0x42)))
{
return false;
}
_arducam.set_format(_format);
_arducam.InitCAM();
_arducam.OV2640_set_JPEG_size(_image_size);
_arducam.OV2640_set_Light_Mode(Auto);
_arducam.OV2640_set_Special_effects(Normal);
delay(1000);
return true;
}
void startCapture()
{
_arducam.flush_fifo();
_arducam.clear_fifo_flag();
_arducam.start_capture();
}
bool captureReady()
{
return _arducam.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK);
}
bool readImageToBuffer(byte **buffer, uint32_t &buffer_length)
{
if (!captureReady()) return false;
// Get the image file length
uint32_t length = _arducam.read_fifo_length();
buffer_length = length;
if (length >= MAX_FIFO_SIZE)
{
return false;
}
if (length == 0)
{
return false;
}
// create the buffer
byte *buf = new byte[length];
uint8_t temp = 0, temp_last = 0;
int i = 0;
uint32_t buffer_pos = 0;
bool is_header = false;
_arducam.CS_LOW();
_arducam.set_fifo_burst();
while (length--)
{
temp_last = temp;
temp = SPI.transfer(0x00);
//Read JPEG data from FIFO
if ((temp == 0xD9) && (temp_last == 0xFF)) //If find the end ,break while,
{
buf[buffer_pos] = temp;
buffer_pos++;
i++;
_arducam.CS_HIGH();
}
if (is_header == true)
{
//Write image data to buffer if not full
if (i < 256)
{
buf[buffer_pos] = temp;
buffer_pos++;
i++;
}
else
{
_arducam.CS_HIGH();
i = 0;
buf[buffer_pos] = temp;
buffer_pos++;
i++;
_arducam.CS_LOW();
_arducam.set_fifo_burst();
}
}
else if ((temp == 0xD8) & (temp_last == 0xFF))
{
is_header = true;
buf[buffer_pos] = temp_last;
buffer_pos++;
i++;
buf[buffer_pos] = temp;
buffer_pos++;
i++;
}
}
_arducam.clear_fifo_flag();
_arducam.set_format(_format);
_arducam.InitCAM();
_arducam.OV2640_set_JPEG_size(_image_size);
// return the buffer
*buffer = buf;

// Indicate success so the caller knows the buffer is valid
return true;
}
private:
ArduCAM _arducam;
int _format;
int _image_size;
};

@ -0,0 +1,49 @@
#pragma once
#include <string>
using namespace std;
// WiFi credentials
const char *SSID = "<SSID>";
const char *PASSWORD = "<PASSWORD>";
const char *PREDICTION_URL = "<PREDICTION_URL>";
const char *PREDICTION_KEY = "<PREDICTION_KEY>";
// Microsoft Azure DigiCert Global Root G2 global certificate
const char *CERTIFICATE =
"-----BEGIN CERTIFICATE-----\r\n"
"MIIF8zCCBNugAwIBAgIQAueRcfuAIek/4tmDg0xQwDANBgkqhkiG9w0BAQwFADBh\r\n"
"MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\r\n"
"d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH\r\n"
"MjAeFw0yMDA3MjkxMjMwMDBaFw0yNDA2MjcyMzU5NTlaMFkxCzAJBgNVBAYTAlVT\r\n"
"MR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKjAoBgNVBAMTIU1pY3Jv\r\n"
"c29mdCBBenVyZSBUTFMgSXNzdWluZyBDQSAwNjCCAiIwDQYJKoZIhvcNAQEBBQAD\r\n"
"ggIPADCCAgoCggIBALVGARl56bx3KBUSGuPc4H5uoNFkFH4e7pvTCxRi4j/+z+Xb\r\n"
"wjEz+5CipDOqjx9/jWjskL5dk7PaQkzItidsAAnDCW1leZBOIi68Lff1bjTeZgMY\r\n"
"iwdRd3Y39b/lcGpiuP2d23W95YHkMMT8IlWosYIX0f4kYb62rphyfnAjYb/4Od99\r\n"
"ThnhlAxGtfvSbXcBVIKCYfZgqRvV+5lReUnd1aNjRYVzPOoifgSx2fRyy1+pO1Uz\r\n"
"aMMNnIOE71bVYW0A1hr19w7kOb0KkJXoALTDDj1ukUEDqQuBfBxReL5mXiu1O7WG\r\n"
"0vltg0VZ/SZzctBsdBlx1BkmWYBW261KZgBivrql5ELTKKd8qgtHcLQA5fl6JB0Q\r\n"
"gs5XDaWehN86Gps5JW8ArjGtjcWAIP+X8CQaWfaCnuRm6Bk/03PQWhgdi84qwA0s\r\n"
"sRfFJwHUPTNSnE8EiGVk2frt0u8PG1pwSQsFuNJfcYIHEv1vOzP7uEOuDydsmCjh\r\n"
"lxuoK2n5/2aVR3BMTu+p4+gl8alXoBycyLmj3J/PUgqD8SL5fTCUegGsdia/Sa60\r\n"
"N2oV7vQ17wjMN+LXa2rjj/b4ZlZgXVojDmAjDwIRdDUujQu0RVsJqFLMzSIHpp2C\r\n"
"Zp7mIoLrySay2YYBu7SiNwL95X6He2kS8eefBBHjzwW/9FxGqry57i71c2cDAgMB\r\n"
"AAGjggGtMIIBqTAdBgNVHQ4EFgQU1cFnOsKjnfR3UltZEjgp5lVou6UwHwYDVR0j\r\n"
"BBgwFoAUTiJUIBiV5uNu5g/6+rkS7QYXjzkwDgYDVR0PAQH/BAQDAgGGMB0GA1Ud\r\n"
"JQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMHYG\r\n"
"CCsGAQUFBwEBBGowaDAkBggrBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQu\r\n"
"Y29tMEAGCCsGAQUFBzAChjRodHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGln\r\n"
"aUNlcnRHbG9iYWxSb290RzIuY3J0MHsGA1UdHwR0MHIwN6A1oDOGMWh0dHA6Ly9j\r\n"
"cmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5jcmwwN6A1oDOG\r\n"
"MWh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5j\r\n"
"cmwwHQYDVR0gBBYwFDAIBgZngQwBAgEwCAYGZ4EMAQICMBAGCSsGAQQBgjcVAQQD\r\n"
"AgEAMA0GCSqGSIb3DQEBDAUAA4IBAQB2oWc93fB8esci/8esixj++N22meiGDjgF\r\n"
"+rA2LUK5IOQOgcUSTGKSqF9lYfAxPjrqPjDCUPHCURv+26ad5P/BYtXtbmtxJWu+\r\n"
"cS5BhMDPPeG3oPZwXRHBJFAkY4O4AF7RIAAUW6EzDflUoDHKv83zOiPfYGcpHc9s\r\n"
"kxAInCedk7QSgXvMARjjOqdakor21DTmNIUotxo8kHv5hwRlGhBJwps6fEVi1Bt0\r\n"
"trpM/3wYxlr473WSPUFZPgP1j519kLpWOJ8z09wxay+Br29irPcBYv0GMXlHqThy\r\n"
"8y4m/HyTQeI2IMvMrQnwqPpY+rLIXyviI2vLoI+4xKE4Rn38ZZ8m\r\n"
"-----END CERTIFICATE-----\r\n";

@ -0,0 +1,145 @@
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <list>
#include <rpcWiFi.h>
#include "SD/Seeed_SD.h"
#include <Seeed_FS.h>
#include <SPI.h>
#include <vector>
#include <WiFiClientSecure.h>
#include "config.h"
#include "camera.h"
Camera camera = Camera(JPEG, OV2640_640x480);
WiFiClientSecure client;
void setupCamera()
{
pinMode(PIN_SPI_SS, OUTPUT);
digitalWrite(PIN_SPI_SS, HIGH);
Wire.begin();
SPI.begin();
if (!camera.init())
{
Serial.println("Error setting up the camera!");
}
}
void connectWiFi()
{
while (WiFi.status() != WL_CONNECTED)
{
Serial.println("Connecting to WiFi..");
WiFi.begin(SSID, PASSWORD);
delay(500);
}
client.setCACert(CERTIFICATE);
Serial.println("Connected!");
}
void setup()
{
Serial.begin(9600);
while (!Serial)
; // Wait for Serial to be ready
delay(1000);
connectWiFi();
setupCamera();
pinMode(WIO_KEY_C, INPUT_PULLUP);
}
const float threshold = 0.3f;
void processPredictions(std::vector<JsonVariant> &predictions)
{
for(JsonVariant prediction : predictions)
{
String tag = prediction["tagName"].as<String>();
float probability = prediction["probability"].as<float>();
char buff[32];
sprintf(buff, "%s:\t%.2f%%", tag.c_str(), probability * 100.0);
Serial.println(buff);
}
}
void detectStock(byte *buffer, uint32_t length)
{
HTTPClient httpClient;
httpClient.begin(client, PREDICTION_URL);
httpClient.addHeader("Content-Type", "application/octet-stream");
httpClient.addHeader("Prediction-Key", PREDICTION_KEY);
int httpResponseCode = httpClient.POST(buffer, length);
if (httpResponseCode == 200)
{
String result = httpClient.getString();
DynamicJsonDocument doc(1024);
deserializeJson(doc, result.c_str());
JsonObject obj = doc.as<JsonObject>();
JsonArray predictions = obj["predictions"].as<JsonArray>();
std::vector<JsonVariant> passed_predictions;
for(JsonVariant prediction : predictions)
{
float probability = prediction["probability"].as<float>();
if (probability > threshold)
{
passed_predictions.push_back(prediction);
}
}
processPredictions(passed_predictions);
}
httpClient.end();
}
void buttonPressed()
{
camera.startCapture();
while (!camera.captureReady())
delay(100);
Serial.println("Image captured");
byte *buffer;
uint32_t length;
if (camera.readImageToBuffer(&buffer, length))
{
Serial.print("Image read to buffer with length ");
Serial.println(length);
detectStock(buffer, length);
delete (buffer);
}
}
void loop()
{
if (digitalRead(WIO_KEY_C) == LOW)
{
buttonPressed();
delay(2000);
}
delay(200);
}

@ -0,0 +1,11 @@
This directory is intended for PlatformIO Unit Testing and project tests.
Unit Testing is a software testing method by which individual units of
source code, sets of one or more MCU program modules together with associated
control data, usage procedures, and operating procedures, are tested to
determine whether they are fit for use. Unit testing finds problems early
in the development cycle.
More information about PlatformIO Unit Testing:
- https://docs.platformio.org/page/plus/unit-testing.html

@ -0,0 +1,163 @@
# Count stock from your IoT device - Virtual IoT Hardware and Raspberry Pi
A combination of the predictions and their bounding boxes can be used to count stock in an image
## Show bounding boxes
As a helpful debugging step you can not only print out the bounding boxes, but you can also draw them on the image that was written to disk when an image was captured.
### Task - print the bounding boxes
1. Ensure the `stock-counter` project is open in VS Code, and the virtual environment is activated if you are using a virtual IoT device.
1. Change the `print` statement in the `for` loop to the following to print the bounding boxes to the console:
```python
print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%\t{prediction.bounding_box}')
```
1. Run the app with the camera pointing at some stock on a shelf. The bounding boxes will be printed to the console, with left, top, width and height values from 0-1.
```output
pi@raspberrypi:~/stock-counter $ python3 app.py
tomato paste: 33.42% {'additional_properties': {}, 'left': 0.3455171, 'top': 0.09916268, 'width': 0.14175442, 'height': 0.29405564}
tomato paste: 34.41% {'additional_properties': {}, 'left': 0.48283678, 'top': 0.10242918, 'width': 0.11782813, 'height': 0.27467814}
tomato paste: 31.25% {'additional_properties': {}, 'left': 0.4923783, 'top': 0.35007596, 'width': 0.13668466, 'height': 0.28304994}
tomato paste: 31.05% {'additional_properties': {}, 'left': 0.36416405, 'top': 0.37494493, 'width': 0.14024884, 'height': 0.26880276}
```
### Task - draw bounding boxes on the image
1. The Pip package [Pillow](https://pypi.org/project/Pillow/) can be used to draw on images. Install this with the following command:
```sh
pip3 install pillow
```
If you are using a virtual IoT device, make sure to run this from inside the activated virtual environment.
1. Add the following import statement to the top of the `app.py` file:
```python
from PIL import Image, ImageDraw, ImageColor
```
This imports code needed to edit the image.
1. Add the following code to the end of the `app.py` file:
```python
with Image.open('image.jpg') as im:
draw = ImageDraw.Draw(im)
for prediction in predictions:
scale_left = prediction.bounding_box.left
scale_top = prediction.bounding_box.top
scale_right = prediction.bounding_box.left + prediction.bounding_box.width
scale_bottom = prediction.bounding_box.top + prediction.bounding_box.height
left = scale_left * im.width
top = scale_top * im.height
right = scale_right * im.width
bottom = scale_bottom * im.height
draw.rectangle([left, top, right, bottom], outline=ImageColor.getrgb('red'), width=2)
im.save('image.jpg')
```
This code opens the image that was saved earlier for editing. It then loops through the predictions getting the bounding boxes, and calculates the bottom right coordinate using the bounding box values from 0-1. These are then converted to image coordinates by multiplying by the relevant dimension of the image. For example, if the left value was 0.5 on an image that was 600 pixels wide, this would convert it to 300 (0.5 x 600 = 300).
Each bounding box is drawn on the image using a red line. Finally the edited image is saved, overwriting the original image.
1. Run the app with the camera pointing at some stock on a shelf. You will see the `image.jpg` file in the VS Code explorer, and you will be able to select it to see the bounding boxes.
![4 cans of tomato paste with bounding boxes around each can](../../../images/rpi-stock-with-bounding-boxes.jpg)
## Count stock
In the image shown above, the bounding boxes have a small overlap. If this overlap was much larger, then the bounding boxes may indicate the same object. To count the objects correctly, you need to ignore boxes with a significant overlap.
### Task - count stock ignoring overlap
1. The Pip package [Shapely](https://pypi.org/project/Shapely/) can be used to calculate the intersection. If you are using a Raspberry Pi, you will need to install a library dependency first:
```sh
sudo apt install libgeos-dev
```
1. Install the Shapely Pip package:
```sh
pip3 install shapely
```
If you are using a virtual IoT device, make sure to run this from inside the activated virtual environment.
1. Add the following import statement to the top of the `app.py` file:
```python
from shapely.geometry import Polygon
```
This imports code needed to create polygons to calculate overlap.
1. Above the code that draws the bounding boxes, add the following code:
```python
overlap_threshold = 0.20
```
This defines the percentage overlap allowed before the bounding boxes are considered to be the same object. 0.20 defines a 20% overlap.
1. To calculate overlap using Shapely, the bounding boxes need to be converted into Shapely polygons. Add the following function to do this:
```python
def create_polygon(prediction):
scale_left = prediction.bounding_box.left
scale_top = prediction.bounding_box.top
scale_right = prediction.bounding_box.left + prediction.bounding_box.width
scale_bottom = prediction.bounding_box.top + prediction.bounding_box.height
return Polygon([(scale_left, scale_top), (scale_right, scale_top), (scale_right, scale_bottom), (scale_left, scale_bottom)])
```
This creates a polygon using the bounding box of a prediction.
1. The logic for removing overlapping objects involves comparing all bounding boxes, and if any pair of predictions has bounding boxes that overlap by more than the threshold, deleting one of the predictions. To compare all the predictions, you compare prediction 1 with 2, 3, 4, etc., then 2 with 3, 4, etc. The following code does this:
```python
to_delete = []
for i in range(0, len(predictions)):
polygon_1 = create_polygon(predictions[i])
for j in range(i+1, len(predictions)):
polygon_2 = create_polygon(predictions[j])
overlap = polygon_1.intersection(polygon_2).area
smallest_area = min(polygon_1.area, polygon_2.area)
if overlap > (overlap_threshold * smallest_area):
to_delete.append(predictions[i])
break
for d in to_delete:
predictions.remove(d)
print(f'Counted {len(predictions)} stock items')
```
The overlap is calculated using the Shapely `Polygon.intersection` method, which returns a polygon covering the overlapping area; the area is then read from this polygon. The overlap threshold is not an absolute value, but a percentage of the smaller bounding box: the smaller of the two bounding boxes is found, and the threshold is used to work out the largest overlap area allowed before the two boxes are treated as the same object. If the overlap exceeds this, the prediction is marked for deletion.
Once a prediction has been marked for deletion it doesn't need to be checked again, so the inner loop breaks out to check the next prediction. You can't delete items from a list whilst iterating through it, so the bounding boxes that overlap more than the threshold are added to the `to_delete` list, then deleted at the end.
Finally the stock count is printed to the console. This could then be sent to an IoT service to alert if the stock levels are low. All of this code is before the bounding boxes are drawn, so you will see the stock predictions without overlaps on the generated images.
> 💁 This is a very simplistic way to remove overlaps, just removing the first one in an overlapping pair. For production code, you would want to put more logic in here, such as considering the overlaps between multiple objects, or whether one bounding box is contained by another.
1. Run the app with the camera pointing at some stock on a shelf. The output will indicate the number of bounding boxes without overlaps that exceed the threshold. Try adjusting the `overlap_threshold` value to see predictions being ignored.
> 💁 You can find this code in the [code-count/pi](code-count/pi) or [code-count/virtual-device](code-count/virtual-device) folder.
😀 Your stock counter program was a success!

@ -0,0 +1,74 @@
# Call your object detector from your IoT device - Virtual IoT Hardware and Raspberry Pi
Once your object detector has been published, it can be used from your IoT device.
## Copy the image classifier project
The majority of your stock detector is the same as the image classifier you created in a previous lesson.
### Task - copy the image classifier project
1. Create a folder called `stock-counter` either on your computer if you are using a virtual IoT device, or on your Raspberry Pi. If you are using a virtual IoT device make sure you set up a virtual environment.
1. Set up the camera hardware.
* If you are using a Raspberry Pi you will need to fit the PiCamera. You might also want to fix the camera in a single position, for example, by hanging the cable over a box or can, or fixing the camera to a box with double-sided tape.
* If you are using a virtual IoT device then you will need to install CounterFit and the CounterFit PyCamera shim. If you are going to use still images, capture some images that your object detector hasn't seen yet; if you are going to use your web cam, make sure it is positioned so it can see the stock you are detecting.
1. Replicate the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/README.md#task---capture-an-image-using-an-iot-device) to capture images from the camera.
1. Replicate the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/README.md#task---classify-images-from-your-iot-device) to call the image classifier. The majority of this code will be re-used to detect objects; a rough sketch of the predictor setup it relies on is shown after this list.
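In case you don't have the earlier lesson's code to hand, the predictor setup that the rest of this lesson builds on looks roughly like the sketch below. This is a reminder of the earlier lesson rather than new code to add here, so treat your working classifier code as the source of truth; `prediction_url` and `prediction_key` are placeholders for your own Custom Vision values.

```python
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

prediction_url = '<prediction url>'
prediction_key = '<prediction key>'

# The prediction URL contains the endpoint, the project ID and the published iteration name
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]

prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)
```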
## Change the code from a classifier to an image detector
The code you used to classify images is very similar to the code to detect objects. The main difference is the method called on the Custom Vision SDK, and the results of the call.
### Task - change the code from a classifier to an image detector
1. Delete the three lines of code that classify the image and process the predictions:
```python
results = predictor.classify_image(project_id, iteration_name, image)

for prediction in results.predictions:
    print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
```
Remove these three lines.
1. Add the following code to detect objects in the image:
```python
results = predictor.detect_image(project_id, iteration_name, image)

threshold = 0.3

predictions = list(prediction for prediction in results.predictions if prediction.probability > threshold)

for prediction in predictions:
    print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
```
This code calls the `detect_image` method on the predictor to run the object detector. It then gathers all the predictions with a probability above a threshold, printing them to the console.
Unlike an image classifier that only returns one result per tag, the object detector will return multiple results, so any with a low probability need to be filtered out.
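Each detection also carries a bounding box alongside its tag and probability. The `left`, `top`, `width` and `height` values are fractions of the image size, and they are used later when counting stock. As an optional check (not a required change), you could extend the loop to print them too:

```python
for prediction in predictions:
    box = prediction.bounding_box
    print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
    # left, top, width and height are all values between 0 and 1,
    # relative to the size of the image
    print(f'  left={box.left:.2f}, top={box.top:.2f}, width={box.width:.2f}, height={box.height:.2f}')
```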
1. Run this code and it will capture an image, send it to the object detector, and print out the detected objects. If you are using a virtual IoT device, ensure you have an appropriate image set in CounterFit, or your web cam is selected. If you are using a Raspberry Pi, make sure your camera is pointing at objects on a shelf.
```output
pi@raspberrypi:~/stock-counter $ python3 app.py
tomato paste: 34.13%
tomato paste: 33.95%
tomato paste: 35.05%
tomato paste: 32.80%
```
> 💁 You may need to adjust the `threshold` to an appropriate value for your images.
You will be able to see the image that was taken, and these values in the **Predictions** tab in Custom Vision.
![4 cans of tomato paste on a shelf with predictions for the 4 detections of 35.8%, 33.5%, 25.7% and 16.6%](../../../images/custom-vision-stock-prediction.png)
> 💁 You can find this code in the [code-detect/pi](code-detect/pi) or [code-detect/virtual-device](code-detect/virtual-device) folder.
😀 Your stock counter program was a success!

@ -0,0 +1,167 @@
# Count stock from your IoT device - Wio Terminal
A combination of the predictions and their bounding boxes can be used to count stock in an image.
## Count stock
![4 cans of tomato paste with bounding boxes around each can](../../../images/rpi-stock-with-bounding-boxes.jpg)
In the image shown above, the bounding boxes have a small overlap. If this overlap were much larger, then the bounding boxes might indicate the same object. To count the objects correctly, you need to ignore boxes with a significant overlap.
### Task - count stock ignoring overlap
1. Open your `stock-counter` project if it is not already open.
1. Above the `processPredictions` function, add the following code:
```cpp
const float overlap_threshold = 0.20f;
```
This defines the percentage overlap allowed before the bounding boxes are considered to be the same object. 0.20 defines a 20% overlap.
1. Below this, and above the `processPredictions` function, add the following code to calculate the overlap between two rectangles:
```cpp
struct Point {
    float x, y;
};

struct Rect {
    Point topLeft, bottomRight;
};

float area(Rect rect)
{
    return abs(rect.bottomRight.x - rect.topLeft.x) * abs(rect.bottomRight.y - rect.topLeft.y);
}

float overlappingArea(Rect rect1, Rect rect2)
{
    float left = max(rect1.topLeft.x, rect2.topLeft.x);
    float right = min(rect1.bottomRight.x, rect2.bottomRight.x);
    float top = max(rect1.topLeft.y, rect2.topLeft.y);
    float bottom = min(rect1.bottomRight.y, rect2.bottomRight.y);

    if ( right > left && bottom > top )
    {
        return (right-left)*(bottom-top);
    }

    return 0.0f;
}
```
This code defines a `Point` struct to store points on the image, and a `Rect` struct to define a rectangle using top left and bottom right coordinates. It then defines an `area` function that calculates the area of a rectangle from those coordinates.
Next it defines an `overlappingArea` function that calculates the overlapping area of 2 rectangles. If they don't overlap, it returns 0.
1. Below the `overlappingArea` function, declare a function to convert a bounding box to a `Rect`:
```cpp
Rect rectFromBoundingBox(JsonVariant prediction)
{
    JsonObject bounding_box = prediction["boundingBox"].as<JsonObject>();

    float left = bounding_box["left"].as<float>();
    float top = bounding_box["top"].as<float>();
    float width = bounding_box["width"].as<float>();
    float height = bounding_box["height"].as<float>();

    Point topLeft = {left, top};
    Point bottomRight = {left + width, top + height};

    return {topLeft, bottomRight};
}
```
This takes a prediction from the object detector, extracts the bounding box, and uses the values from the bounding box to define a rectangle. The right side is calculated as the left plus the width. The bottom is calculated as the top plus the height.
1. The predictions need to be compared to each other, and if 2 predictions have an overlap of more than the threshold, one of them needs to be deleted. The overlap threshold is a percentage, so it needs to be multiplied by the size of the smallest bounding box to check that the overlap exceeds the given percentage of that bounding box, not the given percentage of the whole image. For example, with a 20% threshold and a smallest bounding box area of 0.06, any overlapping area above 0.012 means the two boxes are treated as the same object. Start by deleting the contents of the `processPredictions` function.
1. Add the following to the empty `processPredictions` function:
```cpp
std::vector<JsonVariant> passed_predictions;

for (int i = 0; i < predictions.size(); ++i)
{
    Rect prediction_1_rect = rectFromBoundingBox(predictions[i]);
    float prediction_1_area = area(prediction_1_rect);
    bool passed = true;

    for (int j = i + 1; j < predictions.size(); ++j)
    {
        Rect prediction_2_rect = rectFromBoundingBox(predictions[j]);
        float prediction_2_area = area(prediction_2_rect);

        float overlap = overlappingArea(prediction_1_rect, prediction_2_rect);
        float smallest_area = min(prediction_1_area, prediction_2_area);

        if (overlap > (overlap_threshold * smallest_area))
        {
            passed = false;
            break;
        }
    }

    if (passed)
    {
        passed_predictions.push_back(predictions[i]);
    }
}
```
This code declares a vector to store the predictions that don't overlap. It then loops through all the predictions, creating a `Rect` from the bounding box.
Next this code loops through the remaining predictions, starting at the one after the current prediction. This stops predictions being compared more than once - once 1 and 2 have been compared, there's no need to compare 2 with 1, only with 3, 4, etc.
For each pair of predictions the overlapping area is calculated. This is then compared to the area of the smallest bounding box - if the overlap exceeds the threshold percentage of the smallest bounding box, the prediction is marked as not passed. If, after comparing all the overlaps, the prediction passes the checks, it is added to the `passed_predictions` collection.
> 💁 This is a very simplistic way to remove overlaps, just removing the first one in an overlapping pair. For production code, you would want to add more logic here, such as considering the overlaps between multiple objects, or whether one bounding box is contained by another.
1. After this, add the following code to send details of the passed predictions to the serial monitor:
```cpp
for(JsonVariant prediction : passed_predictions)
{
    String boundingBox = prediction["boundingBox"].as<String>();
    String tag = prediction["tagName"].as<String>();
    float probability = prediction["probability"].as<float>();

    // The bounding box JSON makes this line long, so use a buffer
    // comfortably larger than the 32 characters used elsewhere
    char buff[256];
    sprintf(buff, "%s:\t%.2f%%\t%s", tag.c_str(), probability * 100.0, boundingBox.c_str());
    Serial.println(buff);
}
```
This code loops through the passed predictions and prints their details to the serial monitor.
1. Below this, add code to print the number of counted items to the serial monitor:
```cpp
Serial.print("Counted ");
Serial.print(passed_predictions.size());
Serial.println(" stock items.");
```
This could then be sent to an IoT service to alert if the stock levels are low.
1. Upload and run your code. Point the camera at objects on a shelf and press the C button. Try adjusting the `overlap_threshold` value to see predictions being ignored.
```output
Connecting to WiFi..
Connected!
Image captured
Image read to buffer with length 17416
tomato paste: 35.84% {"left":0.395631,"top":0.215897,"width":0.180768,"height":0.359364}
tomato paste: 35.87% {"left":0.378554,"top":0.583012,"width":0.14824,"height":0.359382}
tomato paste: 34.11% {"left":0.699024,"top":0.592617,"width":0.124411,"height":0.350456}
tomato paste: 35.16% {"left":0.513006,"top":0.647853,"width":0.187472,"height":0.325817}
Counted 4 stock items.
```
> 💁 You can find this code in the [code-count/wio-terminal](code-count/wio-terminal) folder.
😀 Your stock counter program was a success!

@ -0,0 +1,102 @@
# Call your object detector from your IoT device - Wio Terminal
Once your object detector has been published, it can be used from your IoT device.
## Copy the image classifier project
The majority of your stock detector is the same as the image classifier you created in a previous lesson.
### Task - copy the image classifier project
1. Connect your ArduCam to your Wio Terminal, following the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/wio-terminal-camera.md#task---connect-the-camera).
You might also want to fix the camera in a single position, for example, by hanging the cable over a box or can, or fixing the camera to a box with double-sided tape.
1. Create a brand new Wio Terminal project using PlatformIO. Call this project `stock-counter`.
1. Replicate the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/README.md#task---capture-an-image-using-an-iot-device) to capture images from the camera.
1. Replicate the steps from [lesson 2 of the manufacturing project](../../../4-manufacturing/lessons/2-check-fruit-from-device/README.md#task---classify-images-from-your-iot-device) to call the image classifier. The majority of this code will be re-used to detect objects.
## Change the code from a classifier to an image detector
The code you used to classify images is very similar to the code to detect objects. The main difference is the URL that is called, which you obtained from Custom Vision, and the results of the call.
### Task - change the code from a classifier to an image detector
1. Add the following include directive to the top of the `main.cpp` file:
```cpp
#include <vector>
```
1. Rename the `classifyImage` function to `detectStock`, both the name of the function and the call in the `buttonPressed` function.
1. Above the `detectStock` function, declare a threshold to filter out any detections that have a low probability:
```cpp
const float threshold = 0.3f;
```
Unlike an image classifier that only returns one result per tag, the object detector will return multiple results, so any with a low probability need to be filtered out.
1. Above the `detectStock` function, declare a function to process the predictions:
```cpp
void processPredictions(std::vector<JsonVariant> &predictions)
{
    for(JsonVariant prediction : predictions)
    {
        String tag = prediction["tagName"].as<String>();
        float probability = prediction["probability"].as<float>();

        char buff[32];
        sprintf(buff, "%s:\t%.2f%%", tag.c_str(), probability * 100.0);
        Serial.println(buff);
    }
}
```
This takes a list of predictions and prints them to the serial monitor.
1. In the `detectStock` function, replace the contents of the `for` loop that loops through the predictions with the following:
```cpp
std::vector<JsonVariant> passed_predictions;

for(JsonVariant prediction : predictions)
{
    float probability = prediction["probability"].as<float>();
    if (probability > threshold)
    {
        passed_predictions.push_back(prediction);
    }
}

processPredictions(passed_predictions);
```
This loops through the predictions, comparing the probability to the threshold. All predictions with a probability higher than the threshold are added to the `passed_predictions` vector and passed to the `processPredictions` function.
1. Upload and run your code. Point the camera at objects on a shelf and press the C button. You will see the output in the serial monitor:
```output
Connecting to WiFi..
Connected!
Image captured
Image read to buffer with length 17416
tomato paste: 35.84%
tomato paste: 35.87%
tomato paste: 34.11%
tomato paste: 35.16%
```
> 💁 You may need to adjust the `threshold` to an appropriate value for your images.
You will be able to see the image that was taken, and these values in the **Predictions** tab in Custom Vision.
![4 cans of tomato paste on a shelf with predictions for the 4 detections of 35.8%, 33.5%, 25.7% and 16.6%](../../../images/custom-vision-stock-prediction.png)
> 💁 You can find this code in the [code-detect/wio-terminal](code-detect/wio-terminal) folder.
😀 Your stock counter program was a success!

@ -1,7 +1,5 @@
# Recognize speech with an IoT device
Add a sketchnote if possible/appropriate
This video gives an overview of the Azure speech service, a topic that will be covered in this lesson:
[![How to get started using your Cognitive Services Speech resource from the Microsoft Azure YouTube channel](https://img.youtube.com/vi/iW0Fw0l3mrA/0.jpg)](https://www.youtube.com/watch?v=iW0Fw0l3mrA)

@ -1,9 +1,5 @@
# Understand language
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/43)

@ -1,9 +1,5 @@
# Set a timer and provide spoken feedback
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/45)

@ -1,7 +1,5 @@
# Support multiple languages
Add a sketchnote if possible/appropriate
This video gives an overview of the Azure speech services, covering speech to text and text to speech from earlier lessons, as well as translating speech, a topic covered in this lesson:
[![Recognizing speech with a few lines of Python from Microsoft Build 2020](https://img.youtube.com/vi/h6xbpMPSGEA/0.jpg)](https://www.youtube.com/watch?v=h6xbpMPSGEA)

@ -8,10 +8,6 @@
[![GitHub forks](https://img.shields.io/github/forks/microsoft/IoT-For-Beginners.svg?style=social&label=Fork&maxAge=2592000)](https://GitHub.com/microsoft/IoT-For-Beginners/network/)
[![GitHub stars](https://img.shields.io/github/stars/microsoft/IoT-For-Beginners.svg?style=social&label=Star&maxAge=2592000)](https://GitHub.com/microsoft/IoT-For-Beginners/stargazers/)
![Under development animated GIF](https://media.giphy.com/media/3o7qE1YN7aBOFPRw8E/giphy.gif)
**This repo is under heavy development. Check back soon for more updates.**
# IoT for Beginners - A Curriculum
Azure Cloud Advocates at Microsoft are pleased to offer a 12-week, 24-lesson curriculum all about IoT basics. Each lesson includes pre- and post-lesson quizzes, written instructions to complete the lesson, a solution, an assignment and more. Our project-based pedagogy allows you to learn while building, a proven way for new skills to 'stick'.
@ -26,13 +22,13 @@ The projects cover the journey of food from farm to table. This includes farming
> **Teachers**, we have [included some suggestions](for-teachers.md) on how to use this curriculum. If you would like to create your own lessons, we have also included a [lesson template](lesson-template/README.md).
> **Students**, to use this curriculum on your own, fork the entire repo and complete the exercises on your own, starting with a pre-lecture quiz, then reading the lecture and completing the rest of the activities. Try to create the projects by comprehending the lessons rather than copying the solution code; however that code is available in the /solutions folders in each project-oriented lesson. Another idea would be to form a study group with friends and go through the content together. For further study, we recommend [Microsoft Learn](create a Learn collection and post it here) and by watching the videos mentioned below.
> **Students**, to use this curriculum on your own, fork the entire repo and complete the exercises on your own, starting with a pre-lecture quiz, then reading the lecture and completing the rest of the activities. Try to create the projects by comprehending the lessons rather than copying the solution code; however that code is available in the /solutions folders in each project-oriented lesson. Another idea would be to form a study group with friends and go through the content together. For further study, we recommend [Microsoft Learn](https://docs.microsoft.com/users/jimbobbennett/collections/ke2ehd351jopwr?WT.mc_id=academic-17441-jabenn).
> Your promo video here
[![Promo video](./images/iot-for-beginners.png)](https://youtube.com/watch?v=R1wrdtmBSII "Promo video")
> 💁 Click the image above for a video about the project and the folks who created it!
> 💁 Click the image above for a video about the project!
## Pedagogy
@ -68,7 +64,7 @@ We have two choices of IoT hardware to use for the projects depending on persona
| | Project Name | Concepts Taught | Learning Objectives | Linked Lesson |
| :-: | :----------: | :-------------: | ------------------- | :-----------: |
| 01 | [Getting started](./1-getting-started) | Introduction to IoT | Learn the basic principles of IoT and the basic building blocks of IoT solutions such as sensors and cloud services whilst you are setting up your first IoT device | [Introduction to IoT](./1-getting-started/lessons/1-introduction-to-iot/README.md) |
| 02 | [Getting started](./1-getting-started) | A deeper dive into IoT| Learn more about the components of an IoT system, as well as microcontrollers and single-board computers | [A deeper dive into IoT](./1-getting-started/lessons/2-deeper-dive/README.md) |
| 02 | [Getting started](./1-getting-started) | A deeper dive into IoT | Learn more about the components of an IoT system, as well as microcontrollers and single-board computers | [A deeper dive into IoT](./1-getting-started/lessons/2-deeper-dive/README.md) |
| 03 | [Getting started](./1-getting-started) | Interact with the physical world with sensors and actuators | Learn about sensors to gather data from the physical world, and actuators to send feedback, whilst you build a nightlight | [Interact with the physical world with sensors and actuators](./1-getting-started/lessons/3-sensors-and-actuators/README.md) |
| 04 | [Getting started](./1-getting-started) | Connect your device to the Internet | Learn about how to connect an IoT device to the Internet to send and receive messages by connecting your nightlight to an MQTT broker | [Connect your device to the Internet](./1-getting-started/lessons/4-connect-internet/README.md) |
| 05 | [Farm](./2-farm) | Predict plant growth | Learn how to predict plant growth using temperature data captured by an IoT device | [Predict plant growth](./2-farm/lessons/1-predict-plant-growth/README.md) |

@ -1,24 +1,10 @@
# TODO: The maintainer of this repo has not yet edited this file
**REPO OWNER**: Do you want Customer Service & Support (CSS) support for this product/project?
- **No CSS support:** Fill out this template with information about how to file issues and get help.
- **Yes CSS support:** Fill out an intake form at [aka.ms/spot](https://aka.ms/spot). CSS will work with/help you to determine next steps. More details also available at [aka.ms/onboardsupport](https://aka.ms/onboardsupport).
- **Not sure?** Fill out a SPOT intake as though the answer were "Yes". CSS will help you decide.
*Then remove this first heading from this SUPPORT.MD file before publishing your repo.*
# Support
## How to file issues and get help
This project uses GitHub Issues to track bugs and feature requests. Please search the existing
issues before filing new issues to avoid duplicates. For new issues, file your bug or
feature request as a new Issue.
This project uses GitHub Issues to track bugs and feature requests. Please search the existing issues before filing new issues to avoid duplicates. For new issues, file your bug or feature request as a new Issue.
For help and questions about using this project, please **REPO MAINTAINER: INSERT INSTRUCTIONS HERE
FOR HOW TO ENGAGE REPO OWNERS OR COMMUNITY FOR HELP. COULD BE A STACK OVERFLOW TAG OR OTHER
CHANNEL. WHERE WILL YOU HELP PEOPLE?**.
For help and questions about using this project, please contact us by raising an issue in this repo.
## Microsoft Support Policy

@ -10,6 +10,15 @@ The specific hardware was chosen to reduce the complexity of the lessons and ass
You will also need a few non-technical items, such as soil or a pot plant, and fruit or vegetables.
## Buy the kits
![The Seeed studios logo](./images/seeed-logo.png)
Seeed Studios have very kindly made all the hardware available as easy to purchase kits:
* [IoT for beginners with Seeed and Microsoft - Wio Terminal Starter Kit]()
* [IoT for beginners with Seeed and Microsoft - Raspberry Pi 4 Starter Kit](https://www.seeedstudio.com/IoT-for-beginners-with-Seeed-and-Microsoft-Raspberry-Pi-Starter-Kit.html)
## Arduino
All the device code for Arduino is in C++. To complete all the assignments you will need the following:

@ -104,7 +104,7 @@
},
{
"id": 3,
"title": "Lesson 2 - Introduction to IoT devices: Pre-Lecture Quiz",
"title": "Lesson 2 - A deeper dive into IoT: Pre-Lecture Quiz",
"quiz": [
{
"questionText": "The T in IoT stands for:",
@ -157,7 +157,7 @@
},
{
"id": 4,
"title": "Lesson 2 - Introduction to IoT devices: Post-Lecture Quiz",
"title": "Lesson 2 - A deeper dive into IoT: Post-Lecture Quiz",
"quiz": [
{
"questionText": "The three steps in a CPU instruction cycle are:",
@ -388,7 +388,7 @@
"isCorrect": "false"
},
{
"answerText": "It depends on the command, the device a the requirements of the IoT app",
"answerText": "It depends on the command, the device and the requirements of the IoT app",
"isCorrect": "true"
}
]
@ -960,7 +960,7 @@
"title": "Lesson 10 - Keep your plant secure: Post-Lecture Quiz",
"quiz": [
{
"questionText": "Symmetric key encryption compares to asymmetric ky encryption in which ways:",
"questionText": "Symmetric key encryption compares to asymmetric key encryption in which ways:",
"answerOptions": [
{
"answerText": "Symmetric key encryption is slower than asymmetric",
