pull/102/head
Lateefah Bello 3 years ago
commit 7afed015f4

@@ -1,10 +1,12 @@
{
"cSpell.words": [
"ADCs",
"Alexa",
"Geospatial",
"Kbps",
"Mbps",
"Seeed",
"Siri",
"Twilio",
"UART",
"UDID",

@@ -0,0 +1,16 @@
# Getting started with IoT
In this section of the course, you will be introduced to the Internet of Things and learn the basic concepts, including developing your first "Hello World" project connected to the cloud. This project consists of a lamp that turns on according to the light levels measured by a sensor.
![The LED connected to the WIO turning on and off as the light levels change](https://github.com/microsoft/IoT-For-Beginners/blob/main/images/wio-running-assignment-1-1.gif?raw=true)
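The lamp behavior just described can be sketched as plain logic, independent of any hardware. This is an illustrative sketch only - the threshold value and function names are assumptions for the example, not the course's actual sensor API:

```python
# Illustrative nightlight logic: turn the LED on when the measured light is low.
# LIGHT_THRESHOLD and led_should_be_on are invented names, not the course API.
LIGHT_THRESHOLD = 300  # assumed raw reading cutoff from a light sensor

def led_should_be_on(light_level: int) -> bool:
    """The LED turns on when the measured light level drops below the threshold."""
    return light_level < LIGHT_THRESHOLD

print(led_should_be_on(120))  # dark room -> True
print(led_should_be_on(900))  # bright room -> False
```

On a real device the same comparison runs in a loop, with `light_level` read from the sensor and the result driving the LED pin.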
## Lessons
1. [Introduction to IoT](lessons/1-introduction-to-iot/README.md)
1. [A deeper dive into IoT](lessons/2-deeper-dive/README.md)
1. [Interact with the physical world with sensors and actuators](lessons/3-sensors-and-actuators/README.md)
1. [Connect your device to the Internet](lessons/4-connect-internet/README.md)
## Credits
All lessons were written with ♥️ by [Jim Bennett](https://GitHub.com/JimBobBennett)

@@ -26,11 +26,11 @@ The term 'Internet of Things' was coined by [Kevin Ashton](https://wikipedia.org
> **Sensors** gather information from the world, such as measuring speed, temperature or location.
>
> **Actuators** convert electrical signals into real-world interactions such as triggering a switch, turning on lights, making sounds, or sending control signals to other hardware, for example, to turn on a power socket.
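The sensor-to-actuator flow defined above can be simulated in a few lines of Python. Everything here is illustrative - there is no real hardware, and the function names are invented for the example:

```python
# A pure-software model of the sensor -> logic -> actuator flow (no hardware,
# no real library): a temperature reading is the sensor input, and switching
# a fan is the actuator output.
def read_temperature() -> float:
    """Pretend sensor: returns a fixed reading in Celsius."""
    return 30.5

def set_fan(on: bool) -> str:
    """Pretend actuator: converts the logical 'signal' into an action."""
    return "fan on" if on else "fan off"

reading = read_temperature()
print(set_fan(reading > 25.0))  # hot room -> fan on
```

Real IoT code follows this same shape: read, decide, act - only the read and act steps talk to physical pins or cloud services.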
IoT as a technology area is more than just devices - it includes cloud-based services that can process the sensor data, or send requests to actuators connected to IoT devices. It also includes devices that don't have or don't need Internet connectivity, often referred to as edge devices. These are devices that can process and respond to sensor data themselves, usually using AI models trained in the cloud.
IoT is a fast-growing technology field. It is estimated that by the end of 2020, 30 billion IoT devices were deployed and connected to the Internet. Looking to the future, it is estimated that by 2025, IoT devices will be gathering almost 80 zettabytes of data, or 80 trillion gigabytes. That's a lot of data!
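The unit conversion in that estimate is easy to verify: a zettabyte is 10^21 bytes and a gigabyte is 10^9 bytes, so 80 zettabytes works out to exactly 80 trillion gigabytes:

```python
# Quick check of the arithmetic above, using SI (decimal) units.
ZETTABYTE = 10**21  # bytes
GIGABYTE = 10**9    # bytes
TRILLION = 10**12

zettabytes = 80
gigabytes = zettabytes * ZETTABYTE / GIGABYTE
print(gigabytes / TRILLION)  # 80.0
```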
![A graph showing active IoT devices over time, with an upward trend from under 5 billion in 2015 to over 30 billion in 2025](../../../images/connected-iot-devices.svg)
@@ -40,15 +40,15 @@ This data is the key to IoT's success. To be a successful IoT developer, you nee
## IoT devices
The **T** in IoT stands for **Things** - devices that interact with the physical world around them either by gathering data from sensors or providing real-world interactions via actuators.
Devices for production or commercial use, such as consumer fitness trackers or industrial machine controllers, are usually custom-made. They use custom circuit boards, maybe even custom processors, designed to meet the needs of a particular task, whether that's being small enough to fit on a wrist, or rugged enough to work in a high-temperature, high-stress or high-vibration factory environment.
As a developer either learning about IoT or creating a device prototype, you'll need to start with a developer kit. These are general-purpose IoT devices designed for developers to use, often with features that you wouldn't have on a production device, such as a set of external pins to connect sensors or actuators to, hardware to support debugging, or additional resources that would add unnecessary cost when doing a large manufacturing run.
These developer kits usually fall into two categories - microcontrollers and single-board computers. These will be introduced here, and we'll go into more detail in the next lesson.
> 💁 Your phone can also be considered to be a general-purpose IoT device, with sensors and actuators built in, with different apps using the sensors and actuators in different ways with different cloud services. You can even find some IoT tutorials that use a phone app as an IoT device.
### Microcontrollers
@@ -68,7 +68,7 @@ Microcontrollers are typically low cost computing devices, with average prices f
Microcontrollers are designed to be programmed to do a limited number of very specific tasks, rather than being general-purpose computers like PCs or Macs. Except for very specific scenarios, you can't connect a monitor, keyboard and mouse and use them for general-purpose tasks.
Microcontroller developer kits usually come with additional sensors and actuators on board. Most boards will have one or more LEDs you can program, along with other devices such as standard plugs for adding more sensors or actuators using various manufacturers' ecosystems, or built-in sensors (usually the most popular ones such as temperature sensors). Some microcontrollers have built-in wireless connectivity such as Bluetooth or WiFi, or have additional microcontrollers on the board to add this connectivity.
> 💁 Microcontrollers are usually programmed in C/C++.
@@ -90,7 +90,7 @@ Single-board computers are fully-featured computers, so can be programmed in any
### Hardware choices for the rest of the lessons
All the subsequent lessons include assignments using an IoT device to interact with the physical world and communicate with the cloud. Each lesson supports 3 device choices - Arduino (using a Seeed Studios Wio Terminal), or a single-board computer, either a physical device (a Raspberry Pi 4) or a virtual single-board computer running on your PC or Mac.
You can read about the hardware needed to complete all the assignments in the [hardware guide](../../../hardware.md).
@@ -148,15 +148,15 @@ IoT covers a huge range of use cases, across a few broad groups:
### Consumer IoT
Consumer IoT refers to IoT devices that consumers will buy and use around the home. Some of these devices are incredibly useful, such as smart speakers, smart heating systems and robotic vacuum cleaners. Others are questionable in their usefulness, such as voice-controlled taps that then mean you cannot turn them off as the voice control cannot hear you over the sound of running water.
Consumer IoT devices are empowering people to achieve more in their surroundings, especially the 1 billion who have a disability. Robotic vacuum cleaners can provide clean floors to people with mobility issues who cannot vacuum themselves; voice-controlled ovens allow people with limited vision or motor control to heat their ovens with only their voice; health monitors allow patients to monitor chronic conditions themselves, with more regular and more detailed updates on their conditions. These devices are becoming so ubiquitous that even young children use them as part of their daily lives, for example, students doing virtual schooling during the COVID pandemic setting timers on smart home devices to track their schoolwork or alarms to remind them of upcoming class meetings.
✅ What consumer IoT devices do you have on your person or in your home?
### Commercial IoT
Commercial IoT covers the use of IoT in the workplace. In an office setting, there may be occupancy sensors and motion detectors to manage lighting and heating to only keep the lights and heat off when not needed, reducing cost and carbon emissions. In a factory, IoT devices can monitor for safety hazards such as workers not wearing hard hats or noise that has reached dangerous levels. In retail, IoT devices can measure the temperature of cold storage, alerting the shop owner if a fridge or freezer is outside the required temperature range, or they can monitor items on shelves to direct employees to refill produce that has been sold. The transport industry is relying more and more on IoT to monitor vehicle locations, track on-road mileage for road user charging, track driver hours and break compliance, or notify staff when a vehicle is approaching a depot to prepare for loading or unloading.
✅ What commercial IoT devices do you have in your school or workplace?
@@ -166,7 +166,7 @@ Industrial IoT, or IIoT, is the use of IoT devices to control and manage machine
Factories use IoT devices in many different ways. Machinery can be monitored with multiple sensors to track things like temperature, vibration and rotation speed. This data can then be monitored to allow the machine to be stopped if it goes outside of certain tolerances - if it runs too hot, for example, it gets shut down. This data can also be gathered and analyzed over time to do predictive maintenance, where AI models will look at the data leading up to a failure, and use that to predict other failures before they happen.
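The tolerance check described here amounts to comparing each sensor reading against an allowed range. A minimal sketch, with illustrative sensor names and limits (not taken from any real machine specification):

```python
# Assumed allowed ranges per sensor, as (low, high) pairs - illustrative values.
LIMITS = {"temperature_c": (0, 85), "vibration_mm_s": (0, 7.1)}

def out_of_tolerance(readings: dict) -> list:
    """Return the names of readings that fall outside their allowed range."""
    return [name for name, value in readings.items()
            if name in LIMITS and not (LIMITS[name][0] <= value <= LIMITS[name][1])]

# A machine running too hot triggers the check; normal vibration does not.
print(out_of_tolerance({"temperature_c": 92.0, "vibration_mm_s": 3.0}))  # ['temperature_c']
```

A real system would run this on each new batch of telemetry and trigger a shutdown or alert when the returned list is non-empty.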
Digital agriculture is important if the planet is to feed the growing population, especially for the 2 billion people in 500 million households that survive on [subsistence farming](https://wikipedia.org/wiki/Subsistence_agriculture). Digital agriculture can range from a few single-digit-dollar sensors to massive commercial setups. A farmer can start by monitoring temperatures and using [growing degree days](https://wikipedia.org/wiki/Growing_degree-day) to predict when a crop will be ready for harvest. They can connect soil moisture monitoring to automated watering systems to give their plants as much water as is needed, but no more, ensuring their crops don't dry out without wasting water. Farmers are even taking it further and using drones, satellite data and AI to monitor crop growth, disease and soil quality over huge areas of farmland.
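The growing degree days calculation linked above is a simple formula: the day's average temperature minus a crop-specific base temperature, floored at zero. A sketch, assuming a base of 10°C (a common choice, but crop-dependent):

```python
def gdd(t_max: float, t_min: float, t_base: float = 10.0) -> float:
    """One day's growing degree days: average temperature above the base, never negative."""
    return max((t_max + t_min) / 2 - t_base, 0.0)

print(gdd(26, 14))  # (26 + 14) / 2 - 10 = 10.0 degree days
print(gdd(12, 4))   # average below the base -> 0.0
```

Summing `gdd` over the season gives the accumulated heat a farmer compares against a crop's known maturity requirement.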
✅ What other IoT devices could help farmers?
@@ -191,7 +191,7 @@ You'd be amazed by just how many IoT devices you have around you. I'm writing th
* Video doorbell and security cameras
* Smart thermostat with multiple smart room sensors
* Garage door opener
* Home entertainment systems and voice-controlled TVs
* Lights
* Fitness and health trackers

@@ -2,7 +2,7 @@
## Instructions
There are many large- and small-scale IoT projects being rolled out globally, from smart farms to smart cities, in healthcare monitoring, transport, and the use of public spaces.
Search the web for details of a project that interests you, ideally one close to where you live. Explain the upsides and downsides of the project, such as what benefit comes from it, any problems it causes, and how privacy is taken into consideration.

@@ -20,7 +20,7 @@ Install the Grove base hat on your Pi and configure the Pi
![Fitting the grove hat](../../../images/pi-grove-hat-fitting.gif)
1. Decide how you want to program your Pi, and head to the relevant section below:
* [Work directly on your Pi](#work-directly-on-your-pi)
* [Remote access to code the Pi](#remote-access-to-code-the-pi)
@@ -53,7 +53,7 @@ To program the Pi using the Grove sensors and actuators, you will need to instal
One of the powerful features of Python is the ability to install [pip packages](https://pypi.org) - these are packages of code written by other people and published to the Internet. You can install a pip package onto your computer with one command, then use that package in your code. This Grove install script will install the pip packages you will use to work with the Grove hardware from Python.
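To illustrate the idea: once a package has been installed with pip, it can be imported like any standard-library module. This sketch only checks whether Python can find a package, so it makes no assumption about which third-party packages are actually installed:

```python
# Check whether a module or package is importable, without importing it.
import importlib.util

def is_installed(name: str) -> bool:
    """Return True if Python can find a module or package with this name."""
    return importlib.util.find_spec(name) is not None

print(is_installed("json"))            # standard library, always available -> True
print(is_installed("not_a_real_pkg"))  # never installed -> False
```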
1. Reboot the Pi either using the menu or running the following command in the Terminal:
```sh
sudo reboot
@@ -93,7 +93,7 @@ Set up the headless Pi OS.
![The Raspberry Pi Imager with Raspberry Pi OS Lite selected](../../../images/raspberry-pi-imager.png)
> 💁 Raspberry Pi OS Lite is a version of Raspberry Pi OS that doesn't have the desktop UI or UI-based tools. These aren't needed for a headless Pi, and leaving them out makes the install smaller and the boot-up time faster.
1. Select the **CHOOSE STORAGE** button, then select your SD card
@@ -109,7 +109,7 @@ Set up the headless Pi OS.
1. Select the **WRITE** button to write the OS to the SD card. If you are using macOS, you will be asked to enter your password as the underlying tool that writes disk images needs privileged access.
The OS will be written to the SD card, and once complete, the card will be ejected by the OS and you will be notified. Remove the SD card from your computer, insert it into the Pi, and power up the Pi.
#### Connect to the Pi
@@ -139,7 +139,7 @@ Remotely access the Pi.
1. If you are using Windows, the easiest way to enable ZeroConf is to install [Bonjour Print Services for Windows](http://support.apple.com/kb/DL999). You can also install [iTunes for Windows](https://www.apple.com/itunes/download/) to get a newer version of the utility (which is not available standalone).
> 💁 If you cannot connect using `raspberrypi.local`, then you can use the IP address of your Pi. Refer to the [Raspberry Pi IP address documentation](https://www.raspberrypi.org/documentation/remote-access/ip-address.md) for instructions on a number of ways to get the IP address.
1. Enter the password you set in the Raspberry Pi Imager Advanced Options
@@ -157,7 +157,7 @@ Configure the installed Pi software and install the Grove libraries.
sudo apt update && sudo apt full-upgrade --yes && sudo reboot
```
The Pi will be updated and rebooted. The `ssh` session will end when the Pi is rebooted, so leave it for about 30 seconds then reconnect.
1. From the reconnected `ssh` session, run the following command to install all the needed libraries for the Grove hardware:
@@ -235,7 +235,7 @@ Create the Hello World app.
> 💁 You need to explicitly call `python3` to run this code in case you have Python 2 installed in addition to Python 3 (the latest version). If you have Python 2 installed, then calling `python` will use Python 2 instead of Python 3.
The following output will appear in the terminal:
```output
pi@raspberrypi:~/nightlight $ python3 app.py

@@ -1,9 +0,0 @@
# Dummy File
This file acts as a placeholder for the `translations` folder. <br>
**Please remove this file after adding the first translation**
For instructions, follow the directives in the [translations guide](https://github.com/microsoft/IoT-For-Beginners/blob/main/TRANSLATIONS.md).
## THANK YOU
We truly appreciate your efforts!

@@ -0,0 +1,226 @@
# Introduction to IoT
![A sketchnote overview of this lesson](../../../../sketchnotes/lesson-1.png)
> Sketchnote by Nitya Narasimhan. Click the image for a larger version.
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/1)
## Introduction
This lesson covers some of the basics of the Internet of Things, and shows how to set up your hardware.
In this lesson we'll cover:
* [What is the 'Internet of Things'?](#what-is-the-internet-of-things)
* [IoT devices](#iot-devices)
* [Set up your device](#set-up-your-device)
* [Applications of IoT](#applications-of-iot)
* [Examples of IoT devices you may have around you](#examples-of-iot-devices-you-may-have-around-you)
## What is the 'Internet of Things'?
The term 'Internet of Things' was coined by [Kevin Ashton](https://wikipedia.org/wiki/Kevin_Ashton) in 1999, by which he originally meant connecting the Internet to everything around us through sensors. Since then, the term has been used to describe devices that interact with the world around us, usually while connected to other devices or the Internet. This work can be gathering data through sensors, or doing physical work directly through actuators (for example, flipping a switch or lighting an LED).
> **Sensors** gather information from our surroundings, such as speed, temperature or location.
>
> **Actuators** convert electrical signals into various physical actions in the real world, for example controlling a switch, turning on lights, making sounds, or sending instructions to other hardware, such as turning on a power socket.
IoT as a technology area is not just devices - it includes much more, such as cloud-based services that analyze the data gathered by sensors or send instructions to actuators. It also covers something that may surprise many: edge devices, which aren't connected to the Internet, or don't need to be. These devices use AI models trained in the cloud to analyze the data they gather themselves and decide how to act.
IoT is a very fast-growing technology field. It is estimated that by 2020, around 30 billion IoT devices had been connected to the Internet and put to use. Looking to the future, it is estimated that by 2025 these IoT devices will be gathering 80 zettabytes, or 80 trillion gigabytes, of data. That's a lot of data, isn't it?
![A graph showing active IoT devices over time, with an upward trend from under 5 billion in 2015 to over 30 billion in 2025](../../../../images/connected-iot-devices.svg)
✅ Do a little research: how much of the data IoT generates is actually used, and how much is wasted? Why is so much of this data ignored?
Keep in mind that data is at the heart of IoT's success. To become successful IoT developers, we need to understand what data to gather and how to gather it. We need to know how to make decisions based on that data, and how to act on our surroundings with the results of those decisions.
## IoT devices
The **T** in IoT stands for **Things** - devices that can interact with the world around them, either by gathering data from sensors or by doing physical work through actuators.
Devices for production or commercial use, such as a personal fitness tracker or a factory machine controller, may need to be custom-built to meet a specific need. They use custom circuit boards, and sometimes custom processors, to meet that need - which may be as modest as shaping the device to fit a wrist, or as demanding as keeping it running in the high temperatures, pressures or vibrations of a factory.
To develop as an IoT developer, whether learning the field or building a prototype, we need to start with a developer kit. These are built for general use, and have features you won't see on a consumer device, such as a set of external pins, debugging hardware, or extras you won't always need - which is why they are called general-purpose devices.
These developer kits fall into two categories - microcontrollers and single-board computers. We'll get acquainted with them now, and cover them in much more detail in the next lesson.
> 💁 Your phone can also act as a general-purpose IoT device - smartphones have sensors and actuators, and different apps can control them in many different ways. Searching for tutorials, you'll often find IoT projects built around a phone app.
### Microcontrollers
A microcontroller (also written as MCU, short for microcontroller unit) is a small computer containing:
🧠 One or more central processing units (CPUs) - the microcontroller's brain, which runs it.
💾 Memory (RAM and program memory) - which holds all the programs, data and variables.
🔌 Programmable input/output connections - to communicate with external peripherals such as sensors and actuators.
Microcontrollers are typically low-cost computing devices, with average market prices between US$0.03 and US$0.50. Developer kits start at around US$4 and go up from there, depending on what is included. The [Wio Terminal](https://www.seeedstudio.com/Wio-Terminal-p-4509.html), a microcontroller developer kit from [Seeed studios](https://www.seeedstudio.com) that includes sensors, actuators, WiFi and a screen, costs about US$30.
![A Wio Terminal](../../../../images/wio-terminal.png)
> 💁 When searching the Internet for microcontrollers, beware of searching for MCU. Amusingly, MCU search results may bring up the famous Marvel Cinematic Universe rather than microcontrollers!
Microcontrollers are designed to be programmed to do a limited number of very specific tasks, quite unlike a PC or Mac. Apart from some very specific scenarios, you can't connect a monitor, keyboard or mouse and use them for general-purpose tasks.
Microcontroller developer kits usually come with additional sensors and actuators. Most boards have one or more extra programmable LEDs, plus additional plugs that let us connect devices from various manufacturers' ecosystems or make better use of the built-in sensors. Some microcontrollers have built-in Bluetooth or WiFi connectivity, while others provide it through extra components added to the board.
> 💁 Microcontrollers are usually programmed in C/C++.
### Single-board computers
A single-board computer is a small device that has all the components of a full computer, in a small form. These devices share the characteristics of a desktop or laptop such as a PC or Mac and can run an operating system, but they are small in size, use less power, and are noticeably cheaper.
![A Raspberry Pi 4](../../../../images/raspberry-pi-4.jpg)
***Raspberry Pi 4. Michael Henzler / [Wikimedia Commons](https://commons.wikimedia.org/wiki/Main_Page) / [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)***
The Raspberry Pi is one of the best-known single-board computers.
Like a microcontroller, a single-board computer has a CPU, memory and input/output pins, but it has additional features: a graphics chip that lets us connect a monitor, audio output, and USB ports for connecting keyboards and mice or other standard USB devices such as webcams or external storage. Programs are stored on a hard drive or SD card along with the operating system, rather than on a memory chip built into the board.
> 🎓 A single-board computer can be thought of as a small version of a PC or Mac with some additional general-purpose input/output (GPIO) pins.
They are just like full computers, so they can be programmed in any programming language. IoT devices, however, are usually programmed in Python.
### Hardware choices for the rest of the lessons
All subsequent lessons will include assignments that use an IoT device to interact with the world around us and stay connected to cloud services. Each lesson supports 3 kinds of device - an Arduino (using a Seeed Studios Wio Terminal), or a single-board computer, either a physical device (a Raspberry Pi 4) or a virtual single-board computer usable on a PC or Mac (without using any physical IoT device).
The [hardware guide](../../../../hardware.md) has details of the hardware needed to complete all the assignments.
> 💁 You don't need to buy any IoT hardware to complete the assignments; we can do everything using the virtual single-board computer.
Which hardware we use depends on what we have available at school or at home, and which programming language we know or plan to learn. Both hardware variants use the same sensor ecosystem, so if we start down one path we can switch to the other without replacing most of the kit. The virtual single-board computer is equivalent to learning on a Raspberry Pi, so if we later want to work on a physical device we can easily carry the code over.
### Arduino developer kit
If we're interested in learning microcontroller development, we can complete the assignments on an Arduino. A basic understanding of C/C++ programming will be needed, because the lessons only teach the code that is relevant to the Arduino framework, the sensors and actuators in use, and the libraries that interact with the cloud.
The assignments use [Visual Studio Code](https://code.visualstudio.com/?WT.mc_id=academic-17441-jabenn) with the [PlatformIO extension for microcontroller development](https://platformio.org). You can also use the Arduino IDE if you have enough experience with it, as the lessons will not provide Arduino IDE instructions.
### Single-board computer developer kit
If we're interested in learning IoT using a single-board computer, we can complete the assignments using a Raspberry Pi, or a virtual device running on our PC or Mac.
A basic understanding of Python programming will be needed, because the lessons only teach the code that is relevant to the sensors and actuators in use, and the libraries that communicate with the cloud.
> 💁 If you want to learn to code in Python, have a look at these two video series:
>
> * [Python for Beginners](https://channel9.msdn.com/Series/Intro-to-Python-Development?WT.mc_id=academic-17441-jabenn)
> * [More Python for Beginners](https://channel9.msdn.com/Series/More-Python-for-Beginners?WT.mc_id=academic-7372-jabenn)
The assignments are done using [Visual Studio Code](https://code.visualstudio.com/?WT.mc_id=academic-17441-jabenn).
If we want to use a Raspberry Pi, we can either run it with the full desktop version of Raspberry Pi OS and do all our coding directly on the Pi using the [Raspberry Pi OS version of VS Code](https://code.visualstudio.com/docs/setup/raspberry-pi?WT.mc_id=academic-17441-jabenn), or run the Pi as a headless device from a PC or Mac using VS Code with the [Remote SSH extension](https://code.visualstudio.com/docs/remote/ssh?WT.c_id=academic-17441-jabenn). This lets us connect to the Pi and edit, debug and run code as if we were coding on it directly.
If we use the virtual device option, we code directly on our computer. Instead of using real sensors and actuators, we control everything in a simulation.
## Set up your device
Before we can start programming an IoT device, there are a few things to set up. Depending on which device we use, we need to follow the relevant instructions below.
> 💁 If you don't have a device, the [hardware guide](../../../../hardware.md) has information on which devices can be used and what extra equipment to buy. We don't have to buy any hardware, since all the projects can be run and learned virtually.
These instructions include links to third-party websites from the makers of the hardware or tools involved. This is to make sure we can always use the most up-to-date instructions for the various tools and hardware.
The links below have instructions to set up your device and complete the 'Hello World' project. Over the 4 lessons of this Getting Started part, we'll be building a nightlight project, and it begins right here.
* [Arduino - Wio Terminal](../wio-terminal.md)
* [Single-board computer - Raspberry Pi](../pi.md)
* [Single-board computer - Virtual device](../virtual-device.md)
## Applications of IoT

IoT covers a huge range of use cases, across a few broad groups:

* Consumer IoT
* Commercial IoT
* Industrial IoT
* Infrastructure IoT

✅ Do some research: for each of the areas described below, think of an example that is not given in the text.

### Consumer IoT

Consumer IoT refers to IoT devices that consumers buy and use around the home. Some of these devices are incredibly useful, such as smart speakers, smart heating systems, and robotic vacuum cleaners. Others are of questionable usefulness, such as voice-controlled water taps: if your voice can't be heard over the sound of running water, the tap can't carry out your command.

Consumer IoT devices are empowering people, especially the 1 billion people who have a disability of some kind. Robotic vacuum cleaners can provide clean floors to people with mobility issues who struggle to clean themselves, voice-controlled ovens allow people with limited vision or limited physical control to heat their ovens using only their voice, and health monitors allow patients to monitor chronic conditions more regularly and get more detailed information about their condition. These devices are becoming so ubiquitous that even young children are using them as part of their daily lives - for example, during the COVID pandemic, students doing virtual schooling set timers on smart home devices to track their schoolwork and alarms to remind them of upcoming classes.
✅ Do you have any consumer IoT devices in your home?

### Commercial IoT

Commercial IoT covers the use of IoT in the workplace. In an office, occupancy sensors and motion detectors may be used to turn lights and heating off when not needed, reducing cost and carbon emissions. In a factory, IoT devices can monitor for safety hazards, such as workers not wearing hard hats or noise that has reached dangerous levels. In retail, IoT devices can measure cold-storage temperatures, alerting the shop owner if a fridge or freezer is outside the required temperature range, and can direct employees to restock products that have been sold. The transport industry can use IoT to monitor vehicle locations, flag vehicles driven over the speed limit, record how long a driver has been driving and whether they have taken adequate breaks, and notify staff when a vehicle arrives at a depot ready for loading or unloading.

✅ Do you have any commercial IoT devices in your school or workplace?

### Industrial IoT (IIoT)

Industrial IoT, or IIoT, refers to using IoT devices to control and manage machinery at a large scale. This covers a huge range of use cases, from factories to digital agriculture.

Factories use IoT devices in many different ways. Machinery is monitored with multiple sensors to track things like temperature, vibration, and rotation speed. This data is monitored constantly so the machine can be stopped if it goes outside certain tolerances - for example, shutting it down automatically if it runs too hot. The data can also be gathered and analyzed over time for predictive maintenance, where AI models look at the conditions leading up to past failures and predict new failures before they happen.
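As a rough illustration of the threshold monitoring described above, the sketch below checks simulated sensor readings against safe operating limits and flags which sensors are out of range. The limits and readings are made-up illustrative values, not taken from any real machine.

```python
# Hypothetical safe operating limits per sensor: (low, high) - made-up values
LIMITS = {"temperature_c": (10.0, 80.0), "vibration_mm_s": (0.0, 7.1)}

def check_readings(readings):
    """Return a list of sensor names whose values are outside their safe range."""
    alerts = []
    for sensor, value in readings.items():
        low, high = LIMITS[sensor]
        if not (low <= value <= high):
            alerts.append(sensor)
    return alerts

# Simulated readings - the machine is running too hot
readings = {"temperature_c": 92.5, "vibration_mm_s": 3.2}
alerts = check_readings(readings)
if alerts:
    # In a real system, this would trigger an actuator to shut the machine down
    print(f"Shutting down - out-of-range sensors: {alerts}")
```

A real deployment would feed `check_readings` from actual sensor data and log the history for the predictive-maintenance analysis described above.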
Digital agriculture is important to feed the growing population of the planet, especially for the 2 billion people in 500 million households who survive on [subsistence farming](https://wikipedia.org/wiki/Subsistence_agriculture). Digital agriculture can range from a few low-cost sensors to massive commercial setups. A farmer can monitor temperature and use [growing degree days](https://wikipedia.org/wiki/Growing_degree-day) to predict when a crop will be ready to harvest. They can connect soil moisture monitoring to automated watering systems to give their plants exactly as much water as they need, so that water is not wasted and their crops don't dry out. Farmers can take this even further and use drones, satellite data, and AI to monitor crop growth, disease, and soil quality.
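As a sketch of the growing degree days idea mentioned above, the standard calculation averages the daily high and low temperatures and subtracts the crop's base temperature, counting only positive values. The base temperature of 10°C and the week of readings below are illustrative assumptions; the right base depends on the crop.

```python
def growing_degree_days(t_max, t_min, t_base=10.0):
    """GDD for one day: mean daily temperature above the crop's base, never negative."""
    return max(0.0, (t_max + t_min) / 2 - t_base)

# Accumulate GDD over a week of (daily high, daily low) temperatures in °C
week = [(24, 12), (26, 14), (19, 9), (15, 7), (28, 16), (22, 11), (25, 13)]
total = sum(growing_degree_days(hi, lo) for hi, lo in week)
```

A farmer's IoT setup would accumulate this total from daily sensor readings and compare it against the crop's known GDD requirement to estimate harvest readiness.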
✅ What other IoT devices could help farmers?

### Infrastructure IoT

Infrastructure IoT is the monitoring and control of the local and global infrastructure that everyone uses every day.

[Smart cities](https://wikipedia.org/wiki/Smart_city) are urban areas that use IoT devices to gather data and use it to improve how the city is run. These cities are usually run in collaboration between local government, academia, and local businesses, tracking and managing everything from transport to parking and pollution. For example, in Copenhagen, Denmark, air pollution matters to local residents, so it is measured and the data is used to provide information about the cleanest cycling and jogging routes.

[Smart power grids](https://wikipedia.org/wiki/Smart_grid) gather usage data at the level of individual homes to analyze demand for electricity. This data can inform decisions such as where to build new power stations, as well as personal-level decisions about how much power individual users are consuming, when they are using it, and how to reduce costs - for example, by charging an electric car at night.

✅ If you could use IoT devices to measure anything where you live, what would they be?

## Examples of IoT devices you may have around you

You'd be surprised how many IoT devices you have around you. I am writing this lesson from home, and I have the following devices connected to the Internet with smart features such as app control, voice control, or the ability to send data to my phone:
* Multiple smart speakers
* Fridge, dishwasher, oven, and microwave
* Electricity monitor for solar panels
* Smart plugs
* Video doorbell and security cameras
* Smart thermostat with multiple smart room sensors
* Garage door opener
* Home entertainment systems and voice-controlled TVs
* Lights
* Fitness trackers

All of these devices have sensors and/or actuators and are connected to the Internet. I can tell from my phone if my garage door is open, and close it using my smart speaker. I can even set a timer so that if it is left open at night, it closes automatically. When my doorbell rings, I can see who is there from wherever in the world I am, and speak to them via the speaker and microphone built into the doorbell. I can monitor my blood glucose, heart rate, and sleep patterns, and analyze the data to improve my health. I can control my lights via the cloud - and sometimes end up sitting in the dark when my Internet connection goes down.
---
## 🚀 Challenge

List as many IoT devices as you can that are in your home, school, or workplace - there may be more than you think!

## Post-lecture quiz

[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/2)

## Review & Self-study

Read up on the benefits and failures of consumer IoT projects. Check news sites and articles for stories of failures, such as privacy issues, hardware issues, or problems caused by a lack of connectivity.

Some examples:

* Check out the Twitter account **[Internet of Sh*t](https://twitter.com/internetofshit)** *(profanity warning)* for examples of consumer IoT failures.
* [c|net - My Apple Watch saved my life: 5 people share their stories](https://www.cnet.com/news/apple-watch-lifesaving-health-features-read-5-peoples-stories/)
* [c|net - ADT technician pleads guilty to spying on customer camera feeds for years](https://www.cnet.com/news/adt-home-security-technician-pleads-guilty-to-spying-on-customer-camera-feeds-for-years/) *(trigger warning - non-consensual voyeurism)*

## Assignment

[Investigate an IoT project](../assignment.md)

@ -1,6 +1,6 @@
# Introduction to the Internet of Things (IoT)

![A sketchnote overview of this lesson](../../../sketchnotes/lesson-1.png)

![A sketchnote overview of this lesson](../../../../sketchnotes/lesson-1.png)

> Sketchnote by [Nitya Narasimhan](https://github.com/nitya). Click the image for a larger version.
@ -17,11 +17,11 @@
In this lesson we'll cover:

* [What is the 'Internet of Things'?](#what-is-the-internet-of-things)
* [IoT devices](#iot-devices)
* [Set up your device](#set-up-your-device)
* [Applications of IoT](#applications-of-iot)
* [Examples of IoT devices you may have around you](#examples-of-iot-devices-you-may-have-around-you)

* [What is the 'Internet of Things'?](#'इंटरनेट-ऑफ-थिंग्'स-क्या-है)
* [IoT devices](#IoT-डिवाइस)
* [Set up your device](#अपना-डिवाइस-सेट-करें)
* [Applications of IoT](#IoT-.-के-अनुप्रयोग)
* [Examples of IoT devices you may have around you](#IoT-उपकरणों-के-उदाहरण-जो-आपके-आस-पास-हो-सकते-हैं)
## What is the 'Internet of Things'?
@ -35,7 +35,7 @@
IoT is a fast-growing technology area. It is estimated that by the end of 2020, 30 billion IoT devices were deployed and connected to the Internet. Looking to the future, it is estimated that by 2025, IoT devices will gather around 80 zettabytes of data, or 80 trillion gigabytes. That's a lot of data!

![A graph showing active IoT devices over time, with a trend from under 5 billion in 2015 to over 30 billion in 2025](../../../images/connected-iot-devices.svg)

![A graph showing active IoT devices over time, with a trend from under 5 billion in 2015 to over 30 billion in 2025](../../../../images/connected-iot-devices.svg)

✅ Do some research: how much of the data generated by IoT devices is actually used, and how much is wasted? Why is so much data ignored?
@ -66,7 +66,7 @@ The **T** in IoT stands for **Things** - devices that
Microcontrollers are usually low-cost computing devices, with average prices for those used in custom hardware falling to around US$0.50, and some devices costing as little as US$0.03. Developer kits can start as low as US$4, with costs rising as you add more features. The [Wio Terminal](https://www.seeedstudio.com/Wio-Terminal-p-4509.html), a microcontroller developer kit from [Seeed Studio](https://www.seeedstudio.com) with built-in sensors, actuators, WiFi, and a screen, costs around US$30.

![A Wio Terminal](../../../images/wio-terminal.png)

![A Wio Terminal](../../../../images/wio-terminal.png)

> 💁 When searching the Internet for microcontrollers, beware of searching for the term **MCU**, as this will bring back a lot of results for the Marvel Cinematic Universe rather than microcontrollers.

Microcontrollers are designed to be programmed to do a limited number of very specific tasks, rather than being general-purpose computers like PCs or Macs. Except in very specific scenarios, you can't connect a monitor, keyboard, and mouse and use them for general-purpose tasks.
@ -79,7 +79,7 @@ The **T** in IoT stands for **Things** - devices that
A single-board computer is a small computing device that has all the elements of a complete computer contained on a single small board. These devices have specifications close to a desktop or laptop PC or Mac and run a full operating system, but are small, use less power, and are substantially cheaper.

![A Raspberry Pi 4](../../../images/raspberry-pi-4.jpg)

![A Raspberry Pi 4](../../../../images/raspberry-pi-4.jpg)

***Raspberry Pi 4. Michael Henzler /
[Wikimedia Commons](https://commons.wikimedia.org/wiki/Main_Page) /
@ -97,7 +97,7 @@ The **T** in IoT stands for **Things** - devices that
All the subsequent lessons include assignments that use an IoT device to interact with the physical world and communicate with the cloud. Each lesson supports 3 device choices - an Arduino (using a Seeed Studios Wio Terminal), or a single-board computer, either a physical device (a Raspberry Pi 4) or a virtual single-board computer running on your PC or Mac.

You can read about the hardware needed to complete all the assignments in the [hardware guide](../../hardware.md).

You can read about the hardware needed to complete all the assignments in the [hardware guide](../../../../hardware.md).

> 💁 You don't need to purchase any IoT hardware to complete the assignments; you can do everything using a virtual single-board computer.

Which hardware you choose is up to you - it depends on what you have available at home or in your school, and which programming language you know or plan to learn. Both hardware variants use the same sensor ecosystem, so if you start down one path, you can change to the other without having to replace most of the kit. The virtual single-board computer is equivalent to learning on a Raspberry Pi, with most of the code transferable to the Pi if you eventually get a device and sensors.
@ -129,15 +129,15 @@ The **T** in IoT stands for **Things** - devices that
Before you start programming your IoT device, you will need to do a small amount of setup. Follow the relevant instructions below, depending on which device you are using.

> 💁 If you don't have a device yet, refer to the [hardware guide](../../../hardware.md) for help deciding which device you are going to use and what additional hardware you need to buy. You don't need to buy hardware, as all the projects can be run on virtual hardware.

> 💁 If you don't have a device yet, refer to the [hardware guide](../../../../hardware.md) for help deciding which device you are going to use and what additional hardware you need to buy. You don't need to buy hardware, as all the projects can be run on virtual hardware.

These instructions include links to third-party websites from the makers of the hardware or tools you will be using. This is to make sure you are always following the most up-to-date instructions for the various tools and hardware.

Work through the relevant guide to set up your device and complete a 'Hello World' project. This will be the first step in building an IoT nightlight over the 4 lessons in this getting started part.
* [Arduino - Wio Terminal](wio-terminal.md)
* [Single-board computer - Raspberry Pi](pi.md)
* [Single-board computer - virtual device](virtual-device.md)
* [Arduino - Wio Terminal](../wio-terminal.md)
* [Single-board computer - Raspberry Pi](../pi.md)
* [Single-board computer - virtual device](../virtual-device.md)
## Applications of IoT

@ -67,13 +67,13 @@ Configure a Python virtual environment and install the pip packages for CounterF
source ./.venv/bin/activate
```
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to see this:
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to get the version:
```sh
python --version
```
You should see the following:
The output should contain the following:
```output
(.venv) ➜ nightlight python --version
@ -82,7 +82,7 @@ Configure a Python virtual environment and install the pip packages for CounterF
> 💁 Your Python version may be different - as long as it's version 3.6 or higher you are good. If not, delete this folder, install a newer version of Python and try again.
1. Run the following commands to install the pip packages for CounterFit. These packages include the main CounterFit app as well as shims for Grove hardware. These shims allow you to write code as if you were programming using physical sensors and actuators from the Grove ecosystem, but connected to virtual IoT devices.
1. Run the following commands to install the pip packages for CounterFit. These packages include the main CounterFit app as well as shims for Grove hardware. These shims allow you to write code as if you were programming using physical sensors and actuators from the Grove ecosystem but connected to virtual IoT devices.
```sh
pip install CounterFit
@ -120,9 +120,9 @@ Create a Python application to print `"Hello World"` to the console.
code .
```
> 💁 If your terminal returns `command not found` on macOS it means VS Code has not been added to your PATH. You can add VS Code to yout PATH by following the instructions in the [Launching from the command line section of the VS Code documentation](https://code.visualstudio.com/docs/setup/mac?WT.mc_id=academic-17441-jabenn#_launching-from-the-command-line) and run the command afterwards. VS Code is installed to your PATH by default on Windows and Linux.
> 💁 If your terminal returns `command not found` on macOS it means VS Code has not been added to your PATH. You can add VS Code to your PATH by following the instructions in the [Launching from the command line section of the VS Code documentation](https://code.visualstudio.com/docs/setup/mac?WT.mc_id=academic-17441-jabenn#_launching-from-the-command-line) and run the command afterwards. VS Code is installed to your PATH by default on Windows and Linux.
1. When VS Code launches, it will activate the Python virtual environment. You will see this in the bottom status bar:
1. When VS Code launches, it will activate the Python virtual environment. The selected virtual environment will appear in the bottom status bar:
![VS Code showing the selected virtual environment](../../../images/vscode-virtual-env.png)
@ -136,9 +136,9 @@ Create a Python application to print `"Hello World"` to the console.
(.venv) ➜ nightlight
```
If you don't see `.venv` as a prefix on the prompt, the virtual environment is not active in the terminal.
If you don't have `.venv` as a prefix on the prompt, the virtual environment is not active in the terminal.
1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, and you will see the call to activate this in the terminal, as well as having the name of the virtual environment (`.venv`) in the prompt:
1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal*, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, and the call to activate this will appear in the terminal. The prompt will also have the name of the virtual environment (`.venv`):
```output
➜ nightlight source .venv/bin/activate
@ -159,7 +159,7 @@ Create a Python application to print `"Hello World"` to the console.
python app.py
```
You should see the following output:
The following will be in the output:
```output
(.venv) ➜ nightlight python app.py
@ -184,7 +184,7 @@ As a second 'Hello World' step, you will run the CounterFit app and connect your
![The Counter Fit app running in a browser](../../../images/counterfit-first-run.png)
You will see it marked as *Disconnected*, with the LED in the top-right corner turned off.
It will be marked as *Disconnected*, with the LED in the top-right corner turned off.
1. Add the following code to the top of `app.py`:
@ -201,7 +201,7 @@ As a second 'Hello World' step, you will run the CounterFit app and connect your
![VS Code Create a new integrated terminal button](../../../images/vscode-new-terminal.png)
1. In this new terminal, run the `app.py` file as before. You will see the status of CounterFit change to **Connected** and the LED light up.
1. In this new terminal, run the `app.py` file as before. The status of CounterFit will change to **Connected** and the LED will light up.
![Counter Fit showing as connected](../../../images/counterfit-connected.png)

@ -1,6 +1,6 @@
# Wio Terminal
The [Wio Terminal from Seeed Studios](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) is an Arduino-compatible microcontroller, with WiFi and some sensors and actuators built in, as well as ports to add more sensors and actuators, using a hardware ecosystem called [Grove](https://www.seeedstudio.com/category/Grove-c-1003.html).
The [Wio Terminal from Seeed Studios](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) is an Arduino-compatible microcontroller, with WiFi and some sensors and actuators built-in, as well as ports to add more sensors and actuators, using a hardware ecosystem called [Grove](https://www.seeedstudio.com/category/Grove-c-1003.html).
![A Seeed studios Wio Terminal](../../../images/wio-terminal.png)
@ -18,7 +18,7 @@ Install the required software and update the firmware.
1. Install the VS Code PlatformIO extension. This is an extension for VS Code that supports programming microcontrollers in C/C++. Refer to the [PlatformIO extension documentation](https://marketplace.visualstudio.com/items?itemName=platformio.platformio-ide&WT.mc_id=academic-17441-jabenn) for instructions on installing this extension in VS Code. This extension depends on the Microsoft C/C++ extension to work with C and C++ code, and the C/C++ extension is installed automatically when you install PlatformIO.
1. Connect your Wio Terminal to your computer. The Wio Terminal has a USB-C port on the bottom, and this needs to be connected to a USB port on your computer. The Wio Terminal comes with a USB-C to USB-A cable, but if your computer only has USB-C ports then you will either need a USB-C cable, or a USB-A to USB-C adapter.
1. Connect your Wio Terminal to your computer. The Wio Terminal has a USB-C port on the bottom, and this needs to be connected to a USB port on your computer. The Wio Terminal comes with a USB-C to USB-A cable, but if your computer only has USB-C ports then you will either need a USB-C cable or a USB-A to USB-C adapter.
1. Follow the instructions in the [Wio Terminal Wiki WiFi Overview documentation](https://wiki.seeedstudio.com/Wio-Terminal-Network-Overview/) to set up your Wio Terminal and update the firmware.
@ -40,7 +40,7 @@ Create the PlatformIO project.
1. Launch VS Code
1. You should see the PlatformIO icon on the side menu bar:
1. The PlatformIO icon will be on the side menu bar:
![The Platform IO menu option](../../../images/vscode-platformio-menu.png)
@ -75,15 +75,15 @@ The VS Code explorer will show a number of files and folders created by the Plat
#### Folders
* `.pio` - this folder contains temporary data needed by PlatformIO such as libraries or compiled code. It is recreated automatically if deleted, and you don't need to add this to source code control if you are sharing your project on sites such as GitHub.
* `.vscode` - this folder contains configuration used by PlatformIO and VS Code. It is recreated automatically if deleted, and you don't need to add this to source code control if you are sharing your project on sites such as GitHub.
* `.vscode` - this folder contains the configuration used by PlatformIO and VS Code. It is recreated automatically if deleted, and you don't need to add this to source code control if you are sharing your project on sites such as GitHub.
* `include` - this folder is for external header files needed when adding additional libraries to your code. You won't be using this folder in any of these lessons.
* `lib` - this folder is for external libraries that you want to call from your code. You won't be using this folder in any of these lessons.
* `src` - this folder contains the main source code for your application. Initially it will contain a single file - `main.cpp`.
* `src` - this folder contains the main source code for your application. Initially, it will contain a single file - `main.cpp`.
* `test` - this folder is where you would put any unit tests for your code
#### Files
* `main.cpp` - this file in the `src` folder contains the entry point for your application. If you open the file, you will see the following:
* `main.cpp` - this file in the `src` folder contains the entry point for your application. Open this file, and it will contain the following code:
```cpp
#include <Arduino.h>
@ -99,9 +99,9 @@ The VS Code explorer will show a number of files and folders created by the Plat
When the device starts up, the Arduino framework will run the `setup` function once, then run the `loop` function repeatedly until the device is turned off.
* `.gitignore` - this file lists the files an directories to be ignored when adding your code to git source code control, such as uploading to a repository on GitHub.
* `.gitignore` - this file lists the files and directories to be ignored when adding your code to git source code control, such as uploading to a repository on GitHub.
* `platformio.ini` - this file contains configuration for your device and app. If you open this file, you will see the following:
* `platformio.ini` - this file contains configuration for your device and app. Open this file, and it will contain the following code:
```ini
[env:seeed_wio_terminal]
@ -150,9 +150,9 @@ Write the Hello World app.
}
```
The `setup` function initializes a connection to the serial port - in this case the USB port that is used to connect the Wio Terminal to your computer. The parameter `9600` is the [baud rate](https://wikipedia.org/wiki/Symbol_rate) (also known as Symbol rate), or speed that data will be sent over the serial port in bits per second. This setting means 9,600 bits (0s and 1s) of data are sent each second. It then waits for the serial port to be ready.
The `setup` function initializes a connection to the serial port - in this case, the USB port that is used to connect the Wio Terminal to your computer. The parameter `9600` is the [baud rate](https://wikipedia.org/wiki/Symbol_rate) (also known as Symbol rate), or speed that data will be sent over the serial port in bits per second. This setting means 9,600 bits (0s and 1s) of data are sent each second. It then waits for the serial port to be ready.
The `loop` function sends the line `Hello World!` to the serial port, so the characters of `Hello World!` along with a new line character. It then sleeps for 5,000 milliseconds, or 5 seconds. After the `loop` ends, it is run again, and again, and so on all the time the microcontroller is powered on.
The `loop` function sends the line `Hello World!` to the serial port, writing the characters of `Hello World!` followed by a new line character. It then sleeps for 5,000 milliseconds or 5 seconds. After the `loop` ends, it runs again, and again, and so on for as long as the microcontroller is powered on.
1. Build and upload the code to the Wio Terminal
@ -164,9 +164,9 @@ Write the Hello World app.
PlatformIO will automatically build the code if needed before uploading.
1. The code will be compiled, and uploaded to the Wio Terminal
1. The code will be compiled and uploaded to the Wio Terminal
> 💁 If you are using macOS you will see a notification about a *DISK NOT EJECTED PROPERLY*. This is because the Wio Terminal gets mounted as a drive as part of the flashing process, and it is disconnected when the compiled code is written to the device. You can ignore this notification.
> 💁 If you are using macOS, a notification about a *DISK NOT EJECTED PROPERLY* will appear. This is because the Wio Terminal gets mounted as a drive as part of the flashing process, and it is disconnected when the compiled code is written to the device. You can ignore this notification.
⚠️ If you get errors about the upload port being unavailable, first make sure you have the Wio Terminal connected to your computer, and switched on using the switch on the left hand side of the screen. The green light on the bottom should be on. If you still get the error, pull the on/off switch down twice in quick succession to force the Wio Terminal into bootloader mode and try the upload again.
@ -191,7 +191,7 @@ PlatformIO has a Serial Monitor that can monitor data sent over the USB cable fr
Hello World
```
You will see `Hello World` appear every 5 seconds.
`Hello World` will print to the serial monitor every 5 seconds.
> 💁 You can find this code in the [code/wio-terminal](code/wio-terminal) folder.

@ -1,8 +1,8 @@
# A deeper dive into IoT
Add a sketchnote if possible/appropriate
![A sketchnote overview of this lesson](../../../sketchnotes/lesson-2.png)
![Embed a video here if available](video-url)
> Sketchnote by [Nitya Narasimhan](https://github.com/nitya). Click the image for a larger version.
## Pre-lecture quiz
@ -20,7 +20,7 @@ In this lesson we'll cover:
## Components of an IoT application
The two components of an IoT application are the *Internet* and the *thing*. Lets look at these two components in a bit more detail.
The two components of an IoT application are the *Internet* and the *thing*. Let's look at these two components in a bit more detail.
### The Thing
@ -28,9 +28,9 @@ The two components of an IoT application are the *Internet* and the *thing*. Let
***Raspberry Pi 4. Michael Henzler / [Wikimedia Commons](https://commons.wikimedia.org/wiki/Main_Page) / [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)***
The **Thing** part of IoT refers to a device that can interact with the physical world. These devices are usually small, low-priced computers, running at low speeds and using low power - for example simple microcontrollers with kilobytes of RAM (as opposed to gigabytes in a PC) running at only a few hundred megahertz (as opposed to gigahertz in a PC), but consuming sometimes so little power they can run for weeks, months or even years on batteries.
The **Thing** part of IoT refers to a device that can interact with the physical world. These devices are usually small, low-priced computers, running at low speeds and using low power - for example, simple microcontrollers with kilobytes of RAM (as opposed to gigabytes in a PC) running at only a few hundred megahertz (as opposed to gigahertz in a PC), but consuming sometimes so little power they can run for weeks, months or even years on batteries.
These devices interact with the physical world, either by using sensors to gather data from their surroundings, or by controlling outputs or actuators to make physical changes. The typical example of this is a smart thermostat - a device that has a temperature sensor, a means to set a desired temperature such as a dial or touchscreen, and a connection to a heating or cooling system that can be turned on when the temperature detected is outside the desired range. The temperature sensor detects that the room is too cold and an actuator turns the heating on.
These devices interact with the physical world, either by using sensors to gather data from their surroundings or by controlling outputs or actuators to make physical changes. The typical example of this is a smart thermostat - a device that has a temperature sensor, a means to set a desired temperature such as a dial or touchscreen, and a connection to a heating or cooling system that can be turned on when the temperature detected is outside the desired range. The temperature sensor detects that the room is too cold and an actuator turns the heating on.
![A diagram showing temperature and a dial as inputs to an IoT device, and control of a heater as an output](../../../images/basic-thermostat.png)
@ -46,7 +46,7 @@ The **Internet** side of an IoT application consists of applications that the Io
One typical setup would be having some kind of cloud service that the IoT device connects to, and this cloud service handles things like security, as well as receiving messages from the IoT device, and sending messages back to the device. This cloud service would then connect to other applications that can process or store sensor data, or use the sensor data with data from other systems to make decisions.
Devices also don't always connect directly to the Internet themselves via WiFi or wired connections. Some devices use mesh networking to talk to each other over technologies such as bluetooth, connecting via a hub device that has the Internet connection.
Devices also don't always connect directly to the Internet themselves via WiFi or wired connections. Some devices use mesh networking to talk to each other over technologies such as Bluetooth, connecting via a hub device that has an Internet connection.
With the example of a smart thermostat, the thermostat would connect over home WiFi to a cloud service. It would send the temperature data to this cloud service, and from there it would be written to a database of some kind, allowing the homeowner to check the current and past temperatures using a phone app. Another service in the cloud would know what temperature the homeowner wants, and send messages back to the IoT device via the cloud service to tell the heating system to turn on or off.
@ -54,7 +54,7 @@ With the example of a smart thermostat, the thermostat would connect using home
***An Internet connected thermostat with mobile app control. Temperature by Vectors Market / Microcontroller by Template / dial by Jamie Dickinson / heater by Pascal Heß / mobile phone by Alice-vector / Cloud by Debi Alpa Nugraha - all from the [Noun Project](https://thenounproject.com)***
An even smarter version could use AI in the cloud with data from other sensors connected to other IoT devices such as occupancy sensors that detect what rooms are in use, as well as data such as weather and even your calendar, to make decisions on how to set the temperature in a smart fashion. For example it could turn your heating off if it reads from your calendar you are on vacation, or turn off the heating on a room by room basis depending on what rooms you use, learning from the data to be more and more accurate over time.
An even smarter version could use AI in the cloud with data from other sensors connected to other IoT devices such as occupancy sensors that detect what rooms are in use, as well as data such as weather and even your calendar, to make decisions on how to set the temperature in a smart fashion. For example, it could turn your heating off if it reads from your calendar that you are on vacation, or turn off the heating on a room-by-room basis depending on which rooms you use, learning from the data to be more and more accurate over time.
![A diagram showing multiple temperature sensors and a dial as inputs to an IoT device, the IoT device with 2 way communication to the cloud, which in turn has 2 way communication to a phone, a calendar and a weather service, and control of a heater as an output from the IoT device](../../../images/smarter-thermostat.png)
@ -64,7 +64,7 @@ An even smarter version could use AI in the cloud with data from other sensors c
### IoT on the Edge
Although the I in IoT stands for Internet, these devices don't have to connect to the Internet. In some cases devices can connect to 'edge' devices - gateway devices that run on your local network meaning you can process data without making a call over the Internet. This can be faster when you have a lot of data or a slow Internet connection, it allows you to run offline where Internet connectivity is not possible such as on a ship or in a disaster area when responding to a humanitarian crisis, and allows you to keep data private. Some devices will contain processing code created using cloud tools, and run this locally to gather and respond to data without using an Internet connection to make a decision.
Although the I in IoT stands for Internet, these devices don't have to connect to the Internet. In some cases, devices can connect to 'edge' devices - gateway devices that run on your local network, meaning you can process data without making a call over the Internet. This can be faster when you have a lot of data or a slow Internet connection; it allows you to run offline where Internet connectivity is not possible, such as on a ship or in a disaster area when responding to a humanitarian crisis; and it allows you to keep data private. Some devices will contain processing code created using cloud tools and run this locally to gather and respond to data without using an Internet connection to make a decision.
One example of this is a smart home device such as an Apple HomePod, Amazon Alexa, or Google Home, which will listen to your voice using AI models trained in the cloud, and will 'wake up' when a certain word or phrase is spoken, and only then send your speech to the Internet for processing, keeping everything else you say private.
@ -76,35 +76,35 @@ With any Internet connection, security is an important consideration. There is a
IoT devices connect to a cloud service, and therefore are only as secure as that cloud service - if your cloud service allows any device to connect then malicious data can be sent, or virus attacks can take place. This can have very real consequences as IoT devices interact with and control other devices. For example, the [Stuxnet worm](https://wikipedia.org/wiki/Stuxnet) manipulated valves in centrifuges to damage them. Hackers have also taken advantage of [poor security to access baby monitors](https://www.npr.org/sections/thetwo-way/2018/06/05/617196788/s-c-mom-says-baby-monitor-was-hacked-experts-say-many-devices-are-vulnerable) and other home surveillance devices.
> 💁 Sometimes IoT devices and the edge devices run on network completely isolated from the Internet to keep the data private and secure. This is know as [air-gapping](https://wikipedia.org/wiki/Air_gap_(networking)).
> 💁 Sometimes IoT devices and edge devices run on a network completely isolated from the Internet to keep the data private and secure. This is known as [air-gapping](https://wikipedia.org/wiki/Air_gap_(networking)).
## Deeper dive into microcontrollers
In the last lesson we introduced microcontrollers. Lets now look deeper into them.
In the last lesson, we introduced microcontrollers. Let's now look deeper into them.
### CPU
The CPU is the 'brain' of the microcontroller. It is the processor that runs your code and can send data to and receive data from any connected devices. CPUs can contain one or more cores - essentially one or more CPUs that can work together to run your code.
CPUs rely on a clock to tick many millions or billions of times a second. Each tick, or cycle, synchronizes the actions that the CPU can take. With each tick, the CPU can execute an instruction from a program, such as to retrieve data from an external device, or perform a mathematical calculation. This regular cycle allows for all actions to be completed before the next instructions is processed.
CPUs rely on a clock to tick many millions or billions of times a second. Each tick, or cycle, synchronizes the actions that the CPU can take. With each tick, the CPU can execute an instruction from a program, such as to retrieve data from an external device or perform a mathematical calculation. This regular cycle allows for all actions to be completed before the next instruction is processed.
The faster the clock cycle, the more instructions that can be processed each second, and therefore the faster the CPU. CPU speeds are measured in [Hertz (Hz)](https://wikipedia.org/wiki/Hertz), a standard unit where 1 Hz means one cycle or clock tick per second.
> 🎓 CPU speeds are often given in MHz or GHz. 1MHz is 1 million Hz, 1GHz is 1 billion Hz.
> 💁 CPUs execute programs using the [fetch-decode-execute cycle](https://wikipedia.org/wiki/Instruction_cycle). Every clock tick the CPU will fetch the next instruction from memory, decode it, then execute it such as using an arithmetic logic unit (ALU) to add 2 numbers. Some executions will take multiple ticks to run, so the next cycle will run at the next tick after the instruction has completed.
> 💁 CPUs execute programs using the [fetch-decode-execute cycle](https://wikipedia.org/wiki/Instruction_cycle). For every clock tick, the CPU will fetch the next instruction from memory, decode it, then execute it, for example by using an arithmetic logic unit (ALU) to add 2 numbers. Some executions will take multiple ticks to run, so the next cycle will run at the next tick after the instruction has completed.
![The fetch decode execute cycles showing the fetch taking an instruction from the program stored in RAM, then decoding and executing it on a CPU](../../../images/fetch-decode-execute.png)
***CPU by Icon Lauk / ram by Atif Arshad - all from the [Noun Project](https://thenounproject.com)***
Microcontrollers have much lower clock speeds than desktop or laptop computers, or even most smartphones. The Wio Terminal for example has a CPU that runs at 120MHz, or 120,000,000 cycles per second.
Microcontrollers have much lower clock speeds than desktop or laptop computers, or even most smartphones. The Wio Terminal, for example, has a CPU that runs at 120MHz, or 120,000,000 cycles per second.
✅ An average PC or Mac has a CPU with multiple cores running at multiple GigaHertz, meaning the clock ticks billions of times a second. Research the clock speed of your computer and see how many times faster it is than the Wio terminal.
✅ An average PC or Mac has a CPU with multiple cores running at multiple GigaHertz, meaning the clock ticks billions of times a second. Research the clock speed of your computer and compare how many times faster it is than the Wio terminal.
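The comparison in the checkpoint above is just a division; here is a minimal sketch, assuming a 3GHz desktop CPU for illustration (your own computer's clock speed may well differ):

```cpp
#include <cassert>

// Clock speeds in Hz: 1MHz = 1,000,000 Hz, 1GHz = 1,000,000,000 Hz.
const long long WIO_TERMINAL_HZ = 120'000'000LL;   // 120MHz Wio Terminal CPU
const long long DESKTOP_HZ      = 3'000'000'000LL; // assumed 3GHz desktop CPU

// How many times faster one clock ticks than another.
long long speedRatio(long long fasterHz, long long slowerHz)
{
    return fasterHz / slowerHz;
}
```

With these assumed figures, `speedRatio(DESKTOP_HZ, WIO_TERMINAL_HZ)` gives 25 - a single core of the desktop CPU ticks 25 times for every tick of the Wio Terminal.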
Each clock cycle draws power and generates heat. The faster the ticks, the more power consumed and the more heat generated. PCs have heat sinks and fans to remove heat, without which they would overheat and shut down within seconds. Microcontrollers often have neither as they run much cooler and therefore much slower. PCs run off mains power or large batteries for a few hours; microcontrollers can run for days, months, or even years off small batteries. Microcontrollers can also have cores that run at different speeds, switching to slower low power cores when the demand on the CPU is low to reduce power consumption.
> 💁 Some PCs and Macs are adopting the same mix of fast high power cores and slower low power cores, switching to save battery. For example the M1 chip in the latest Apple laptops can switch between 4 performance cores and 4 efficiency cores to optimize battery life or speed depending on the task being run.
> 💁 Some PCs and Macs are adopting the same mix of fast high power cores and slower low power cores, switching to save battery. For example, the M1 chip in the latest Apple laptops can switch between 4 performance cores and 4 efficiency cores to optimize battery life or speed depending on the task being run.
✅ Do a little research: Read up on CPUs on the [Wikipedia CPU article](https://wikipedia.org/wiki/Central_processing_unit)
@ -112,7 +112,7 @@ Each clock cycle draws power and generates heat. The faster the ticks, the more
Investigate the Wio Terminal.
If you are using a Wio Terminal for these lessons, see if you can find the CPU. Find the *Hardware Overview* section of the [Wio Terminal product page](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) for a picture of the internals, and see if you can see the CPU through the clear plastic window on the back.
If you are using a Wio Terminal for these lessons, try to find the CPU. Find the *Hardware Overview* section of the [Wio Terminal product page](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) for a picture of the internals, and try to find the CPU through the clear plastic window on the back.
### Memory
@ -124,7 +124,7 @@ RAM is the memory used by the program to run, containing variables allocated by
> 🎓 Program memory stores your code and stays when there is no power.
> 🎓 RAM is used to run your program, and is reset when there is no power
> 🎓 RAM is used to run your program and is reset when there is no power.
Like with the CPU, the memory on a microcontroller is orders of magnitude smaller than a PC or Mac. A typical PC might have 8 Gigabytes (GB) of RAM, or 8,000,000,000 bytes, with each byte enough space to store a single letter or a number from 0-255. A microcontroller would have only Kilobytes (KB) of RAM, with a kilobyte being 1,000 bytes. The Wio Terminal mentioned above has 192KB of RAM, or 192,000 bytes - more than 40,000 times less than an average PC!
@ -150,7 +150,7 @@ Microcontrollers need input and output (I/O) connections to read data from senso
Investigate the Wio Terminal.
If you are using a Wio Terminal for these lessons, find the GPIO pins. Find the *Pinout diagram* section of the [Wio Terminal product page](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) to see which pins are which. The Wio Terminal comes with a sticker you can mount on the back with pin numbers, so add this now if you haven't already.
If you are using a Wio Terminal for these lessons, find the GPIO pins. Find the *Pinout diagram* section of the [Wio Terminal product page](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) to learn which pins are which. The Wio Terminal comes with a sticker you can mount on the back with pin numbers, so add this now if you haven't already.
### Physical size
@ -180,35 +180,35 @@ You can program microcontrollers using an OS - often referred to as a real-time
![The Arduino logo](../../../images/arduino-logo.svg)
[Arduino](https://www.arduino.cc) is probably the most popular microcontroller framework, especially among students, hobbyists and makers. Arduino is an open source electronics platform combining software and hardware. You can buy Arduino compatible boards from Arduino themselves, or from other manufacturers, then code using the Arduino framework.
[Arduino](https://www.arduino.cc) is probably the most popular microcontroller framework, especially among students, hobbyists and makers. Arduino is an open source electronics platform combining software and hardware. You can buy Arduino compatible boards from Arduino themselves or from other manufacturers, then code using the Arduino framework.
Arduino boards are coded in C or C++. Using C/C++ allows your code to be compiled very small and run fast, something needed on a constrained device such as a microcontroller. The core of an Arduino application is referred to as a sketch, and is C/C++ code with 2 functions - `setup` and `loop`. When the board starts up, the Arduino framework code will run the `setup` function once, then it will run the `loop` function again and again, running it continuously until the power is powered off.
Arduino boards are coded in C or C++. Using C/C++ allows your code to be compiled very small and run fast, something needed on a constrained device such as a microcontroller. The core of an Arduino application is referred to as a sketch and is C/C++ code with 2 functions - `setup` and `loop`. When the board starts up, the Arduino framework code will run the `setup` function once, then it will run the `loop` function again and again, running it continuously until the device is powered off.
You would write your setup code in the `setup` function, such as connecting to WiFi and cloud services or initializing pins for input and output. Your loop code would then contain processing code, such as reading from a sensor and sending the value to the cloud. You would normally include a delay in each loop, for example if you only want sensor data to be sent every 10 seconds you would add a delay of 10 seconds at the end of the loop so the microcontroller can sleep, saving power, then run the loop again when needed 10 seconds later.
You would write your setup code in the `setup` function, such as connecting to WiFi and cloud services or initializing pins for input and output. Your loop code would then contain processing code, such as reading from a sensor and sending the value to the cloud. You would normally include a delay in each loop. For example, if you only want sensor data to be sent every 10 seconds, you would add a delay of 10 seconds at the end of the loop so the microcontroller can sleep, saving power, then run the loop again when needed 10 seconds later.
![An arduino sketch running setup first, then running loop repeatedly](../../../images/arduino-sketch.png)
✅ This program architecture is know as an *event loop* or *message loop*. Many applications use this under the hood, and is the standard for most desktop applications that run on OSes like Windows, macOS or Linux. The `loop` listens for messages from user interface components such as buttons, or devices like the keyboard, and responds to them. You can read more in this [article on the event loop](https://wikipedia.org/wiki/Event_loop).
✅ This program architecture is known as an *event loop* or *message loop*. Many applications use this under the hood, and it is the standard for most desktop applications that run on OSes like Windows, macOS or Linux. The `loop` listens for messages from user interface components such as buttons, or devices like the keyboard, and responds to them. You can read more in this [article on the event loop](https://wikipedia.org/wiki/Event_loop).
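The setup-once-then-loop-forever structure can be sketched in plain C++. This is a simulation of how the framework drives a sketch, not real Arduino framework code - on a real board the framework itself calls `setup` and `loop`, and the loop only stops at power-off; here we stop after a fixed number of iterations so it can run on a PC:

```cpp
#include <cassert>

int setupCalls = 0;
int loopCalls  = 0;

// Runs once at startup: connect to WiFi, initialize pins, etc.
void setup()
{
    ++setupCalls;
}

// Runs repeatedly: read sensors, send values, delay to save power.
void loop()
{
    ++loopCalls;
}

// Stand-in for the Arduino runtime: call setup once, then keep
// calling loop. Real firmware loops until the device is powered off.
void runSketch(int iterations)
{
    setup();
    for (int i = 0; i < iterations; ++i)
    {
        loop();
    }
}
```

After `runSketch(10)`, `setup` has run exactly once and `loop` ten times, mirroring the behaviour described above.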
Arduino provides standard libraries for interacting with microcontrollers and the I/O pins, with different implementations under the hood to run on different microcontrollers. For example, the [`delay` function](https://www.arduino.cc/reference/en/language/functions/time/delay/) will pause the program for a given period of time, the [`digitalRead` function](https://www.arduino.cc/reference/en/language/functions/digital-io/digitalread/) will read a value of `HIGH` or `LOW` from the given pin, regardless of which board the code is run on. These standard libraries mean that Arduino code written for one board can be recompiled for any other Arduino board and will run, assuming that the pins are the same and the boards support the same features.
There is a large ecosystem of third-party Arduino libraries that allow you to add extra features to your Arduino projects, such as using sensors and actuators, or connecting to cloud IoT services.
There is a large ecosystem of third-party Arduino libraries that allow you to add extra features to your Arduino projects, such as using sensors and actuators or connecting to cloud IoT services.
##### Task
Investigate the Wio Terminal.
If you are using a Wio Terminal for these lessons, re-read the code you wrote in the last lesson. Find the `setup` and `loop` function. Monitor the serial output to see the loop function being called repeatedly. Try adding code to the `setup` function to write to the serial port and see this code is only called once each time you reboot. Try rebooting your device with the power switch on the side to see this called each time the device reboots.
If you are using a Wio Terminal for these lessons, re-read the code you wrote in the last lesson. Find the `setup` and `loop` function. Monitor the serial output for the loop function being called repeatedly. Try adding code to the `setup` function to write to the serial port and observe that this code is only called once each time you reboot. Try rebooting your device with the power switch on the side to show this is called each time the device reboots.
## Deeper dive into single-board computers
In the last lesson we introduced single-board computers. Lets now look deeper into them.
In the last lesson, we introduced single-board computers. Let's now look deeper into them.
### Raspberry Pi
![The Raspberry Pi logo](../../../images/raspberry-pi-logo.png)
The [Raspberry Pi Foundation](https://www.raspberrypi.org) is a charity from the UK founded in 2009 to promote the study of computer science, especially at school level. As part of this mission they developed a single-board computer, called the Raspberry Pi. Raspberry Pis are currently available in 3 variants - a full size version, the smaller Pi Zero, and an compute module that can be built into your final IoT device.
The [Raspberry Pi Foundation](https://www.raspberrypi.org) is a charity from the UK founded in 2009 to promote the study of computer science, especially at school level. As part of this mission, they developed a single-board computer, called the Raspberry Pi. Raspberry Pis are currently available in 3 variants - a full size version, the smaller Pi Zero, and a compute module that can be built into your final IoT device.
![A Raspberry Pi 4](../../../images/raspberry-pi-4.jpg)

@ -27,12 +27,12 @@ Sensors are hardware devices that sense the physical world - that is they measur
Some common sensors include:
* Temperature sensors - these sense the air temperature, or the temperature of what they are immersed in. For hobbyists and developer, these are often combined with air pressure and humidity in a single sensor.
* Temperature sensors - these sense the air temperature or the temperature of what they are immersed in. For hobbyists and developers, these are often combined with air pressure and humidity in a single sensor.
* Buttons - they sense when they have been pressed.
* Light sensors - these detect light levels, and can be for specific colors, UV light, IR light, or general visible light.
* Light sensors - these detect light levels and can be for specific colors, UV light, IR light, or general visible light.
* Cameras - these sense a visual representation of the world by taking a photograph or streaming video.
* Accelerometers - these sense movement in multiple directions.
* Microphones - these sense sound, either general sound levels, or directional sound.
* Microphones - these sense sound, either general sound levels or directional sound.
✅ Do some research. What sensors does your phone have?
@ -54,7 +54,7 @@ Sensors are either analog or digital.
Some of the most basic sensors are analog sensors. These sensors receive a voltage from the IoT device, the sensor components adjust this voltage, and the voltage that is returned from the sensor is measured to give the sensor value.
> 🎓 Voltage is a measure of how much push there is to move electricity from one place to another, such as from a positive terminal of a battery to the negative terminal. For example, a standard AA battery is 1.5V (V is the symbol for volts), and can push electricity with the force of 1.5V from it's positive terminal to its negative terminal. Different electrical hardware requires different voltages to work, for example an LED can light with between 2-3V, but a 100W filament lightbulb would need 240V. You can read more about voltage on the [Voltage page on Wikipedia](https://wikipedia.org/wiki/Voltage).
> 🎓 Voltage is a measure of how much push there is to move electricity from one place to another, such as from the positive terminal of a battery to the negative terminal. For example, a standard AA battery is 1.5V (V is the symbol for volts) and can push electricity with the force of 1.5V from its positive terminal to its negative terminal. Different electrical hardware requires different voltages to work, for example, an LED can light with between 2-3V, but a 100W filament lightbulb would need 240V. You can read more about voltage on the [Voltage page on Wikipedia](https://wikipedia.org/wiki/Voltage).
One example of this is a potentiometer. This is a dial that you can rotate between two positions and the sensor measures the rotation.
@ -66,7 +66,7 @@ The IoT device will send an electrical signal to the potentiometer at a voltage,
> 🎓 This is an oversimplification, and you can read more on potentiometers and variable resistors on the [potentiometer Wikipedia page](https://wikipedia.org/wiki/Potentiometer).
The voltage that comes out the sensor is then read by the IoT device, and the device can respond to it. Depending on the sensor, this voltage can be an arbitrary value, or can map to a standard unit. For example an analog temperature sensor based on a [thermistor](https://wikipedia.org/wiki/Thermistor) changes it's resistance depending on the temperature. The output voltage can then be converted to a temperature in Kelvin, and correspondingly into °C or °F, by calculations in code.
The voltage that comes out of the sensor is then read by the IoT device, and the device can respond to it. Depending on the sensor, this voltage can be an arbitrary value or can map to a standard unit. For example, an analog temperature sensor based on a [thermistor](https://wikipedia.org/wiki/Thermistor) changes its resistance depending on the temperature. The output voltage can then be converted to a temperature in Kelvin, and correspondingly into °C or °F, by calculations in code.
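As an illustration of that conversion, here is a hedged sketch of the beta-equation math for a hypothetical 10kΩ NTC thermistor in a voltage divider. The component values, beta coefficient, and divider layout are all assumptions for the example, not values from the lesson:

```cpp
#include <cassert>
#include <cmath>

// Assumed circuit: 3.3V supply, fixed 10k resistor on top,
// thermistor on the bottom, voltage measured across the thermistor.
const double VCC     = 3.3;
const double R_FIXED = 10000.0; // ohms
const double R0      = 10000.0; // thermistor resistance at 25°C
const double BETA    = 3950.0;  // assumed beta coefficient
const double T0_K    = 298.15;  // 25°C in Kelvin

// Work back from the measured voltage to the thermistor's resistance.
double thermistorResistance(double vOut)
{
    return R_FIXED * vOut / (VCC - vOut);
}

// Beta equation: 1/T = 1/T0 + (1/B) * ln(R/R0), with T in Kelvin.
double resistanceToCelsius(double r)
{
    double tKelvin = 1.0 / (1.0 / T0_K + std::log(r / R0) / BETA);
    return tKelvin - 273.15;
}
```

At 1.65V (half the supply) the computed resistance is 10kΩ, which the beta equation maps back to 25°C - the same kind of voltage-to-temperature calculation the paragraph above describes.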
✅ What do you think happens if the sensor returns a higher voltage than was sent (for example coming from an external power supply)? ⛔️ DO NOT test this out.
@ -74,7 +74,7 @@ The voltage that comes out the sensor is then read by the IoT device, and the de
IoT devices are digital - they can't work with analog values, they only work with 0s and 1s. This means that analog sensor values need to be converted to a digital signal before they can be processed. Many IoT devices have analog-to-digital converters (ADCs) to convert analog inputs to digital representations of their value. Sensors can also work with ADCs via a connector board. For example, in the Seeed Grove ecosystem with a Raspberry Pi, analog sensors connect to specific ports on a 'hat' that sits on the Pi connected to the Pi's GPIO pins, and this hat has an ADC to convert the voltage into a digital signal that can be sent off the Pi's GPIO pins.
Imagine you have an analog light sensor connected to an IoT device that uses 3.3V, and is returning a value of 1V. This 1V doesn't mean anything in the digital world, so needs to be converted. The voltage will be converted to an analog value using a scale depending on the device and sensor. One example is the Seeed Grove light sensor which outputs values from 0 to 1,023. For this sensor running at 3.3V, a 1V output would be a value of 300. An IoT device can't handle 300 as an analog value, so the value would be converted to `0000000100101100`, the binary representation of 300 by the Grove hat. This would then be processed by the IoT device.
Imagine you have an analog light sensor connected to an IoT device that uses 3.3V and is returning a value of 1V. This 1V doesn't mean anything in the digital world, so needs to be converted. The voltage will be converted to an analog value using a scale depending on the device and sensor. One example is the Seeed Grove light sensor which outputs values from 0 to 1,023. For this sensor running at 3.3V, a 1V output would be a value of 300. An IoT device can't handle 300 as an analog value, so the value would be converted to `0000000100101100`, the binary representation of 300 by the Grove hat. This would then be processed by the IoT device.
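The scaling described above can be sketched in a few lines. Note that the lesson uses a rounded figure of roughly 300 for the 1V reading; the exact 10-bit value on a 3.3V scale is 310:

```cpp
#include <cassert>
#include <string>

// Map a voltage onto a 10-bit ADC scale (0-1023) for a 3.3V device,
// rounding to the nearest step.
int adcReading(double voltage, double vRef = 3.3)
{
    return static_cast<int>(voltage / vRef * 1023.0 + 0.5);
}

// Build the 16-bit binary string shown in the lesson for a reading.
std::string toBinary16(int value)
{
    std::string bits(16, '0');
    for (int i = 0; i < 16; ++i)
    {
        if (value & (1 << (15 - i)))
        {
            bits[i] = '1';
        }
    }
    return bits;
}
```

Here `adcReading(1.0)` gives 310, close to the lesson's rounded 300, and `toBinary16(300)` gives `0000000100101100`, matching the binary representation shown above.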
✅ If you don't know binary, then do a small amount of research to learn how numbers are represented by 0s and 1s. The [BBC Bitesize introduction to binary lesson](https://www.bbc.co.uk/bitesize/guides/zwsbwmn/revision/1) is a great place to start.
@ -82,7 +82,7 @@ From a coding perspective, all this is usually handled by libraries that come wi
### Digital sensors
Digital sensors, like analog sensors, detect the world around them using changes in electrical voltage. The difference is they output a digital signal, either by only measuring two states, or by using a built-in ADC. Digital sensors are becoming more and more common to avoid the need to use an ADC either in a connector board or on the IoT device itself.
Digital sensors, like analog sensors, detect the world around them using changes in electrical voltage. The difference is they output a digital signal, either by only measuring two states or by using a built-in ADC. Digital sensors are becoming more and more common to avoid the need to use an ADC either in a connector board or on the IoT device itself.
The simplest digital sensor is a button or switch. This is a sensor with two states, on or off.
@ -92,18 +92,18 @@ The simplest digital sensor is a button or switch. This is a sensor with two sta
Pins on IoT devices such as GPIO pins can measure this signal directly as a 0 or 1. If the voltage sent is the same as the voltage returned, the value read is 1, otherwise the value read is 0. There is no need to convert the signal; it can only be 1 or 0.
> 💁 Voltages are never exact especially as the components in a sensor will have some resistance, so there is usually a tolerance. For example the GPIO pins on a Raspberry Pi work on 3.3V, and read a return signal above 1.8V as a 1, below 1.8V as 0.
> 💁 Voltages are never exact, especially as the components in a sensor will have some resistance, so there is usually a tolerance. For example, the GPIO pins on a Raspberry Pi work on 3.3V, and read a return signal above 1.8V as a 1 and anything below that as a 0.
* 3.3V goes into the button. The button is off so 0V comes out, giving a value of 0
* 3.3V goes into the button. The button is on so 3.3V comes out, giving a value of 1
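The threshold behaviour in the note above reduces to a one-line check. This helper is only an illustration of the idea (the ~1.8V figure is the Raspberry Pi behaviour described above), not the real `digitalRead` library function:

```cpp
#include <cassert>

// A GPIO pin reads 1 when the returned voltage is above the
// threshold (about 1.8V on a Raspberry Pi's 3.3V pins), else 0.
int readDigitalPin(double voltage, double threshold = 1.8)
{
    return voltage > threshold ? 1 : 0;
}
```

So a pressed button returning 3.3V reads as 1, an unpressed one returning 0V reads as 0, and a slightly sagging 2.0V signal still safely reads as 1.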
More advanced digital sensors read analog values, then convert them using on-board ADCs to digital signals. For example a digital temperature sensor will still use a thermocouple in the same way as an analog sensor, and will still measure the change in voltage caused by the resistance of the thermocouple at the current temperature. Instead of returning an analog value and relying on the device or connector board to convert to a digital signal, an ADC built into the sensor will convert the value and send it as a series of 0s and 1s to the IoT device. These 0s and 1s are sent in the same way as the digital signal for a button with 1 being full voltage and 0 being 0v.
More advanced digital sensors read analog values, then convert them using on-board ADCs to digital signals. For example, a digital temperature sensor will still use a thermistor in the same way as an analog sensor, and will still measure the change in voltage caused by the resistance of the thermistor at the current temperature. Instead of returning an analog value and relying on the device or connector board to convert to a digital signal, an ADC built into the sensor will convert the value and send it as a series of 0s and 1s to the IoT device. These 0s and 1s are sent in the same way as the digital signal for a button, with 1 being full voltage and 0 being 0V.
![A digital temperature sensor converting an analog reading to binary data with 0 as 0 volts and 1 as 5 volts before sending it to an IoT device](../../../images/temperature-as-digital.png)
***A digital temperature sensor. Temperature by Vectors Market / Microcontroller by Template - all from the [Noun Project](https://thenounproject.com)***
Sending digital data allows sensors to become more complex and send more detailed data, even encrypted data for secure sensors. One example is a camera. This is a sensor that captures an image and sends it as digital data containing that image, usually in a compressed format such as JPEG, to be read by the IoT device. It can even stream video by capturing images and sending either the complete image frame by frame, or a compressed video stream.
Sending digital data allows sensors to become more complex and send more detailed data, even encrypted data for secure sensors. One example is a camera. This is a sensor that captures an image and sends it as digital data containing that image, usually in a compressed format such as JPEG, to be read by the IoT device. It can even stream video by capturing images and sending either the complete image frame by frame or a compressed video stream.
## What are actuators?
@ -143,7 +143,7 @@ One example is a dimmable light, such as the ones you might have in your house.
![A light dimmed at a low voltage and brighter at a higher voltage](../../../images/dimmable-light.png)
***A light controlled by the voltage output by an IoT device. Idea by Pause08 / Microcontroller by Template - all from the [Noun Project](https://thenounproject.com)***
***A light controlled by the voltage output of an IoT device. Idea by Pause08 / Microcontroller by Template - all from the [Noun Project](https://thenounproject.com)***
Like with sensors, the actual IoT device works on digital signals, not analog. This means to send an analog signal, the IoT device needs a digital to analog converter (DAC), either on the IoT device directly, or on a connector board. This will convert the 0s and 1s from the IoT device to an analog voltage that the actuator can use.
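A DAC's job can be modeled as the reverse mapping. Here is a simplified Python sketch, assuming a 10-bit value and a 3.3V output range:

```python
def dac_write(value, v_ref=3.3, bits=10):
    """Convert an n-bit digital value into the analog voltage a DAC would output."""
    levels = 2 ** bits
    value = max(0, min(levels - 1, value))  # clamp to the valid range
    return value / (levels - 1) * v_ref

print(dac_write(0))               # 0.0 - actuator off
print(dac_write(1023))            # 3.3 - full voltage
print(round(dac_write(512), 2))   # 1.65 - roughly half voltage
```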
@ -174,14 +174,14 @@ This means in one second you have 25 5V pulses of 0.02s that rotate the motor, e
***PWM rotation of a motor at 75RPM. motor by Bakunetsu Kaito / Microcontroller by Template - all from the [Noun Project](https://thenounproject.com)***
You can change the motor speed by changing the size of the pulses. For example, with the same motor you can keep the same cycle time of 0.04s, with the on pulse halved to 0.01s, and the off pulse increasing to 0.03s. You have the same number of pulses per second (25), but each on pulse is half the length. A half length pulse only turns the motor one twentieth of a rotation, and at 25 pulses a second will complete 1.25 rotations per second, or 75rpm. By changing the pulse speed of a digital signal you've halved the speed of an analog motor.
You can change the motor speed by changing the size of the pulses. For example, with the same motor you can keep the same cycle time of 0.04s, with the on pulse halved to 0.01s, and the off pulse increasing to 0.03s. You have the same number of pulses per second (25), but each on pulse is half the length. A half length pulse only turns the motor one twentieth of a rotation, and at 25 pulses a second will complete 1.25 rotations per second or 75rpm. By changing the pulse speed of a digital signal you've halved the speed of an analog motor.
```output
25 pulses per second x 0.05 rotations per pulse = 1.25 rotations per second
1.25 rotations per second x 60 seconds in a minute = 75rpm
```
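That arithmetic can be captured in a short script. The rotations-per-pulse figures below are the ones from this example motor: 1/10 of a rotation for a full-length 0.02s pulse, 1/20 for a half-length 0.01s pulse:

```python
def motor_rpm(pulses_per_second, rotations_per_pulse):
    """RPM of a PWM-driven motor from pulse rate and rotation per pulse."""
    return pulses_per_second * rotations_per_pulse * 60  # 60 seconds in a minute

# Full-length 0.02s pulses, 25 per second:
print(motor_rpm(25, 1 / 10))   # 150.0
# Half-length 0.01s pulses, still 25 per second:
print(motor_rpm(25, 1 / 20))   # 75.0
```

Halving the pulse length halves the speed, exactly as described above.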
✅ How would you keep the motor rotation smooth, especially at low speeds? Would you use a small number long pulses with long pauses, or lots of very short pulses with very short pauses?
✅ How would you keep the motor rotation smooth, especially at low speeds? Would you use a small number of long pulses with long pauses or lots of very short pulses with very short pauses?
> 💁 Some sensors also use PWM to convert analog signals to digital signals.
@ -189,7 +189,7 @@ You can change the motor speed by changing the size of the pulses. For example,
### Digital actuators
Digital actuators, like digital sensors, either have two states controlled by a high or low voltage, or have a DAC built in so they can convert a digital signal to an analog one.
Digital actuators, like digital sensors, either have two states controlled by a high or low voltage or have a DAC built in so they can convert a digital signal to an analog one.
One simple digital actuator is an LED. When a device sends a digital signal of 1, a high voltage is sent that lights the LED. When a digital signal of 0 is sent, the voltage drops to 0V and the LED turns off.
@ -207,7 +207,7 @@ More advanced digital actuators, such as screens, require the digital data to be
The challenge in the last two lessons was to list as many IoT devices as you can that are in your home, school or workplace and decide if they are built around microcontrollers or single-board computers, or even a mixture of both.
For every device you listed, what sensors and actuators are they connected to? What is the purpose of each sensor and actuator connected to these devices.
For every device you listed, what sensors and actuators are they connected to? What is the purpose of each sensor and actuator connected to these devices?
## Post-lecture quiz

@ -1,12 +1,12 @@
import time
import seeed_si114x
from grove.grove_light_sensor_v1_2 import GroveLightSensor
from grove.grove_led import GroveLed
light_sensor = seeed_si114x.grove_si114x()
light_sensor = GroveLightSensor(0)
led = GroveLed(5)
while True:
light = light_sensor.ReadVisible
light = light_sensor.light
print('Light level:', light)
if light < 300:

@ -1,9 +1,10 @@
import time
import seeed_si114x
from grove.grove_light_sensor_v1_2 import GroveLightSensor
light_sensor = seeed_si114x.grove_si114x()
light_sensor = GroveLightSensor(0)
while True:
light = light_sensor.ReadVisible
light = light_sensor.light
print('Light level:', light)
time.sleep(1)

@ -20,7 +20,7 @@ Otherwise
### Connect the LED
The Grove LED comes as a module with a selection of LEDs, allowing you to chose the color.
The Grove LED comes as a module with a selection of LEDs, allowing you to choose the color.
#### Task - connect the LED
@ -32,7 +32,7 @@ Connect the LED.
LEDs are light-emitting diodes, and diodes are electronic devices that can only carry current one way. This means the LED needs to be connected the right way round, otherwise it won't work.
One of the legs of the LED is the positive pin, the other is the negative pin. The LED is not perfectly round, and is slightly flatter on one side. The side that is slightly flatter is the negative pin. When you connect the LED to the module, make sure the pin by the rounded side is connected to the socket marked **+** on the outside of the module, and the flatter side is connected to the socket closer to the middle of the module.
One of the legs of the LED is the positive pin, the other is the negative pin. The LED is not perfectly round and is slightly flatter on one side. The slightly flatter side is the negative pin. When you connect the LED to the module, make sure the pin by the rounded side is connected to the socket marked **+** on the outside of the module, and the flatter side is connected to the socket closer to the middle of the module.
1. The LED module has a spin button that allows you to control the brightness. Turn this all the way up to start with by rotating it anti-clockwise as far as it will go using a small Phillips head screwdriver.
@ -44,7 +44,7 @@ Connect the LED.
## Program the nightlight
The nightlight can now be programmed using the Grove sunlight sensor and the Grove LED.
The nightlight can now be programmed using the Grove light sensor and the Grove LED.
### Task - program the nightlight
@ -93,7 +93,7 @@ Program the nightlight.
python3 app.py
```
You should see light values being output to the console.
Light values will be output to the console.
```output
pi@raspberrypi:~/nightlight $ python3 app.py
@ -105,7 +105,7 @@ Program the nightlight.
Light level: 290
```
1. Cover and uncover the sunlight sensor. Notice how the LED will light up if the light level is 300 or less, and turn off when the light level is greater than 300.
1. Cover and uncover the light sensor. Notice how the LED will light up if the light level is 300 or less, and turn off when the light level is greater than 300.
> 💁 If the LED doesn't turn on, make sure it is connected the right way round, and the spin button is set to full on.
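The decision the nightlight makes is just a threshold comparison, which can be pulled out and checked without any hardware. A sketch using the 300 cutoff from this code:

```python
def led_should_be_on(light_level, threshold=300):
    """Nightlight logic: turn the LED on when it's dark (low light level)."""
    return light_level < threshold

print(led_should_be_on(290))   # True  - dark enough, LED on
print(led_should_be_on(450))   # False - bright, LED off
```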

@ -4,33 +4,31 @@ In this part of the lesson, you will add a light sensor to your Raspberry Pi.
## Hardware
The sensor for this lesson is a **sunlight sensor** that uses [photodiodes](https://wikipedia.org/wiki/Photodiode) to convert visible and infrared light to an electrical signal. This is an analog sensor that sends an integer value from 0 to 1,023 indicating a relative amount of light, but this can be used to calculate exact values in [lux](https://wikipedia.org/wiki/Lux) by taking data from the separate infrared and visible light sensors.
The sensor for this lesson is a **light sensor** that uses a [photodiode](https://wikipedia.org/wiki/Photodiode) to convert light to an electrical signal. This is an analog sensor that sends an integer value from 0 to 1,000 indicating a relative amount of light that doesn't map to any standard unit of measurement such as [lux](https://wikipedia.org/wiki/Lux).
The sunlight sensor is an external Grove sensor and needs to be connected to the Grove Base hat on the Raspberry Pi.
The light sensor is an external Grove sensor and needs to be connected to the Grove Base hat on the Raspberry Pi.
### Connect the sunlight sensor
### Connect the light sensor
The Grove sunlight sensor that is used to detect the light levels needs to be connected to the Raspberry Pi.
The Grove light sensor that is used to detect the light levels needs to be connected to the Raspberry Pi.
#### Task - connect the sunlight sensor
#### Task - connect the light sensor
Connect the sunlight sensor
Connect the light sensor
![A grove sunlight sensor](../../../images/grove-sunlight-sensor.png)
![A grove light sensor](../../../images/grove-light-sensor.png)
1. Insert one end of a Grove cable into the socket on the sunlight sensor module. It will only go in one way round.
1. Insert one end of a Grove cable into the socket on the light sensor module. It will only go in one way round.
1. With the Raspberry Pi powered off, connect the other end of the Grove cable to the analog socket marked **A0** on the Grove Base hat attached to the Pi. This socket is the second from the right, on the row of sockets next to the GPIO pins.
1. With the Raspberry Pi powered off, connect the other end of the Grove cable to one of the three I<sup>2</sup>C sockets marked **I2C** on the Grove Base hat attached to the Pi. This socket is the second from the right, on the row of sockets next to the GPIO pins.
![The grove light sensor connected to socket A0](../../../images/pi-light-sensor.png)
> 💁 I<sup>2</sup>C is a way sensors and actuators can communicate with an IoT device. It will be covered in more detail in a later lesson.
## Program the light sensor
![The grove sunlight sensor connected to socket A0](../../../images/pi-sunlight-sensor.png)
The device can now be programmed using the Grove light sensor.
## Program the sunlight sensor
The device can now be programmed using the Grove sunlight sensor.
### Task - program the sunlight sensor
### Task - program the light sensor
Program the device.
@ -38,44 +36,36 @@ Program the device.
1. Open the nightlight project in VS Code that you created in the previous part of this assignment, either running directly on the Pi or connected using the Remote SSH extension.
1. Run the following command to install a pip package for working with the sunlight sensor:
```sh
pip3 install seeed-python-si114x
```
Not all the libraries for the Grove Sensors are installed with the Grove install script you used in an earlier lesson. Some need additional packages.
1. Open the `app.py` file and remove all code from it
1. Add the following code to the `app.py` file to import some required libraries:
```python
import time
import seeed_si114x
from grove.grove_light_sensor_v1_2 import GroveLightSensor
```
The `import time` statement imports the `time` module that will be used later in this assignment.
The `import seeed_si114x` statement imports the `seeed_si114x` module that has code to interact with the Grove sunlight sensor.
The `from grove.grove_light_sensor_v1_2 import GroveLightSensor` statement imports the `GroveLightSensor` from the Grove Python libraries. This library has code to interact with a Grove light sensor, and was installed globally during the Pi setup.
1. Add the following code after the code above to create an instance of the class that manages the light sensor:
```python
light_sensor = seeed_si114x.grove_si114x()
light_sensor = GroveLightSensor(0)
```
The line `light_sensor = seeed_si114x.grove_si114x()` creates an instance of the `grove_si114x` sunlight sensor class.
The line `light_sensor = GroveLightSensor(0)` creates an instance of the `GroveLightSensor` class connecting to pin **A0** - the analog Grove pin that the light sensor is connected to.
1. Add an infinite loop after the code above to poll the light sensor value and print it to the console:
```python
while True:
light = light_sensor.ReadVisible
light = light_sensor.light
print('Light level:', light)
```
This will read the current sunlight level on a scale of 0-1,023 using the `ReadVisible` property of the `grove_si114x` class. This value is then printed to the console.
This will read the current light level on a scale of 0-1,000 using the `light` property of the `GroveLightSensor` class. This property reads the analog value from the pin. This value is then printed to the console.
1. Add a small sleep of one second at the end of the `loop` as the light levels don't need to be checked continuously. A sleep reduces the power consumption of the device.
@ -89,16 +79,16 @@ Program the device.
python3 app.py
```
You should see sunlight values being output to the console. Cover and uncover the sunlight sensor to see the values change:
Light values will be output to the console. Cover and uncover the light sensor, and the values will change:
```output
pi@raspberrypi:~/nightlight $ python3 app.py
Light level: 259
Light level: 265
Light level: 265
Light level: 584
Light level: 550
Light level: 497
Light level: 634
Light level: 634
Light level: 634
Light level: 230
Light level: 104
Light level: 290
```
> 💁 You can find this code in the [code-sensor/pi](code-sensor/pi) folder.

@ -91,7 +91,7 @@ Program the nightlight.
python3 app.py
```
You should see light values being output to the console.
Light values will be output to the console.
```output
(.venv) ➜ GroveTest python3 app.py
@ -101,7 +101,7 @@ Program the nightlight.
Light level: 253
```
1. Change the *Value* or the *Random* settings to vary the light level above and below 300. You will see the LED turn on and off.
1. Change the *Value* or the *Random* settings to vary the light level above and below 300. The LED will turn on and off.
![The LED in the CounterFit app turning on and off as the light level changes](../../../images/virtual-device-running-assignment-1-1.gif)

@ -87,7 +87,7 @@ Program the device.
python3 app.py
```
You should see light values being output to the console. Initially this value will be 0.
Light values will be output to the console. Initially this value will be 0.
1. From the CounterFit app, change the value of the light sensor that will be read by the app. You can do this in one of two ways:
@ -95,7 +95,7 @@ Program the device.
* Check the *Random* checkbox, and enter a *Min* and *Max* value, then select the **Set** button. Every time the sensor reads a value, it will read a random number between *Min* and *Max*.
You should see the values you set appearing in the console. Change the *Value* or the *Random* settings to see the value change.
The values you set will be output to the console. Change the *Value* or the *Random* settings to make the value change.
```output
(.venv) ➜ GroveTest python3 app.py

@ -20,7 +20,7 @@ Otherwise
### Connect the LED
The Grove LED comes as a module with a selection of LEDs, allowing you to chose the color.
The Grove LED comes as a module with a selection of LEDs, allowing you to choose the color.
#### Task - connect the LED
@ -32,7 +32,7 @@ Connect the LED.
LEDs are light-emitting diodes, and diodes are electronic devices that can only carry current one way. This means the LED needs to be connected the right way round, otherwise it won't work.
One of the legs of the LED is the positive pin, the other is the negative pin. The LED is not perfectly round, and is slightly flatter on one side. The side that is slightly flatter is the negative pin. When you connect the LED to the module, make sure the pin by the rounded side is connected to the socket marked **+** on the outside of the module, and the flatter side is connected to the socket closer to the middle of the module.
One of the legs of the LED is the positive pin, the other is the negative pin. The LED is not perfectly round, and is slightly flatter on one side. The slightly flatter side is the negative pin. When you connect the LED to the module, make sure the pin by the rounded side is connected to the socket marked **+** on the outside of the module, and the flatter side is connected to the socket closer to the middle of the module.
1. The LED module has a spin button that allows you to control the brightness. Turn this all the way up to start with by rotating it anti-clockwise as far as it will go using a small Phillips head screwdriver.
@ -83,7 +83,7 @@ Program the nightlight.
1. Reconnect the Wio Terminal to your computer, and upload the new code as you did before.
1. Connect the Serial Monitor. You should see light values being output to the terminal.
1. Connect the Serial Monitor. Light values will be output to the terminal.
```output
> Executing task: platformio device monitor <

@ -40,7 +40,7 @@ Program the device.
Serial.println(light);
```
This code reads an analog value from the `WIO_LIGHT` pin. This reads a value from 0-1,023 from the on-board light sensor. This value is then sent to the serial port so you can see it in the Serial Monitor when this code is running. `Serial.print` writes the text without a new line on the end, so each line will start with `Light value:` and end with the actual light value.
This code reads an analog value from the `WIO_LIGHT` pin. This reads a value from 0-1,023 from the on-board light sensor. This value is then sent to the serial port so you can read it in the Serial Monitor when this code is running. `Serial.print` writes the text without a new line on the end, so each line will start with `Light value:` and end with the actual light value.
1. Add a small delay of one second (1,000ms) at the end of the `loop` as the light levels don't need to be checked continuously. A delay reduces the power consumption of the device.
@ -50,7 +50,7 @@ Program the device.
1. Reconnect the Wio Terminal to your computer, and upload the new code as you did before.
1. Connect the Serial Monitor. You should see light values being output to the terminal. Cover and uncover the light sensor on the back of the Wio Terminal to see the values change.
1. Connect the Serial Monitor. Light values will be output to the terminal. Cover and uncover the light sensor on the back of the Wio Terminal, and the values will change.
```output
> Executing task: platformio device monitor <

@ -88,7 +88,7 @@ Messages can be sent with a quality of service (QoS), which determines the guara
Although the name includes Message Queueing (the MQ in MQTT), it doesn't actually support message queues. This means that if a client disconnects, then reconnects, it won't receive messages sent during the disconnection, except for those messages that it had already started to process using the QoS process. Messages can have a retained flag set on them. If this is set, the MQTT broker will store the last message sent on a topic with this flag, and send this to any clients who later subscribe to the topic. This way, the clients will always get the latest message.
MQTT also supports a keep alive function that checks to see if the connection is still alive during long gaps between messages.
MQTT also supports a keep alive function that checks if the connection is still alive during long gaps between messages.
> 🦟 [Mosquitto from the Eclipse Foundation](https://mosquitto.org) has a free MQTT broker you can run yourself to experiment with MQTT, along with a public MQTT broker you can use to test your code, hosted at [test.mosquitto.org](https://test.mosquitto.org).
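The retained-message behavior described above can be illustrated with a toy in-memory model. This is a sketch of the semantics only, not a real MQTT broker:

```python
class ToyBroker:
    """Minimal model of MQTT retained messages: the broker keeps only the
    last retained message per topic and replays it to late subscribers."""
    def __init__(self):
        self.retained = {}     # topic -> last retained payload
        self.subscribers = {}  # topic -> list of callbacks

    def publish(self, topic, payload, retain=False):
        if retain:
            self.retained[topic] = payload  # only the latest is kept
        for callback in self.subscribers.get(topic, []):
            callback(payload)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)
        if topic in self.retained:
            # A late subscriber still gets the last retained message
            callback(self.retained[topic])

broker = ToyBroker()
broker.publish('nightlight/telemetry', '{"light": 400}', retain=True)

received = []
broker.subscribe('nightlight/telemetry', received.append)
print(received)   # ['{"light": 400}'] - the retained message was replayed
```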
@ -198,13 +198,13 @@ Configure a Python virtual environment and install the MQTT pip packages.
source ./.venv/bin/activate
```
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to see this:
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to get this version:
```sh
python --version
```
You should see the following:
The output will be similar to the following:
```output
(.venv) ➜ nightlight-server python --version
@ -249,7 +249,7 @@ Write the server code.
code .
```
1. When VS Code launches, it will activate the Python virtual environment. You will see this in the bottom status bar:
1. When VS Code launches, it will activate the Python virtual environment. This will be reported in the bottom status bar:
![VS Code showing the selected virtual environment](../../../images/vscode-virtual-env.png)
@ -257,7 +257,7 @@ Write the server code.
![VS Code Kill the active terminal instance button](../../../images/vscode-kill-terminal.png)
1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal*, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, and you will see the call to activate this in the terminal, as well as having the name of the virtual environment (`.venv`) in the prompt:
1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal*, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, with the call to activate this appearing in the terminal. The name of the virtual environment (`.venv`) will also be in the prompt:
```output
➜ nightlight source .venv/bin/activate
@ -311,7 +311,7 @@ Write the server code.
The app will start listening to messages from the IoT device.
1. Make sure your device is running and sending telemetry messages. Adjust the light levels detected by your physical or virtual device. You will see messages being received in the terminal.
1. Make sure your device is running and sending telemetry messages. Adjust the light levels detected by your physical or virtual device. Messages being received will be printed to the terminal.
```output
(.venv) ➜ nightlight-server python app.py
@ -319,7 +319,7 @@ Write the server code.
Message received: {'light': 400}
```
The app.py file in the nightlight virtual environment has to be running for the app.py file in the nightlight-server virtual environment to recieve the messages being sent.
The app.py file in the nightlight virtual environment has to be running for the app.py file in the nightlight-server virtual environment to receive the messages being sent.
> 💁 You can find this code in the [code-server/server](code-server/server) folder.
@ -382,7 +382,7 @@ The next step for our Internet controlled nightlight is for the server code to s
1. Run the code as before
1. Adjust the light levels detected by your physical or virtual device. You will see messages being received and commands being sent in the terminal:
1. Adjust the light levels detected by your physical or virtual device. Messages being received and commands being sent will be written to the terminal:
```output
(.venv) ➜ nightlight-server python app.py

@ -15,8 +15,8 @@ framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -4,7 +4,7 @@ In this part of the lesson, you will subscribe to commands sent from an MQTT bro
## Subscribe to commands
The next step is to subscribe to the commands sent from the MQTT broker, and respond to them.
The next step is to subscribe to the commands sent from the MQTT broker and respond to them.
### Task
@ -20,7 +20,7 @@ Subscribe to commands.
server_command_topic = id + '/commands'
```
The `server_command_topic` is the MQTT topic the device will subscribe to to receive LED commands.
The `server_command_topic` is the MQTT topic the device will subscribe to in order to receive LED commands.
1. Add the following code just above the main loop, after the `mqtt_client.loop_start()` line:
@ -40,13 +40,13 @@ Subscribe to commands.
This code defines a function, `handle_command`, that reads a message as a JSON document and looks for the value of the `led_on` property. If it is set to `True` the LED is turned on, otherwise it is turned off.
The MQTT client subscribes to the topic that the server will send messages on, and sets the `handle_command` function to be called when a message is received.
The MQTT client subscribes to the topic that the server will send messages on and sets the `handle_command` function to be called when a message is received.
> 💁 The `on_message` handler is called for all topics subscribed to. If you later write code that listens to multiple topics, you can get the topic that the message was sent to from the `message` object passed to the handler function.
1. Run the code in the same way as you ran the code from the previous part of the assignment. If you are using a virtual IoT device, then make sure the CounterFit app is running and the light sensor and LED have been created on the correct pins.
1. Adjust the light levels detected by your physical or virtual device. You will see messages being received and commands being sent in the terminal. You will also see the LED being turned on and off depending on the light level.
1. Adjust the light levels detected by your physical or virtual device. Messages being received and commands being sent will be written to the terminal. The LED will also be turned on and off depending on the light level.
> 💁 You can find this code in the [code-commands/virtual-device](code-commands/virtual-device) folder or the [code-commands/pi](code-commands/pi) folder.
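The command handler's logic can also be exercised without a broker or hardware. Below is a sketch with stand-in LED and message objects; in the real code, `message` comes from the MQTT client and `led` is the Grove LED:

```python
import json

class FakeLed:
    """Stand-in for the Grove LED so the handler can run without hardware."""
    def __init__(self):
        self.is_on = False
    def on(self):
        self.is_on = True
    def off(self):
        self.is_on = False

led = FakeLed()

def handle_command(client, userdata, message):
    # Read the message payload as a JSON document and act on led_on
    payload = json.loads(message.payload)
    if payload['led_on']:
        led.on()
    else:
        led.off()

# Simulate a command arriving from the server:
class FakeMessage:
    payload = b'{"led_on": true}'

handle_command(None, None, FakeMessage())
print(led.is_on)   # True
```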

@ -6,11 +6,11 @@ In this part of the lesson, you will connect your Wio Terminal to an MQTT broker
## Install the WiFi and MQTT Arduino libraries
To communicate with the MQTT broker, you need to install some Arduino libraries to use the WiFi chip in the Wio Terminal, and communicate with MQTT. When developing for Arduino devices, you can use a wide range of libraries that contain open-source code and implement a huge range of capabilities. Seeed publish libraries for the Wio Terminal that allow it to communicate over WiFi. Other developers have published libraries to communicate with MQTT brokers, and you will be using these with your device.
To communicate with the MQTT broker, you need to install some Arduino libraries to use the WiFi chip in the Wio Terminal, and communicate with MQTT. When developing for Arduino devices, you can use a wide range of libraries that contain open-source code and implement a huge range of capabilities. Seeed publishes libraries for the Wio Terminal that allow it to communicate over WiFi. Other developers have published libraries to communicate with MQTT brokers, and you will be using these with your device.
These libraries are provided as source code that can be imported automatically into PlatformIO, and compiled for your device. This way Arduino libraries will work on any device that supports the Arduino framework, assuming that the device has any specific hardware needed by that library. Some libraries, such as the Seeed WiFi libraries, are specific to certain hardware.
These libraries are provided as source code that can be imported automatically into PlatformIO and compiled for your device. This way Arduino libraries will work on any device that supports the Arduino framework, assuming that the device has any specific hardware needed by that library. Some libraries, such as the Seeed WiFi libraries, are specific to certain hardware.
Libraries can be installed globally and compiled in if needed, or into a specific project. For this assignment, the libraries will be installed into the project.
Libraries can be installed globally and compiled in if needed, or installed into a specific project. For this assignment, the libraries will be installed into the project.
✅ You can learn more about library management and how to find and install libraries in the [PlatformIO library documentation](https://docs.platformio.org/en/latest/librarymanager/index.html).
@ -24,8 +24,8 @@ Install the Arduino libraries.
```ini
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
@ -33,7 +33,7 @@ Install the Arduino libraries.
This imports the Seeed WiFi libraries. The `@ <number>` syntax refers to a specific version number of the library.
> 💁 You can remove the `@ <number>` to always use the latest version of the libraries, but there's no guarantees the later versions will work with the code below. The code here has been tested with this version of the libraries.
> 💁 You can remove the `@ <number>` to always use the latest version of the libraries, but there are no guarantees the later versions will work with the code below. The code here has been tested with this version of the libraries.
This is all you need to do to add the libraries. Next time PlatformIO builds the project it will download the source code for these libraries and compile it into your project.
@ -142,7 +142,7 @@ Connect to the MQTT broker.
const string CLIENT_NAME = ID + "nightlight_client";
```
Replace `<ID>` with a unique ID that will be used the name of this device client, and later for the topics that this device publishes and subscribes to. The *test.mosquitto.org* broker is public and used by many people, including other students working through this assignment. Having a unique MQTT client name and topic names ensures your code won't clash with anyone elses. You will also need this ID when you are creating the server code later in this assignment.
Replace `<ID>` with a unique ID that will be used as the name of this device client, and later for the topics that this device publishes and subscribes to. The *test.mosquitto.org* broker is public and used by many people, including other students working through this assignment. Having a unique MQTT client name and topic names ensures your code won't clash with anyone else's. You will also need this ID when you are creating the server code later in this assignment.
> 💁 You can use a website like [GUIDGen](https://www.guidgen.com) to generate a unique ID.
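If you prefer generating the ID in code, the standard `uuid` module can do it. This sketch is in Python; for the Wio Terminal device code in this lesson you would paste the generated value into the config header as a string:

```python
import uuid

# A random GUID makes client and topic names unlikely to clash with
# other people using the public test.mosquitto.org broker.
id = str(uuid.uuid4())

client_name = id + 'nightlight_client'
client_telemetry_topic = id + '/telemetry'

print(client_name)
print(client_telemetry_topic)
```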
@ -157,7 +157,7 @@ Connect to the MQTT broker.
PubSubClient client(wioClient);
```
This code creates a WiFi client using the Wio Terminal WiFi libraries, and uses it to create an MQTT client.
This code creates a WiFi client using the Wio Terminal WiFi libraries and uses it to create an MQTT client.
1. Below this code, add the following:
@ -183,7 +183,7 @@ Connect to the MQTT broker.
}
```
This function tests the connection to the MQTT broker and reconnects if it is not connected. It loops all the time it is not connected and attempts to connect using the unique client name defined in the config header file.
If the connection fails, it retries after 5 seconds.
@ -215,7 +215,7 @@ Connect to the MQTT broker.
This code starts by reconnecting to the MQTT broker. These connections can be broken easily, so it's worth regularly checking and reconnecting if necessary. It then calls the `loop` method on the MQTT client to process any messages that are coming in on the topic subscribed to. This app is single-threaded, so messages cannot be received on a background thread, therefore time on the main thread needs to be allocated to processing any messages that are waiting on the network connection.
Finally, a delay of 2 seconds ensures the light levels are not sent too often and reduces the power consumption of the device.
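The reconnect-then-process pattern described above is independent of any particular MQTT library. Here is a minimal plain-Python sketch of the same control flow — the `is_connected`/`connect` callables, retry delay, and attempt cap are illustrative assumptions, not part of the lesson's Arduino code:

```python
import time

def ensure_connected(is_connected, connect, retry_delay=5.0, max_attempts=10):
    """Loop while not connected, attempting to connect and waiting between tries."""
    attempts = 0
    while not is_connected():
        if connect():
            break
        attempts += 1
        if attempts >= max_attempts:
            raise ConnectionError("could not reach broker")
        time.sleep(retry_delay)

# Simulate a broker that only accepts the second connection attempt.
state = {"connected": False, "calls": 0}

def fake_connect():
    state["calls"] += 1
    state["connected"] = state["calls"] >= 2
    return state["connected"]

ensure_connected(lambda: state["connected"], fake_connect, retry_delay=0.01)
print(state["calls"])  # 2 - the first attempt fails, the second succeeds
```

The same shape appears in the Wio Terminal code: check the connection at the top of every `loop` iteration, and only proceed to publishing once connected.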
1. Upload the code to your Wio Terminal, and use the Serial Monitor to see the device connecting to WiFi and MQTT.

@ -19,7 +19,7 @@ Once you have temperature data, you can use the Jupyter Notebook in this repo to
1. Install some pip packages for Jupyter notebooks, along with libraries needed to manage and plot the data:
```sh
pip install --upgrade pip
pip install pandas
pip install matplotlib
pip install jupyter

@ -16,8 +16,8 @@ lib_deps =
seeed-studio/Grove Temperature And Humidity Sensor @ 1.0.1
knolleary/PubSubClient @ 2.8
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -15,8 +15,8 @@ framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -25,7 +25,7 @@ In this lesson we'll cover:
## What is the cloud?
Before the cloud, when a company wanted to provide services to their employees (such as databases or file storage), or to the public (such as websites), they would build and run a data center. This ranged from a room with a small number of computers, to a building with many computers. The company would manage everything, including:
* Buying computers
* Hardware maintenance
@ -51,7 +51,7 @@ These data centers can be multiple square kilometers in size. The images above w
✅ Do some research: Read up on the major clouds such as [Azure from Microsoft](https://azure.microsoft.com/?WT.mc_id=academic-17441-jabenn) or [GCP from Google](https://cloud.google.com). How many data centers do they have, and where are they in the world?
Using the cloud keeps costs down for companies, and allows them to focus on what they do best, leaving the cloud computing expertise in the hands of the provider. Companies no longer need to rent or buy data center space, pay different providers for connectivity and power, or employ experts. Instead, they can pay one monthly bill to the cloud provider to have everything taken care of.
The cloud provider can then use economies of scale to drive costs down, buying computers in bulk at lower costs, investing in tooling to reduce their workload for maintenance, even designing and building their own hardware to improve their cloud offering.

@ -4,16 +4,16 @@ from grove.grove_relay import GroveRelay
import json
from azure.iot.device import IoTHubDeviceClient, Message, MethodResponse
connection_string = '<connection_string>'
adc = ADC()
relay = GroveRelay(5)
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
def handle_method_request(request):
print("Direct method received - ", request.name)
@ -32,7 +32,7 @@ while True:
soil_moisture = adc.read(0)
print("Soil moisture:", soil_moisture)
message = Message(json.dumps({ 'soil_moisture': soil_moisture }))
device_client.send_message(message)
time.sleep(10)

@ -7,16 +7,16 @@ from counterfit_shims_grove.grove_relay import GroveRelay
import json
from azure.iot.device import IoTHubDeviceClient, Message, MethodResponse
connection_string = '<connection_string>'
adc = ADC()
relay = GroveRelay(5)
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
def handle_method_request(request):
print("Direct method received - ", request.name)
@ -35,7 +35,7 @@ while True:
soil_moisture = adc.read(0)
print("Soil moisture:", soil_moisture)
message = Message(json.dumps({ 'soil_moisture': soil_moisture }))
device_client.send_message(message)
time.sleep(10)

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -43,9 +43,9 @@ The next step is to connect your device to IoT Hub.
```python
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
```
1. Run this code. You will see your device connect.
@ -66,7 +66,7 @@ Now that your device is connected, you can send telemetry to the IoT Hub instead
1. Add the following code inside the `while True` loop, just before the sleep:
```python
message = Message(json.dumps({ 'soil_moisture': soil_moisture }))
device_client.send_message(message)
```

@ -110,7 +110,7 @@ The next step is to connect your device to IoT Hub.
initTime();
```
1. Add the following variable declaration to the top of the file, just below the include directives:
```cpp
IOTHUB_DEVICE_CLIENT_LL_HANDLE _device_ll_handle;

@ -254,12 +254,12 @@ You are now ready to create the event trigger.
1. From the VS Code terminal run the following command from inside the `soil-moisture-trigger` folder:
```sh
func new --name iot-hub-trigger --template "Azure Event Hub trigger"
```
This creates a new Function called `iot-hub-trigger`. The trigger will connect to the Event Hub compatible endpoint on the IoT Hub, so you can use an event hub trigger. There is no specific IoT Hub trigger.
This will create a folder inside the `soil-moisture-trigger` folder called `iot-hub-trigger` that contains this function. This folder will have the following files inside it:
* `__init__.py` - this is the Python code file that contains the trigger, using the standard Python file name convention to turn this folder into a Python module.
@ -313,7 +313,7 @@ This will create a folder inside the `soil-moisture-trigger` folder called `iot_
func start
```
The Functions app will start up, and will discover the `iot-hub-trigger` function. It will then process any events that have already been sent to the IoT Hub in the past day.
```output
(.venv) ➜ soil-moisture-trigger func start
@ -325,23 +325,23 @@ This will create a folder inside the `soil-moisture-trigger` folder called `iot_
Functions:
iot-hub-trigger: eventHubTrigger
For detailed output, run func with --verbose flag.
[2021-05-05T02:44:07.517Z] Worker process started and initialized.
[2021-05-05T02:44:09.202Z] Executing 'Functions.iot-hub-trigger' (Reason='(null)', Id=802803a5-eae9-4401-a1f4-176631456ce4)
[2021-05-05T02:44:09.205Z] Trigger Details: PartionId: 0, Offset: 1011240-1011632, EnqueueTimeUtc: 2021-05-04T19:04:04.2030000Z-2021-05-04T19:04:04.3900000Z, SequenceNumber: 2546-2547, Count: 2
[2021-05-05T02:44:09.352Z] Python EventHub trigger processed an event: {"soil_moisture":628}
[2021-05-05T02:44:09.354Z] Python EventHub trigger processed an event: {"soil_moisture":624}
[2021-05-05T02:44:09.395Z] Executed 'Functions.iot-hub-trigger' (Succeeded, Id=802803a5-eae9-4401-a1f4-176631456ce4, Duration=245ms)
```
Each call to the function will be surrounded by an `Executing 'Functions.iot-hub-trigger'`/`Executed 'Functions.iot-hub-trigger'` block in the output, so you can see how many messages were processed in each function call.
> If you get the following error:
```output
The listener for function 'Functions.iot-hub-trigger' was unable to start. Microsoft.WindowsAzure.Storage: Connection refused. System.Net.Http: Connection refused. System.Private.CoreLib: Connection refused.
```
Then check Azurite is running and you have set the `AzureWebJobsStorage` in the `local.settings.json` file to `UseDevelopmentStorage=true`.
@ -561,7 +561,7 @@ Deployment successful.
Remote build succeeded!
Syncing triggers...
Functions in soil-moisture-sensor:
iot-hub-trigger - [eventHubTrigger]
```
Make sure your IoT device is running. Change the moisture levels by adjusting the soil moisture, or moving the sensor in and out of the soil. You will see the relay turn on and off as the soil moisture changes.

@ -35,7 +35,7 @@ Some hints:
relay_on: [GET,POST] http://localhost:7071/api/relay_on
iot-hub-trigger: eventHubTrigger
```
Paste the URL into your browser and hit `return`, or `Ctrl+click` (`Cmd+click` on macOS) the link in the terminal window in VS Code to open it in your default browser. This will run the trigger.

@ -13,9 +13,9 @@ x509 = X509("./soil-moisture-sensor-x509-cert.pem", "./soil-moisture-sensor-x509
device_client = IoTHubDeviceClient.create_from_x509_certificate(x509, host_name, device_id)
print('Connecting')
device_client.connect()
print('Connected')
def handle_method_request(request):
print("Direct method received - ", request.name)
@ -34,7 +34,7 @@ while True:
soil_moisture = adc.read(0)
print("Soil moisture:", soil_moisture)
message = Message(json.dumps({ 'soil_moisture': soil_moisture }))
device_client.send_message(message)
time.sleep(10)

@ -16,9 +16,9 @@ x509 = X509("./soil-moisture-sensor-x509-cert.pem", "./soil-moisture-sensor-x509
device_client = IoTHubDeviceClient.create_from_x509_certificate(x509, host_name, device_id)
print('Connecting')
device_client.connect()
print('Connected')
def handle_method_request(request):
print("Direct method received - ", request.name)
@ -37,7 +37,7 @@ while True:
soil_moisture = adc.read(0)
print("Soil moisture:", soil_moisture)
message = Message(json.dumps({ 'soil_moisture': soil_moisture }))
device_client.send_message(message)
time.sleep(10)

@ -4,7 +4,7 @@ import pynmea2
import json
from azure.iot.device import IoTHubDeviceClient, Message
connection_string = '<connection_string>'
serial = serial.Serial('/dev/ttyAMA0', 9600, timeout=1)
serial.reset_input_buffer()
@ -12,9 +12,9 @@ serial.flush()
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
def printGPSData(line):
msg = pynmea2.parse(line)

@ -7,15 +7,15 @@ import pynmea2
import json
from azure.iot.device import IoTHubDeviceClient, Message
connection_string = '<connection_string>'
serial = counterfit_shims_serial.Serial('/dev/ttyAMA0')
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
def send_gps_data(line):
msg = pynmea2.parse(line)

@ -158,7 +158,7 @@ Once data is flowing into your IoT Hub, you can write some serverless code to li
1. Use the Azurite app as a local storage emulator
1. Run your functions app to ensure it is receiving events from your GPS device. Make sure your IoT device is also running and sending GPS data.
```output
Python EventHub trigger processed an event: {"gps": {"lat": 47.73481, "lon": -122.25701}}
@ -258,7 +258,7 @@ The data will be saved as a JSON blob with the following format:
> pip install --upgrade pip
> ```
1. In the `__init__.py` file for the `iot-hub-trigger`, add the following import statements:
```python
import json

@ -4,16 +4,16 @@ from grove.grove_relay import GroveRelay
import json
from azure.iot.device import IoTHubDeviceClient, Message, MethodResponse
connection_string = '<connection_string>'
adc = ADC()
relay = GroveRelay(5)
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
def handle_method_request(request):
print("Direct method received - ", request.name)
@ -32,7 +32,7 @@ while True:
soil_moisture = adc.read(0)
print("Soil moisture:", soil_moisture)
message = Message(json.dumps({ 'soil_moisture': soil_moisture }))
device_client.send_message(message)
time.sleep(10)

@ -7,16 +7,16 @@ from counterfit_shims_grove.grove_relay import GroveRelay
import json
from azure.iot.device import IoTHubDeviceClient, Message, MethodResponse
connection_string = '<connection_string>'
adc = ADC()
relay = GroveRelay(5)
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
def handle_method_request(request):
print("Direct method received - ", request.name)
@ -35,7 +35,7 @@ while True:
soil_moisture = adc.read(0)
print("Soil moisture:", soil_moisture)
message = Message(json.dumps({ 'soil_moisture': soil_moisture }))
device_client.send_message(message)
time.sleep(10)

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -13,8 +13,8 @@ platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -13,12 +13,12 @@ platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0
bblanchon/ArduinoJson @ 6.17.3
build_flags =
-w

@ -23,6 +23,10 @@ In this lesson we'll cover:
* [Using developer devices to simulate multiple IoT devices](#using-developer-devices-to-simulate-multiple-iot-devices)
* [Moving to production](#moving-to-production)
> 🗑 This is the last lesson in this project, so after completing this lesson and the assignment, don't forget to clean up your cloud services. You will need the services to complete the assignment, so make sure to complete that first.
>
> Refer to [the clean up your project guide](../../../clean-up.md) if necessary for instructions on how to do this.
## Architect complex IoT applications
IoT applications are made up of many components, including a variety of things and a variety of internet services.

@ -0,0 +1,5 @@
.pio
.vscode/.browse.c_cpp.db*
.vscode/c_cpp_properties.json
.vscode/launch.json
.vscode/ipch

@ -0,0 +1,7 @@
{
// See http://go.microsoft.com/fwlink/?LinkId=827846
// for the documentation about the extensions.json format
"recommendations": [
"platformio.platformio-ide"
]
}

@ -0,0 +1,39 @@
This directory is intended for project header files.
A header file is a file containing C declarations and macro definitions
to be shared between several project source files. You request the use of a
header file in your project source file (C, C++, etc.) located in the `src` folder
by including it, with the C preprocessing directive `#include`.
```src/main.c
#include "header.h"
int main (void)
{
...
}
```
Including a header file produces the same results as copying the header file
into each source file that needs it. Such copying would be time-consuming
and error-prone. With a header file, the related declarations appear
in only one place. If they need to be changed, they can be changed in one
place, and programs that include the header file will automatically use the
new version when next recompiled. The header file eliminates the labor of
finding and changing all the copies as well as the risk that a failure to
find one copy will result in inconsistencies within a program.
In C, the usual convention is to give header files names that end with `.h`.
It is most portable to use only letters, digits, dashes, and underscores in
header file names, and at most one dot.
Read more about using header files in official GCC documentation:
* Include Syntax
* Include Operation
* Once-Only Headers
* Computed Includes
https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html

@ -0,0 +1,46 @@
This directory is intended for project specific (private) libraries.
PlatformIO will compile them to static libraries and link into executable file.
The source code of each library should be placed in its own separate directory
("lib/your_library_name/[here are source files]").
For example, see a structure of the following two libraries `Foo` and `Bar`:
|--lib
| |
| |--Bar
| | |--docs
| | |--examples
| | |--src
| | |- Bar.c
| | |- Bar.h
| | |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
| |
| |--Foo
| | |- Foo.c
| | |- Foo.h
| |
| |- README --> THIS FILE
|
|- platformio.ini
|--src
|- main.c
and the contents of `src/main.c`:
```
#include <Foo.h>
#include <Bar.h>
int main (void)
{
...
}
```
PlatformIO Library Dependency Finder will automatically find dependent
libraries by scanning project source files.
More information about PlatformIO Library Dependency Finder
- https://docs.platformio.org/page/librarymanager/ldf.html

@ -0,0 +1,16 @@
; PlatformIO Project Configuration File
;
; Build options: build flags, source filter
; Upload options: custom upload port, speed and extra flags
; Library options: dependencies, extra library storages
; Advanced options: extra scripting
;
; Please visit documentation for the other options and examples
; https://docs.platformio.org/page/projectconf.html
[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Grove Ranging sensor - VL53L0X @ ^1.1.1

@ -0,0 +1,31 @@
#include <Arduino.h>
#include "Seeed_vl53l0x.h"
Seeed_vl53l0x VL53L0X;
void setup()
{
Serial.begin(9600);
while (!Serial)
; // Wait for Serial to be ready
delay(1000);
VL53L0X.VL53L0X_common_init();
VL53L0X.VL53L0X_high_accuracy_ranging_init();
}
void loop()
{
VL53L0X_RangingMeasurementData_t RangingMeasurementData;
memset(&RangingMeasurementData, 0, sizeof(VL53L0X_RangingMeasurementData_t));
VL53L0X.PerformSingleRangingMeasurement(&RangingMeasurementData);
Serial.print("Distance = ");
Serial.print(RangingMeasurementData.RangeMilliMeter);
Serial.println(" mm");
delay(1000);
}

@ -0,0 +1,11 @@
This directory is intended for PlatformIO Unit Testing and project tests.
Unit Testing is a software testing method by which individual units of
source code, sets of one or more MCU program modules together with associated
control data, usage procedures, and operating procedures, are tested to
determine whether they are fit for use. Unit testing finds problems early
in the development cycle.
More information about PlatformIO Unit Testing:
- https://docs.platformio.org/page/plus/unit-testing.html

@ -89,7 +89,7 @@ Program the device.
Distance = 151 mm
```
The rangefinder is on the back of the sensor, so make sure you use the correct side when measuring distance.
![The rangefinder on the back of the time of flight sensor pointing at a banana](../../../images/time-of-flight-banana.png)

@ -38,3 +38,62 @@ The Wio Terminal can now be programmed to use the attached time of flight sensor
1. Create a brand new Wio Terminal project using PlatformIO. Call this project `distance-sensor`. Add code in the `setup` function to configure the serial port.
1. Add a library dependency for the Seeed Grove time of flight distance sensor library to the project's `platformio.ini` file:
```ini
lib_deps =
seeed-studio/Grove Ranging sensor - VL53L0X @ ^1.1.1
```
1. In `main.cpp`, add the following below the existing include directives to declare an instance of the `Seeed_vl53l0x` class to interact with the time of flight sensor:
```cpp
#include "Seeed_vl53l0x.h"
Seeed_vl53l0x VL53L0X;
```
1. Add the following to the bottom of the `setup` function to initialize the sensor:
```cpp
VL53L0X.VL53L0X_common_init();
VL53L0X.VL53L0X_high_accuracy_ranging_init();
```
1. In the `loop` function, read a value from the sensor:
```cpp
VL53L0X_RangingMeasurementData_t RangingMeasurementData;
memset(&RangingMeasurementData, 0, sizeof(VL53L0X_RangingMeasurementData_t));
VL53L0X.PerformSingleRangingMeasurement(&RangingMeasurementData);
```
This code initializes a data structure to read data into, then passes it into the `PerformSingleRangingMeasurement` method where it will be populated with the distance measurement.
1. Below this, write out the distance measurement, then delay for 1 second:
```cpp
Serial.print("Distance = ");
Serial.print(RangingMeasurementData.RangeMilliMeter);
Serial.println(" mm");
delay(1000);
```
1. Build, upload and run this code. You will be able to see distance measurements with the serial monitor. Position objects near the sensor and you will see the distance measurement:
```output
Distance = 29 mm
Distance = 28 mm
Distance = 30 mm
Distance = 151 mm
```
The rangefinder is on the back of the sensor, so make sure you use the correct side when measuring distance.
![The rangefinder on the back of the time of flight sensor pointing at a banana](../../../images/time-of-flight-banana.png)
> 💁 You can find this code in the [code-proximity/wio-terminal](code-proximity/wio-terminal) folder.
😀 Your proximity sensor program was a success!

@ -1,5 +1,20 @@
# Consumer IoT - build a smart voice assistant
The food has been grown, driven to a processing plant, sorted for quality, sold in the store and now it's time to cook! One of the core pieces of any kitchen is a timer. Initially these started as hourglasses - your food was cooked when all the sand trickled down into the bottom bulb. They then became clockwork, then electric.
The latest iterations are now part of our smart devices. In kitchens in homes all throughout the world you'll hear cooks shouting "Hey Siri - set a 10 minute timer", or "Alexa - cancel my bread timer". No longer do you have to walk back to the kitchen to check on a timer; you can do it from your phone, or with a call out across the room.
In these 4 lessons you'll learn how to build a smart timer, using AI to recognize your voice, understand what you are asking for, and reply with information about your timer. You'll also add support for multiple languages.
> 💁 These lessons will use some cloud resources. If you don't complete all the lessons in this project, make sure you [Clean up your project](../clean-up.md).
## Topics
1. [Recognize speech with an IoT device](./lessons/1-speech-recognition/README.md)
1. [Understand language](./lessons/2-language-understanding/README.md)
1. [Set a timer and provide spoken feedback](./lessons/3-spoken-feedback/README.md)
1. [Support multiple languages](./lessons/4-multiple-language-support/README.md)
## Credits
All the lessons were written with ♥️ by [Jim Bennett](https://GitHub.com/JimBobBennett)

@ -0,0 +1,225 @@
# Recognize speech with an IoT device
Add a sketchnote if possible/appropriate
This video gives an overview of the Azure speech service, a topic that will be covered in this lesson:
[![How to get started using your Cognitive Services Speech resource from the Microsoft Azure YouTube channel](https://img.youtube.com/vi/iW0Fw0l3mrA/0.jpg)](https://www.youtube.com/watch?v=iW0Fw0l3mrA)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/33)
## Introduction
'Alexa, set a 12 minute timer'
'Alexa, timer status'
'Alexa, set an 8 minute timer called steam broccoli'
Smart devices are becoming more and more pervasive. Not just as smart speakers like HomePods, Echos and Google Homes, but embedded in our phones, watches, and even light fittings and thermostats.
> 💁 I have at least 19 devices in my home that have voice assistants, and that's just the ones I know about!
Voice control increases accessibility by allowing folks with limited movement to interact with devices. From permanent disabilities such as being born without arms, to temporary disabilities such as broken arms, or having your hands full of shopping or young children, being able to control our houses with our voice instead of our hands opens up a world of access. Shouting 'Hey Siri, close my garage door' whilst dealing with a baby change and an unruly toddler can be a small but effective improvement on life.
One of the more popular uses for voice assistants is setting timers, especially kitchen timers. Being able to set multiple timers with just your voice is a great help in the kitchen - no need to stop kneading dough or stirring soup, or to clean dumpling filling off your hands, to use a physical timer.
In this lesson you will learn about building voice recognition into IoT devices. You'll learn about microphones as sensors, how to capture audio from a microphone attached to an IoT device, and how to use AI to convert what is heard into text. Throughout the rest of this project you will build a smart kitchen timer, able to set timers using your voice with multiple languages.
In this lesson we'll cover:
* [Microphones](#microphones)
* [Capture audio from your IoT device](#capture-audio-from-your-iot-device)
* [Speech to text](#speech-to-text)
* [Convert speech to text](#convert-speech-to-text)
## Microphones
Microphones are analog sensors that convert sound waves into electrical signals. Vibrations in air cause components in the microphone to move tiny amounts, and these cause tiny changes in electrical signals. These changes are then amplified to generate an electrical output.
### Microphone types
Microphones come in a variety of types:
* Dynamic - Dynamic microphones have a magnet attached to a moving diaphragm that moves in a coil of wire, creating an electrical current. This is the opposite of most loudspeakers, which use an electrical current to move a magnet in a coil of wire, moving a diaphragm to create sound. This means speakers can be used as dynamic microphones, and dynamic microphones can be used as speakers. In devices such as intercoms, where a user is either listening or speaking but not both, one device can act as both a speaker and a microphone.
  Dynamic microphones don't need power to work; the electrical signal is created entirely by the microphone.
![Patti Smith singing into a Shure SM58 (dynamic cardioid type) microphone](../../../images/dynamic-mic.jpg)
***Beni Köhler / [Creative Commons Attribution-Share Alike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/deed.en)***
* Ribbon - Ribbon microphones are similar to dynamic microphones, except they have a metal ribbon instead of a diaphragm. This ribbon moves in a magnetic field generating an electrical current. Like dynamic microphones, ribbon microphones don't need power to work.
![Edmund Lowe, American actor, standing at radio microphone (labeled for (NBC) Blue Network), holding script, 1942](../../../images/ribbon-mic.jpg)
* Condenser - Condenser microphones have a thin metal diaphragm and a fixed metal backplate. Electricity is applied to both of these, and as the diaphragm vibrates the static charge between the plates changes, generating a signal. Condenser microphones need power to work - called *Phantom power*.
![C451B small-diaphragm condenser microphone by AKG Acoustics](../../../images/condenser-mic.jpg)
***[Harumphy](https://en.wikipedia.org/wiki/User:Harumphy) at [en.wikipedia](https://en.wikipedia.org/) / [Creative Commons Attribution-Share Alike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/deed.en)***
* MEMS - Microelectromechanical systems microphones, or MEMS, are microphones on a chip. They have a pressure-sensitive diaphragm etched onto a silicon chip, and work similarly to a condenser microphone. These microphones can be tiny, and integrated into circuitry.
![A MEMS microphone on a circuit board](../../../images/mems-microphone.png)
In the image above, the chip labelled **LEFT** is a MEMS microphone, with a tiny diaphragm less than a millimeter wide.
✅ Do some research: What microphones do you have around you - either in your computer, your phone, your headset, or in other devices? What type of microphones are they?
### Digital audio
Audio is an analog signal carrying very fine-grained information. To convert this signal to digital, the audio needs to be sampled many thousands of times a second.
> 🎓 Sampling is converting the audio signal into a digital value that represents the signal at that point in time.
![A line chart showing a signal, with discrete points at fixed intervals](../../../images/sampling.png)
Digital audio is sampled using Pulse Code Modulation, or PCM. PCM involves reading the voltage of the signal, and selecting the closest discrete value to that voltage using a defined size.
> 💁 You can think of PCM as the sensor version of pulse width modulation, or PWM (PWM was covered back in [lesson 3 of the getting started project](../../../1-getting-started/lessons/3-sensors-and-actuators/README.md#pulse-width-modulation)). PCM involves converting an analog signal to digital, PWM involves converting a digital signal to analog.
For example, most streaming music services offer 16-bit or 24-bit audio. This means they convert the voltage into a value that fits into a 16-bit integer, or 24-bit integer. 16-bit audio fits the value into a number ranging from -32,768 to 32,767; 24-bit is in the range -8,388,608 to 8,388,607. The more bits, the closer the sample is to what our ears actually hear.
> 💁 You may have heard of 8-bit audio, often referred to as LoFi. This is audio sampled using only 8 bits, so -128 to 127. The first computer audio was limited to 8 bits due to hardware limitations, so this is often seen in retro gaming.
These samples are taken many thousands of times per second, using well-defined sample rates measured in KHz (thousands of readings per second). Streaming music services use 48KHz for most audio, but some 'lossless' audio uses up to 96KHz or even 192KHz. The higher the sample rate, the closer to the original the audio will be, up to a point. There is debate whether humans can tell the difference above 48KHz.
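To make sampling and quantization concrete, here is a small Python sketch (illustrative only, not part of the lesson code) that 'samples' an idealized sine wave the way PCM samples a microphone signal: reading the signal level at a fixed rate and storing each reading as a 16-bit integer.

```python
import math

rate = 48000                      # sample rate in Hz (48KHz)
bits = 16                         # sample size in bits
max_value = 2 ** (bits - 1) - 1   # 32,767 - the largest 16-bit sample value

def sample_wave(frequency, duration_seconds):
    # Sample an idealized analog sine wave at the given sample rate,
    # quantizing each reading to the nearest 16-bit integer value
    count = int(rate * duration_seconds)
    samples = []
    for n in range(count):
        # The analog signal level at this instant, between -1.0 and 1.0
        analog = math.sin(2 * math.pi * frequency * n / rate)
        # PCM: pick the closest discrete value that fits in 16 bits
        samples.append(int(round(analog * max_value)))
    return samples

# 1 millisecond of a 440Hz tone at 48KHz gives 48 samples
samples = sample_wave(440, 0.001)
```

More bits per sample means smaller rounding errors in the quantization step, and a higher `rate` means more readings per second - exactly the two quality knobs described above.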
✅ Do some research: If you use a streaming music service, what sample rate and size does it use? If you use CDs, what is the sample rate and size of CD audio?
## Capture audio from your IoT device
Your IoT device can be connected to a microphone to capture audio, ready for conversion to text. It can also be connected to speakers to output audio. In later lessons this will be used to give audio feedback, but it is useful to set up speakers now to test the microphone.
### Task - configure your microphone and speakers
Work through the relevant guide to configure the microphone and speakers for your IoT device:
* [Arduino - Wio Terminal](wio-terminal-microphone.md)
* [Single-board computer - Raspberry Pi](pi-microphone.md)
* [Single-board computer - Virtual device](virtual-device-microphone.md)
### Task - capture audio
Work through the relevant guide to capture audio on your IoT device:
* [Arduino - Wio Terminal](wio-terminal-audio.md)
* [Single-board computer - Raspberry Pi](pi-audio.md)
* [Single-board computer - Virtual device](virtual-device-audio.md)
## Speech to text
Speech to text, or speech recognition, involves using AI to convert words in an audio signal to text.
### Speech recognition models
To convert speech to text, samples from the audio signal are grouped together and fed into a machine learning model based around a Recurrent Neural Network (RNN). This is a type of machine learning model that can use previous data to make a decision about incoming data. For example, the RNN could detect one block of audio samples as the sound 'Hel', and when it receives another that it thinks is the sound 'lo', it can combine this with the previous sound, find that 'Hello' is a valid word and select that as the outcome.
ML models always accept data of the same size. The image classifier you built in an earlier lesson resizes images to a fixed size and processes them. The same is true for speech models: they have to process fixed-size audio chunks. Speech models also need to be able to combine the outputs of multiple predictions to get the answer, allowing them to distinguish between 'Hi' and 'Highway', or 'flock' and 'floccinaucinihilipilification'.
Speech models are also advanced enough to understand context, and can correct the words they detect as more sounds are processed. For example, if you say "I went to the shops to get two bananas and an apple too", you would use three words that sound the same, but are spelled differently - to, two and too. Speech models are able to understand the context and use the appropriate spelling of the word.
> 💁 Some speech services allow customization to make them work better in noisy environments such as factories, or with industry-specific words such as chemical names. These customizations are trained by providing sample audio and a transcription, and work using transfer learning, the same as how you trained an image classifier using only a few images in an earlier lesson.
### Privacy
When using speech to text in a consumer IoT device, privacy is incredibly important. These devices listen to audio continuously, so as a consumer you don't want everything you say being sent to the cloud and converted to text. Not only will this use a lot of Internet bandwidth, it also has massive privacy implications, especially when some smart device makers randomly select audio for [humans to validate against the text generated to help improve their model](https://www.theverge.com/2019/4/10/18305378/amazon-alexa-ai-voice-assistant-annotation-listen-private-recordings).
You only want your smart device to send audio to the cloud for processing when you are using it, not when it hears audio in your home, audio that could include private meetings or intimate interactions. The way most smart devices work is with a *wake word*, a key phrase such as "Alexa", "Hey Siri", or "OK Google" that causes the device to 'wake up' and listen to what you are saying up until it detects a break in your speech, indicating you have finished talking to the device.
> 🎓 Wake word detection is also referred to as *Keyword spotting* or *Keyword recognition*.
These wake words are detected on the device, not in the cloud. These smart devices have small AI models that run on the device and listen for the wake word. When it is detected, they start streaming the audio to the cloud for recognition. These models are very specialized, and just listen for the wake word.
> 💁 Some tech companies are adding more privacy to their devices and doing some of the speech to text conversion on the device. Apple have announced that as part of their 2021 iOS and macOS updates they will support the speech to text conversion on device, and be able to handle many requests without needing to use the cloud. This is thanks to having powerful processors in their devices that can run ML models.
✅ What do you think are the privacy and ethical implications of storing the audio sent to the cloud? Should this audio be stored, and if so, how? Do you think the use of recordings for law enforcement is a good trade-off for the loss of privacy?
Wake word detection usually uses a technique known as TinyML - converting ML models so they are able to run on microcontrollers. These models are small in size, and consume very little power to run.
To avoid the complexity of training and using a wake word model, the smart timer you are building in this lesson will use a button to turn on the speech recognition.
> 💁 If you want to try creating a wake word detection model to run on the Wio Terminal or Raspberry Pi, check out this [Responding to your voice tutorial by Edge Impulse](https://docs.edgeimpulse.com/docs/responding-to-your-voice). If you want to use your computer to do this, you can try the [Get started with Custom Keyword quickstart on the Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/speech-service/keyword-recognition-overview?WT.mc_id=academic-17441-jabenn).
## Convert speech to text
Just like with image classification in the last project, there are pre-built AI services that can take speech as an audio file and convert it to text. One such service is the Speech Service, part of the Cognitive Services, pre-built AI services you can use in your apps.
### Task - configure a speech AI resource
1. Create a Resource Group for this project called `smart-timer`
1. Use the following command to create a free speech resource:
```sh
az cognitiveservices account create --name smart-timer \
--resource-group smart-timer \
--kind SpeechServices \
--sku F0 \
--yes \
--location <location>
```
Replace `<location>` with the location you used when creating the Resource Group.
1. You will need an API key to access the speech resource from your code. Run the following command to get the key:
```sh
az cognitiveservices account keys list --name smart-timer \
--resource-group smart-timer \
--output table
```
Take a copy of one of the keys.
### Task - convert speech to text
Work through the relevant guide to convert speech to text on your IoT device:
* [Arduino - Wio Terminal](wio-terminal-speech-to-text.md)
* [Single-board computer - Raspberry Pi](pi-speech-to-text.md)
* [Single-board computer - Virtual device](virtual-device-speech-to-text.md)
### Task - send converted speech to an IoT service
To use the results of the speech to text conversion, you need to send them to the cloud. There they will be interpreted, and responses will be sent back to the IoT device as commands.
1. Create a new IoT Hub in the `smart-timer` resource group, and register a new device called `smart-timer`.
1. Connect your IoT device to this IoT Hub using what you have learned in previous lessons, and send the speech as telemetry. Use a JSON document in this format:
```json
{
    "speech" : "<converted speech>"
}
```
Where `<converted speech>` is the output from the speech to text call. You only need to send speech that has content; if the call returns an empty string, it can be ignored.
1. Verify that messages are being sent by monitoring the Event Hub compatible endpoint using the `az iot hub monitor-events` command.
> 💁 You can find this code in the [code-iot-hub/virtual-iot-device](code-iot-hub/virtual-iot-device), [code-iot-hub/pi](code-iot-hub/pi), or [code-iot-hub/wio-terminal](code-iot-hub/wio-terminal) folder.
---
## 🚀 Challenge
Speech recognition has been around for a long time, and is continuously improving. Research the current capabilities and compare how these have evolved over time, including how accurate machine transcriptions are compared to those of humans.
What do you think the future holds for speech recognition?
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/34)
## Review & Self Study
* Read about the different microphone types and how they work on the [What's the difference between dynamic and condenser microphones article on Musician's HQ](https://musicianshq.com/whats-the-difference-between-dynamic-and-condenser-microphones/).
* Read more on the Cognitive Services speech service on the [Speech service documentation on Microsoft Docs](https://docs.microsoft.com/azure/cognitive-services/speech-service/?WT.mc_id=academic-17441-jabenn)
* Read about keyword spotting on the [Keyword recognition documentation on Microsoft Docs](https://docs.microsoft.com/azure/cognitive-services/speech-service/keyword-recognition-overview?WT.mc_id=academic-17441-jabenn)
## Assignment
[Assignment](assignment.md)
@ -0,0 +1,9 @@
#
## Instructions
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| | | | |
@ -0,0 +1,93 @@
import io
import json
import pyaudio
import requests
import time
import wave

from azure.iot.device import IoTHubDeviceClient, Message
from grove.factory import Factory

button = Factory.getButton('GPIO-HIGH', 5)
audio = pyaudio.PyAudio()

microphone_card_number = 1
speaker_card_number = 1
rate = 48000

def capture_audio():
    stream = audio.open(format = pyaudio.paInt16,
                        rate = rate,
                        channels = 1,
                        input_device_index = microphone_card_number,
                        input = True,
                        frames_per_buffer = 4096)

    frames = []

    while button.is_pressed():
        frames.append(stream.read(4096))

    stream.stop_stream()
    stream.close()

    wav_buffer = io.BytesIO()
    with wave.open(wav_buffer, 'wb') as wavefile:
        wavefile.setnchannels(1)
        wavefile.setsampwidth(audio.get_sample_size(pyaudio.paInt16))
        wavefile.setframerate(rate)
        wavefile.writeframes(b''.join(frames))

    wav_buffer.seek(0)
    return wav_buffer

api_key = '<key>'
location = '<location>'
language = '<language>'
connection_string = '<connection_string>'

device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)

print('Connecting')
device_client.connect()
print('Connected')

def get_access_token():
    headers = {
        'Ocp-Apim-Subscription-Key': api_key
    }

    token_endpoint = f'https://{location}.api.cognitive.microsoft.com/sts/v1.0/issuetoken'
    response = requests.post(token_endpoint, headers=headers)
    return str(response.text)

def convert_speech_to_text(buffer):
    url = f'https://{location}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1'

    headers = {
        'Authorization': 'Bearer ' + get_access_token(),
        'Content-Type': f'audio/wav; codecs=audio/pcm; samplerate={rate}',
        'Accept': 'application/json;text/xml'
    }

    params = {
        'language': language
    }

    response = requests.post(url, headers=headers, params=params, data=buffer)
    response_json = json.loads(response.text)

    if response_json['RecognitionStatus'] == 'Success':
        return response_json['DisplayText']
    else:
        return ''

while True:
    while not button.is_pressed():
        time.sleep(.1)

    buffer = capture_audio()
    text = convert_speech_to_text(buffer)
    if len(text) > 0:
        message = Message(json.dumps({ 'speech': text }))
        device_client.send_message(message)
@ -0,0 +1,33 @@
import json
import time

from azure.cognitiveservices.speech import SpeechConfig, SpeechRecognizer
from azure.iot.device import IoTHubDeviceClient, Message

api_key = '<key>'
location = '<location>'
language = '<language>'
connection_string = '<connection_string>'

device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)

print('Connecting')
device_client.connect()
print('Connected')

speech_config = SpeechConfig(subscription=api_key,
                             region=location,
                             speech_recognition_language=language)
recognizer = SpeechRecognizer(speech_config=speech_config)

def recognized(args):
    if len(args.result.text) > 0:
        message = Message(json.dumps({ 'speech': args.result.text }))
        device_client.send_message(message)

recognizer.recognized.connect(recognized)

recognizer.start_continuous_recognition()

while True:
    time.sleep(1)
@ -0,0 +1,61 @@
import io
import pyaudio
import time
import wave

from grove.factory import Factory

button = Factory.getButton('GPIO-HIGH', 5)
audio = pyaudio.PyAudio()

microphone_card_number = 1
speaker_card_number = 1
rate = 48000

def capture_audio():
    stream = audio.open(format = pyaudio.paInt16,
                        rate = rate,
                        channels = 1,
                        input_device_index = microphone_card_number,
                        input = True,
                        frames_per_buffer = 4096)

    frames = []

    while button.is_pressed():
        frames.append(stream.read(4096))

    stream.stop_stream()
    stream.close()

    wav_buffer = io.BytesIO()
    with wave.open(wav_buffer, 'wb') as wavefile:
        wavefile.setnchannels(1)
        wavefile.setsampwidth(audio.get_sample_size(pyaudio.paInt16))
        wavefile.setframerate(rate)
        wavefile.writeframes(b''.join(frames))

    wav_buffer.seek(0)
    return wav_buffer

def play_audio(buffer):
    stream = audio.open(format = pyaudio.paInt16,
                        rate = rate,
                        channels = 1,
                        output_device_index = speaker_card_number,
                        output = True)

    with wave.open(buffer, 'rb') as wf:
        data = wf.readframes(4096)

        while len(data) > 0:
            stream.write(data)
            data = wf.readframes(4096)

    stream.close()

while True:
    while not button.is_pressed():
        time.sleep(.1)

    buffer = capture_audio()
    play_audio(buffer)
@ -0,0 +1,82 @@
import io
import json
import pyaudio
import requests
import time
import wave

from grove.factory import Factory

button = Factory.getButton('GPIO-HIGH', 5)
audio = pyaudio.PyAudio()

microphone_card_number = 1
speaker_card_number = 1
rate = 48000

def capture_audio():
    stream = audio.open(format = pyaudio.paInt16,
                        rate = rate,
                        channels = 1,
                        input_device_index = microphone_card_number,
                        input = True,
                        frames_per_buffer = 4096)

    frames = []

    while button.is_pressed():
        frames.append(stream.read(4096))

    stream.stop_stream()
    stream.close()

    wav_buffer = io.BytesIO()
    with wave.open(wav_buffer, 'wb') as wavefile:
        wavefile.setnchannels(1)
        wavefile.setsampwidth(audio.get_sample_size(pyaudio.paInt16))
        wavefile.setframerate(rate)
        wavefile.writeframes(b''.join(frames))

    wav_buffer.seek(0)
    return wav_buffer

api_key = '<key>'
location = '<location>'
language = '<language>'

def get_access_token():
    headers = {
        'Ocp-Apim-Subscription-Key': api_key
    }

    token_endpoint = f'https://{location}.api.cognitive.microsoft.com/sts/v1.0/issuetoken'
    response = requests.post(token_endpoint, headers=headers)
    return str(response.text)

def convert_speech_to_text(buffer):
    url = f'https://{location}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1'

    headers = {
        'Authorization': 'Bearer ' + get_access_token(),
        'Content-Type': f'audio/wav; codecs=audio/pcm; samplerate={rate}',
        'Accept': 'application/json;text/xml'
    }

    params = {
        'language': language
    }

    response = requests.post(url, headers=headers, params=params, data=buffer)
    response_json = json.loads(response.text)

    if response_json['RecognitionStatus'] == 'Success':
        return response_json['DisplayText']
    else:
        return ''

while True:
    while not button.is_pressed():
        time.sleep(.1)

    buffer = capture_audio()
    text = convert_speech_to_text(buffer)
    print(text)
@ -0,0 +1,22 @@
import time

from azure.cognitiveservices.speech import SpeechConfig, SpeechRecognizer

api_key = '<key>'
location = '<location>'
language = '<language>'

speech_config = SpeechConfig(subscription=api_key,
                             region=location,
                             speech_recognition_language=language)
recognizer = SpeechRecognizer(speech_config=speech_config)

def recognized(args):
    print(args.result.text)

recognizer.recognized.connect(recognized)

recognizer.start_continuous_recognition()

while True:
    time.sleep(1)
@ -0,0 +1,213 @@
# Capture audio - Raspberry Pi
In this part of the lesson, you will write code to capture audio on your Raspberry Pi. Audio capture will be controlled by a button.
## Hardware
The Raspberry Pi needs a button to control the audio capture.
The button you will use is a Grove button. This is a digital sensor that turns a signal on or off. These buttons can be configured to send a high signal when the button is pressed, and low when it is not, or low when pressed and high when not.
If you are using a ReSpeaker 2-Mics Pi HAT as a microphone, then there is no need to connect a button as this hat has one fitted already. Skip to the next section.
### Connect the button
The button can be connected to the Grove base hat.
#### Task - connect the button
![A grove button](../../../images/grove-button.png)
1. Insert one end of a Grove cable into the socket on the button module. It will only go in one way round.
1. With the Raspberry Pi powered off, connect the other end of the Grove cable to the digital socket marked **D5** on the Grove Base hat attached to the Pi. This socket is the second from the left, on the row of sockets next to the GPIO pins.
![The grove button connected to socket D5](../../../images/pi-button.png)
## Capture audio
You can capture audio from the microphone using Python code.
### Task - capture audio
1. Power up the Pi and wait for it to boot
1. Launch VS Code, either directly on the Pi, or connect via the Remote SSH extension.
1. The PyAudio Pip package has functions to record and play back audio. This package depends on some audio libraries that need to be installed first. Run the following commands in the terminal to install these:
```sh
sudo apt update
sudo apt install libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev libasound2-plugins --yes
```
1. Install the PyAudio Pip package.
```sh
pip3 install pyaudio
```
1. Create a new folder called `smart-timer` and add a file called `app.py` to this folder.
1. Add the following imports to the top of this file:
```python
import io
import pyaudio
import time
import wave
from grove.factory import Factory
```
This imports the `pyaudio` module, some standard Python modules to handle wave files, and the `grove.factory` module to import a `Factory` to create a button class.
1. Below this, add code to create a Grove button.
If you are using the ReSpeaker 2-Mics Pi HAT, use the following code:
```python
# The button on the ReSpeaker 2-Mics Pi HAT
button = Factory.getButton("GPIO-LOW", 17)
```
This creates a button on port **D17**, the port that the button on the ReSpeaker 2-Mics Pi HAT is connected to. This button is set to send a low signal when pressed.
If you are not using the ReSpeaker 2-Mics Pi HAT, and are using a Grove button connected to the base hat, use this code.
```python
button = Factory.getButton("GPIO-HIGH", 5)
```
This creates a button on port **D5** that is set to send a high signal when pressed.
1. Below this, create an instance of the PyAudio class to handle audio:
```python
audio = pyaudio.PyAudio()
```
1. Declare the hardware card number for the microphone and speaker. This will be the number of the card you found by running `arecord -l` and `aplay -l` earlier in this lesson.
```python
microphone_card_number = <microphone card number>
speaker_card_number = <speaker card number>
```
Replace `<microphone card number>` with the number of your microphone's card.
Replace `<speaker card number>` with the number of your speaker's card, the same number you set in the `alsa.conf` file.
1. Below this, declare the sample rate to use for the audio capture and playback. You may need to change this depending on the hardware you are using.
```python
rate = 48000 #48KHz
```
If you get sample rate errors when running this code later, change this value to `44100` or `16000`. The higher the value, the better the quality of the sound.
1. Below this, create a new function called `capture_audio`. This will be called to capture audio from the microphone:
```python
def capture_audio():
```
1. Inside this function, add the following to capture the audio:
    ```python
    stream = audio.open(format = pyaudio.paInt16,
                        rate = rate,
                        channels = 1,
                        input_device_index = microphone_card_number,
                        input = True,
                        frames_per_buffer = 4096)

    frames = []

    while button.is_pressed():
        frames.append(stream.read(4096))

    stream.stop_stream()
    stream.close()
    ```
This code opens an audio input stream using the PyAudio object. This stream will capture audio from the microphone at the sample rate set in the `rate` variable, capturing it in buffers of 4096 frames.
The code then loops whilst the Grove button is pressed, reading these buffers into an array each time.
> 💁 You can read more on the options passed to the `open` method in the [PyAudio documentation](https://people.csail.mit.edu/hubert/pyaudio/docs/).
Once the button is released, the stream is stopped and closed.
1. Add the following to the end of this function:
    ```python
    wav_buffer = io.BytesIO()
    with wave.open(wav_buffer, 'wb') as wavefile:
        wavefile.setnchannels(1)
        wavefile.setsampwidth(audio.get_sample_size(pyaudio.paInt16))
        wavefile.setframerate(rate)
        wavefile.writeframes(b''.join(frames))

    wav_buffer.seek(0)
    return wav_buffer
    ```
This code creates a binary buffer, and writes all the captured audio to it as a [WAV file](https://wikipedia.org/wiki/WAV). This is a standard way to write uncompressed audio to a file. This buffer is then returned.
1. Add the following `play_audio` function to play back the audio buffer:
    ```python
    def play_audio(buffer):
        stream = audio.open(format = pyaudio.paInt16,
                            rate = rate,
                            channels = 1,
                            output_device_index = speaker_card_number,
                            output = True)

        with wave.open(buffer, 'rb') as wf:
            data = wf.readframes(4096)

            while len(data) > 0:
                stream.write(data)
                data = wf.readframes(4096)

        stream.close()
    ```
This function opens another audio stream, this time for output - to play the audio. It uses the same settings as the input stream. The buffer is then opened as a wave file and written to the output stream in 4096 byte chunks, playing the audio. The stream is then closed.
1. Add the following code below the `capture_audio` function to loop until the button is pressed. Once the button is pressed, the audio is captured, then played.
    ```python
    while True:
        while not button.is_pressed():
            time.sleep(.1)

        buffer = capture_audio()
        play_audio(buffer)
    ```
1. Run the code. Press the button and speak into the microphone. Release the button when you are done, and you will hear the recording.
You may get some ALSA errors when the PyAudio instance is created. This is due to configuration on the Pi for audio devices you don't have. You can ignore these errors.
```output
pi@raspberrypi:~/smart-timer $ python3 app.py
ALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.front
ALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
```
If you get the following error:
```output
OSError: [Errno -9997] Invalid sample rate
```
then change the `rate` to either 44100 or 16000.
> 💁 You can find this code in the [code-record/pi](code-record/pi) folder.
😀 Your audio recording program was a success!
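The in-memory WAV writing used by the `capture_audio` function can be sketched and checked in isolation. This example (illustrative only, using synthetic silence rather than microphone data) packs raw 16-bit PCM samples into a WAV buffer and reads the format back:

```python
import io
import struct
import wave

rate = 48000  # the same sample rate used for capture

def build_wav_buffer(frames, channels=1, sample_width=2):
    # Write raw PCM frames into an in-memory WAV file, the same
    # approach capture_audio uses with the recorded buffers
    wav_buffer = io.BytesIO()
    with wave.open(wav_buffer, 'wb') as wavefile:
        wavefile.setnchannels(channels)
        wavefile.setsampwidth(sample_width)
        wavefile.setframerate(rate)
        wavefile.writeframes(frames)
    wav_buffer.seek(0)
    return wav_buffer

# 100 silent 16-bit samples packed as little-endian signed shorts
silence = struct.pack('<100h', *([0] * 100))
buffer = build_wav_buffer(silence)

# Read the buffer back to confirm the WAV header matches the settings
with wave.open(buffer, 'rb') as wf:
    print(wf.getframerate(), wf.getnframes())  # 48000 100
```

Because the WAV header records the channel count, sample width, and sample rate, any player or speech service that receives this buffer knows exactly how to interpret the raw PCM data that follows.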
@ -0,0 +1,143 @@
# Configure your microphone and speakers - Raspberry Pi
In this part of the lesson, you will add a microphone and speakers to your Raspberry Pi.
## Hardware
The Raspberry Pi needs a microphone.
The Pi doesn't have a microphone built in, so you will need to add an external microphone. There are multiple ways to do this:
* USB microphone
* USB headset
* USB all in one speakerphone
* USB audio adapter and microphone with a 3.5mm jack
* [ReSpeaker 2-Mics Pi HAT](https://www.seeedstudio.com/ReSpeaker-2-Mics-Pi-HAT.html)
> 💁 Not all Bluetooth microphones are supported on the Raspberry Pi, so if you have a Bluetooth microphone or headset, you may have issues pairing or capturing audio.
Raspberry Pis come with a 3.5mm headphone jack. You can use this to connect headphones, a headset or a speaker. You can also add speakers using:
* HDMI audio through a monitor or TV
* USB speakers
* USB headset
* USB all in one speakerphone
* [ReSpeaker 2-Mics Pi HAT](https://www.seeedstudio.com/ReSpeaker-2-Mics-Pi-HAT.html) with a speaker attached, either to the 3.5mm jack or to the JST port
## Connect and configure the microphone and speakers
The microphone and speakers need to be connected, and configured.
### Task - connect and configure the microphone
1. Connect the microphone using the appropriate method. For example, connect it via one of the USB ports.
1. If you are using the ReSpeaker 2-Mics Pi HAT, you can remove the Grove base hat, then fit the ReSpeaker hat in its place.
![A raspberry pi with a ReSpeaker hat](../../../images/pi-respeaker-hat.png)
You will need a Grove button later in this lesson, but one is built into this hat, so the Grove base hat is not needed.
Once the hat is fitted, you will need to install some drivers. Refer to the [Seeed getting started instructions](https://wiki.seeedstudio.com/ReSpeaker_2_Mics_Pi_HAT_Raspberry/#getting-started) for driver installation instructions.
> ⚠️ The instructions use `git` to clone a repository. If you don't have `git` installed on your Pi, you can install it by running the following command:
>
> ```sh
> sudo apt install git --yes
> ```
1. Run the following command in your Terminal either on the Pi, or connected using VS Code and a remote SSH session to see information about the connected microphone:
```sh
arecord -l
```
You will see a list of connected microphones. It will be something like the following:
```output
pi@raspberrypi:~ $ arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: M0 [eMeet M0], device 0: USB Audio [USB Audio]
Subdevices: 1/1
Subdevice #0: subdevice #0
```
Assuming you only have one microphone, you should only see one entry. Configuration of mics can be tricky on Linux, so it is easiest to only use one microphone and unplug any others.
Note down the card number, as you will need this later. In the output above the card number is 1.
### Task - connect and configure the speaker
1. Connect the speakers using the appropriate method.
1. Run the following command in your Terminal either on the Pi, or connected using VS Code and a remote SSH session to see information about the connected speakers:
```sh
aplay -l
```
You will see a list of connected speakers. It will be something like the following:
```output
pi@raspberrypi:~ $ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Headphones [bcm2835 Headphones], device 0: bcm2835 Headphones [bcm2835 Headphones]
Subdevices: 8/8
Subdevice #0: subdevice #0
Subdevice #1: subdevice #1
Subdevice #2: subdevice #2
Subdevice #3: subdevice #3
Subdevice #4: subdevice #4
Subdevice #5: subdevice #5
Subdevice #6: subdevice #6
Subdevice #7: subdevice #7
card 1: M0 [eMeet M0], device 0: USB Audio [USB Audio]
Subdevices: 1/1
Subdevice #0: subdevice #0
```
You will always see `card 0: Headphones` as this is the built-in headphone jack. If you have added additional speakers, such as a USB speaker, you will see this listed as well.
1. If you are using an additional speaker, and not a speaker or headphones connected to the built-in headphone jack, you need to configure it as the default. To do this run the following command:
```sh
sudo nano /usr/share/alsa/alsa.conf
```
This will open a configuration file in `nano`, a terminal-based text editor. Scroll down using the arrow keys on your keyboard until you find the following line:
```output
defaults.pcm.card 0
```
Change the value from `0` to the card number of the card you want to use from the list that came back from the call to `aplay -l`. For example, in the output above there is a second sound card called `card 1: M0 [eMeet M0], device 0: USB Audio [USB Audio]`, using card 1. To use this, I would update the line to be:
```output
defaults.pcm.card 1
```
Set this value to the appropriate card number. You can navigate to the number using the arrow keys on your keyboard, then delete and type the new number as normal when editing text files.
1. Save the changes and close the file by pressing `Ctrl+x`. Press `y` to save the file, then `return` to select the file name.
### Task - test the microphone and speaker
1. Run the following command to record 5 seconds of audio through the microphone:
```sh
arecord --format=S16_LE --duration=5 --rate=16000 --file-type=wav out.wav
```
Whilst this command is running, make noise into the microphone such as by speaking, singing, beat boxing, playing an instrument or whatever takes your fancy.
1. After 5 seconds, the recording will stop. Run the following command to play back the audio:
```sh
aplay --format=S16_LE --rate=16000 out.wav
```
You will hear the audio being played back through the speakers. Adjust the output volume on your speaker as necessary.
1. If you need to adjust the volume of the built-in microphone port, or adjust the gain of the microphone, you can use the `alsamixer` utility. You can read more on this utility on the [Linux alsamixer man page](https://linux.die.net/man/1/alsamixer).
1. If you get errors playing back the audio, check the card you set as the `defaults.pcm.card` in the `alsa.conf` file.

@ -0,0 +1,106 @@
# Speech to text - Raspberry Pi
In this part of the lesson, you will write code to convert speech in the captured audio to text using the speech service.
## Send the audio to the speech service
The audio can be sent to the speech service using the REST API. To use the speech service, first you need to request an access token, then use that token to access the REST API. These access tokens expire after 10 minutes, so your code should request them on a regular basis to ensure they are always up to date.
### Task - get an access token
1. Open the `smart-timer` project on your Pi.
1. Remove the `play_audio` function. This is no longer needed as you don't want a smart timer to repeat back to you what you said.
1. Add the following imports to the top of the `app.py` file:
```python
import requests
import json
```
1. Add the following code above the `while True` loop to declare some settings for the speech service:
```python
api_key = '<key>'
location = '<location>'
language = '<language>'
```
Replace `<key>` with the API key for your speech service. Replace `<location>` with the location you used when you created the speech service resource.
Replace `<language>` with the locale name for the language you will be speaking in, for example `en-GB` for English, or `zh-HK` for Cantonese. You can find a list of the supported languages and their locale names in the [Language and voice support documentation on Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/speech-service/language-support?WT.mc_id=academic-17441-jabenn#speech-to-text).
1. Below this, add the following function to get an access token:
```python
def get_access_token():
headers = {
'Ocp-Apim-Subscription-Key': api_key
}
token_endpoint = f'https://{location}.api.cognitive.microsoft.com/sts/v1.0/issuetoken'
response = requests.post(token_endpoint, headers=headers)
return str(response.text)
```
This calls a token issuing endpoint, passing the API key as a header. This call returns an access token that can be used to call the speech services.
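Because these tokens expire after 10 minutes, a common pattern is to cache the token and only request a new one when the cached one is close to expiry. The sketch below shows that pattern; the `TokenCache` class and its parameter names are illustrative, not part of any SDK:
```python
import time

class TokenCache:
    """Caches an access token and refreshes it before it expires."""
    def __init__(self, fetch_token, lifetime_seconds=600, refresh_margin=60):
        self._fetch_token = fetch_token    # function that requests a new token
        self._lifetime = lifetime_seconds  # speech tokens last 10 minutes
        self._margin = refresh_margin      # refresh this many seconds early
        self._token = None
        self._expires_at = 0

    def get(self):
        # Re-request the token if it is missing or close to expiry
        if self._token is None or time.time() > self._expires_at - self._margin:
            self._token = self._fetch_token()
            self._expires_at = time.time() + self._lifetime
        return self._token
```
You could then create `token_cache = TokenCache(get_access_token)` once, and call `token_cache.get()` wherever a token is needed.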
1. Below this, declare a function to convert speech in the captured audio to text using the REST API:
```python
def convert_speech_to_text(buffer):
```
1. Inside this function, set up the REST API URL and headers:
```python
url = f'https://{location}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1'
headers = {
'Authorization': 'Bearer ' + get_access_token(),
'Content-Type': f'audio/wav; codecs=audio/pcm; samplerate={rate}',
'Accept': 'application/json;text/xml'
}
params = {
'language': language
}
```
This builds a URL using the location of the speech services resource. It then populates the headers with the access token from the `get_access_token` function, as well as the sample rate used to capture the audio. Finally it defines some parameters to be passed with the URL containing the language in the audio.
1. Below this, add the following code to call the REST API and get back the text:
```python
response = requests.post(url, headers=headers, params=params, data=buffer)
response_json = json.loads(response.text)
if response_json['RecognitionStatus'] == 'Success':
return response_json['DisplayText']
else:
return ''
```
This calls the URL and decodes the JSON value that comes in the response. The `RecognitionStatus` value in the response indicates if the call was able to extract speech into text successfully, and if this is `Success` then the text is returned from the function, otherwise an empty string is returned.
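To see exactly what this code parses, here is a hypothetical response body (the field values are made up for illustration) run through the same success-check logic:
```python
import json

# A hypothetical response body from the speech REST API
sample_response = json.dumps({
    "RecognitionStatus": "Success",
    "DisplayText": "Set a 3 minute timer.",
    "Offset": 1500000,
    "Duration": 22800000
})

response_json = json.loads(sample_response)

# The same logic as convert_speech_to_text: return the text on success
if response_json['RecognitionStatus'] == 'Success':
    text = response_json['DisplayText']
else:
    text = ''

print(text)  # Set a 3 minute timer.
```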
1. Finally replace the call to `play_audio` in the `while True` loop with a call to the `convert_speech_to_text` function, as well as printing the text to the console:
```python
text = convert_speech_to_text(buffer)
print(text)
```
1. Run the code. Press the button and speak into the microphone. Release the button when you are done, and the audio will be converted to text and printed to the console.
```output
pi@raspberrypi:~/smart-timer $ python3 app.py
Hello world.
Welcome to IoT for beginners.
```
Try different types of sentences, along with sentences where words sound the same but have different meanings. For example, if you are speaking in English, say 'I want to buy two bananas and an apple too', and notice how it will use the correct 'to', 'two' and 'too' based on the context of the word, not just its sound.
> 💁 You can find this code in the [code-speech-to-text/pi](code-speech-to-text/pi) folder.
😀 Your speech to text program was a success!

@ -0,0 +1,3 @@
# Capture audio - Virtual IoT device
The Python libraries that you will be using later in this lesson to convert speech to text have built-in audio capture on Windows, macOS and Linux. You don't need to do anything here.

@ -0,0 +1,12 @@
# Configure your microphone and speakers - Virtual IoT Hardware
The virtual IoT hardware will use a microphone and speakers attached to your computer.
If your computer doesn't have a microphone and speakers built in, you will need to attach these using hardware of your choice, such as:
* USB microphone
* USB speakers
* Speakers built into your monitor and connected over HDMI
* Bluetooth headset
Refer to your hardware manufacturer's instructions to install and configure this hardware.

@ -0,0 +1,95 @@
# Speech to text - Virtual IoT device
In this part of the lesson, you will write code to convert speech captured from your microphone to text using the speech service.
## Convert speech to text
On Windows, Linux, and macOS, the speech services Python SDK can be used to listen to your microphone and convert any speech that is detected to text. It will listen continuously, detecting the audio levels and sending the speech for conversion to text when the audio level drops, such as at the end of a block of speech.
### Task - convert speech to text
1. Create a new Python app on your computer in a folder called `smart-timer` with a single file called `app.py` and a Python virtual environment.
1. Install the Pip package for the speech services. Make sure you are installing this from a terminal with the virtual environment activated.
```sh
pip install azure-cognitiveservices-speech
```
> ⚠️ If you get the following error:
>
> ```output
> ERROR: Could not find a version that satisfies the requirement azure-cognitiveservices-speech (from versions: none)
> ERROR: No matching distribution found for azure-cognitiveservices-speech
> ```
>
> You will need to update Pip. Do this with the following command, then try to install the package again
>
> ```sh
> pip install --upgrade pip
> ```
1. Add the following imports to the `app.py` file:
```python
import time
from azure.cognitiveservices.speech import SpeechConfig, SpeechRecognizer
```
This imports some classes used to recognize speech.
1. Add the following code to declare some configuration:
```python
api_key = '<key>'
location = '<location>'
language = '<language>'
speech_config = SpeechConfig(subscription=api_key,
region=location,
speech_recognition_language=language)
```
Replace `<key>` with the API key for your speech service. Replace `<location>` with the location you used when you created the speech service resource.
Replace `<language>` with the locale name for the language you will be speaking in, for example `en-GB` for English, or `zh-HK` for Cantonese. You can find a list of the supported languages and their locale names in the [Language and voice support documentation on Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/speech-service/language-support?WT.mc_id=academic-17441-jabenn#speech-to-text).
This configuration is then used to create a `SpeechConfig` object that will be used to configure the speech services.
1. Add the following code to create a speech recognizer:
```python
recognizer = SpeechRecognizer(speech_config=speech_config)
```
1. The speech recognizer runs on a background thread, listening for audio and converting any speech in it to text. You can get the text using a callback function - a function you define and pass to the recognizer. Every time speech is detected, the callback is called. Add the following code to define a callback that prints the text to the console, and pass this callback to the recognizer:
```python
def recognized(args):
print(args.result.text)
recognizer.recognized.connect(recognized)
```
1. The recognizer only starts listening when you explicitly start it. Add the following code to start the recognition. This runs in the background, so your application will also need an infinite loop that sleeps to keep the application running.
```python
recognizer.start_continuous_recognition()
while True:
time.sleep(1)
```
1. Run this app. Speak into your microphone, and the audio will be converted to text and output to the console.
```output
(.venv) ➜ smart-timer python3 app.py
Hello world.
Welcome to IoT for beginners.
```
Try different types of sentences, along with sentences where words sound the same but have different meanings. For example, if you are speaking in English, say 'I want to buy two bananas and an apple too', and notice how it will use the correct 'to', 'two' and 'too' based on the context of the word, not just its sound.
> 💁 You can find this code in the [code-speech-to-text/virtual-iot-device](code-speech-to-text/virtual-iot-device) folder.
😀 Your speech to text program was a success!

@ -0,0 +1,3 @@
# Capture audio - Wio Terminal
Coming soon!

@ -0,0 +1,3 @@
# Configure your microphone and speakers - Wio Terminal
Coming soon!

@ -0,0 +1,3 @@
# Speech to text - Wio Terminal
Coming soon!

@ -0,0 +1,431 @@
# Understand language
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/33)
## Introduction
In the last lesson you converted speech to text. For this to be used to program a smart timer, your code will need to have an understanding of what was said. You could assume the user will speak a fixed phrase, such as "Set a 3 minute timer", and parse that expression to get how long the timer should be, but this isn't very user-friendly. If a user were to say "Set a timer for 3 minutes", you or I would understand what they mean, but your code would not, as it would be expecting a fixed phrase.
This is where language understanding comes in, using AI models to interpret text and return the details that are needed, for example being able to take both "Set a 3 minute timer" and "Set a timer for 3 minutes", and understand that a timer is required for 3 minutes.
In this lesson you will learn about language understanding models, how to create them, train them, and use them from your code.
In this lesson we'll cover:
* [Language understanding](#language-understanding)
* [Create a language understanding model](#create-a-language-understanding-model)
* [Intents and entities](#intents-and-entities)
* [Use the language understanding model](#use-the-language-understanding-model)
## Language understanding
Humans have used language to communicate for hundreds of thousands of years. We communicate with words, sounds, or actions and understand what is said: both the meaning of the words, sounds, or actions, and their context. We understand sincerity and sarcasm, allowing the same words to mean different things depending on the tone of our voice.
✅ Think about some of the conversations you have had recently. How much of the conversation would be hard for a computer to understand because it needs context?
Language understanding, also called natural-language understanding, is part of a field of artificial intelligence called natural-language processing (or NLP), and deals with reading comprehension, trying to understand the details of words or sentences. If you use a voice assistant such as Alexa or Siri, you have used language understanding services. These are the behind-the-scenes AI services that convert "Alexa, play the latest album by Taylor Swift" into my daughter dancing around the living room to her favorite tunes.
> 💁 Computers, despite all their advances, still have a long way to go to truly understand text. When we refer to language understanding with computers, we don't mean anything anywhere near as advanced as human communication, instead we mean taking some words and extracting key details.
As humans, we understand language without really thinking about it. If I asked another human to "play the latest album by Taylor Swift" then they would instinctively know what I meant. For a computer, this is harder. It would have to take the words, converted from speech to text, and work out the following pieces of information:
* Music needs to be played
* The music is by the artist Taylor Swift
* The specific music is a whole album of multiple tracks in order
* Taylor Swift has many albums, so they need to be sorted by chronological order and the most recently published is the one required
✅ Think of some other sentences you have spoken when making requests, such as ordering coffee or asking a family member to pass you something. Try to break them down into the pieces of information a computer would need to extract to understand the sentence.
Language understanding models are AI models that are trained to extract certain details from language, and then are trained for specific tasks using transfer learning, in the same way you trained a Custom Vision model using a small set of images. You can take a model, then train it using the text you want it to understand.
## Create a language understanding model
![The LUIS logo](../../../images/luis-logo.png)
You can create language understanding models using LUIS, a language understanding service from Microsoft that is part of Cognitive Services.
### Task - create an authoring resource
To use LUIS, you need to create an authoring resource.
1. Use the following command to create an authoring resource in your `smart-timer` resource group:
```python
az cognitiveservices account create --name smart-timer-luis-authoring \
--resource-group smart-timer \
--kind LUIS.Authoring \
--sku F0 \
--yes \
--location <location>
```
Replace `<location>` with the location you used when creating the Resource Group.
> ⚠️ LUIS isn't available in all regions, so if you get the following error:
>
> ```output
> InvalidApiSetId: The account type 'LUIS.Authoring' is either invalid or unavailable in given region.
> ```
>
> pick a different region.
This will create a free-tier LUIS authoring resource.
### Task - create a language understanding app
1. Open the LUIS portal at [luis.ai](https://luis.ai?WT.mc_id=academic-17441-jabenn) in your browser, and sign in with the same account you have been using for Azure.
1. Follow the instructions on the dialog to select your Azure subscription, then select the `smart-timer-luis-authoring` resource you have just created.
1. From the *Conversation apps* list, select the **New app** button to create a new application. Name the new app `smart-timer`, and set the *Culture* to your language.
> 💁 There is a field for a prediction resource. You can create a second resource just for prediction, but the free authoring resource allows 1,000 predictions a month which should be enough for development, so you can leave this blank.
1. Read through the guide that appears once you create the app to get an understanding of the steps you need to take to train the language understanding model. Close this guide when you are done.
## Intents and entities
Language understanding is based around *intents* and *entities*. Intents describe the intention behind the words, for example playing music, setting a timer, or ordering food. Entities are what the intent refers to, such as the album, the length of the timer, or the type of food. Each sentence that the model interprets should have at least one intent, and optionally one or more entities.
Some examples:
| Sentence | Intent | Entities |
| --------------------------------------------------- | ---------------- | ------------------------------------------ |
| "Play the latest album by Taylor Swift" | *play music* | *the latest album by Taylor Swift* |
| "Set a 3 minute timer" | *set a timer* | *3 minutes* |
| "Cancel my timer" | *cancel a timer* | None |
| "Order 3 large pineapple pizzas and a caesar salad" | *order food* | *3 large pineapple pizzas*, *caesar salad* |
✅ With the sentences you thought about earlier, what would be the intent and any entities in that sentence?
To train LUIS, first you set up the entities. These can be a fixed list of terms, or learned from the text. For example, you could provide a fixed list of food available from your menu, with variations (or synonyms) of each word, such as *egg plant* and *eggplant* as variations of *aubergine*. LUIS also has pre-built entities that can be used, such as numbers and locations.
For setting a timer, you could have one entity using the pre-built number entities for the time, and another for the units, such as minutes and seconds. Each unit would have multiple variations to cover the singular and plural forms - such as minute and minutes.
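The way a list entity maps synonyms back to normalized values can be sketched in a few lines of Python. This is an illustration of the concept only, not how LUIS is implemented internally:
```python
# Normalized values and their synonyms, as configured in the LUIS portal
time_unit_entity = {
    'minute': ['minute', 'minutes'],
    'second': ['second', 'seconds'],
}

# Build a lookup from each synonym to its normalized value
synonym_lookup = {
    synonym: normalized
    for normalized, synonyms in time_unit_entity.items()
    for synonym in synonyms
}

print(synonym_lookup['minutes'])  # minute
print(synonym_lookup['seconds'])  # second
```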
Once the entities are defined, you create intents. These are learned by the model based on example sentences that you provide (known as utterances). For example, for a *set timer* intent, you might provide the following sentences:
* `set a 1 second timer`
* `set a timer for 1 minute and 12 seconds`
* `set a timer for 3 minutes`
* `set a 9 minute 30 second timer`
You then tell LUIS what parts of these sentences map to the entities:
![The sentence set a timer for 1 minute and 12 seconds broken into entities](../../../images/sentence-as-intent-entities.png)
The sentence `set a timer for 1 minute and 12 seconds` has the intent of `set timer`. It also has 2 entities with 2 values each:
| | time | unit |
| ---------- | ---: | ------ |
| 1 minute | 1 | minute |
| 12 seconds | 12 | second |
To train a good model, you need a range of different example sentences to cover the many different ways someone might ask for the same thing.
> 💁 As with any AI model, the more data and the more accurate the data you use to train, the better the model.
✅ Think about the different ways you might ask the same thing and expect a human to understand.
### Task - add entities to the language understanding models
For the timer, you need to add 2 entities - one for the unit of time (minutes or seconds), and one for the number of minutes or seconds.
You can find instructions for using the LUIS portal in the [Quickstart: Build your app in LUIS portal documentation on Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/luis/luis-get-started-create-app?WT.mc_id=academic-17441-jabenn).
1. From the LUIS portal, select the *Entities* tab and add the *number* prebuilt entity by selecting the **Add prebuilt entity** button, then selecting *number* from the list.
1. Create a new entity for the time unit using the **Create** button. Name the entity `time unit` and set the type to *List*. Add values for `minute` and `second` to the *Normalized values* list, adding the singular and plural forms to the *synonyms* list. Press `return` after adding each synonym to add it to the list.
| Normalized value | Synonyms |
| ---------------- | --------------- |
| minute | minute, minutes |
| second | second, seconds |
### Task - add intents to the language understanding models
1. From the *Intents* tab, select the **Create** button to create a new intent. Name this intent `set timer`.
1. In the examples, enter different ways to set a timer using both minutes, seconds and minutes and seconds combined. Examples could be:
* `set a 1 second timer`
* `set a 4 minute timer`
* `set a 9 minute 30 second timer`
* `set a timer for 1 minute and 12 seconds`
* `set a timer for 3 minutes`
* `set a timer for 3 minutes and 1 second`
* `set a timer for 1 minute and 1 second`
* `set a timer for 30 seconds`
* `set a timer for 1 second`
1. As you enter each example, LUIS will start detecting entities, and will underline and label any it finds.
![The examples with the numbers and time units underlined by LUIS](../../../images/luis-intent-examples.png)
### Task - train and test the model
1. Once the entities and intents are configured, you can train the model using the **Train** button on the top menu. Select this button, and the model should train in a few seconds. The button will be greyed out whilst training, and be re-enabled once done.
1. Select the **Test** button from the top menu to test the language understanding model. Enter text such as `set a timer for 5 minutes and 4 seconds` and press return. The sentence will appear in a box under the text box that you typed it into, and below that will be the *top intent*, or the intent that was detected with the highest probability. This should be `set timer`. The intent name will be followed by the probability that the intent detected was the right one.
1. Select the **Inspect** option to see a breakdown of the results. You will see the top-scoring intent with its percentage probability, along with lists of the entities detected.
1. Close the *Test* pane when you are done testing.
### Task - publish the model
To use this model from code, you need to publish it. When publishing from LUIS, you can publish to either a staging environment for testing, or a production environment for a full release. In this lesson, a staging environment is fine.
1. From the LUIS portal, select the **Publish** button from the top menu.
1. Make sure *Staging slot* is selected, then select **Done**. You will see a notification when the app is published.
1. You can test this using curl. To build the curl command, you need three values - the endpoint, the application ID (App ID) and an API key. These can be accessed from the **MANAGE** tab that can be selected from the top menu.
1. From the *Settings* section, copy the App ID
1. From the *Azure Resources* section, select *Authoring Resource*, and copy the *Primary Key* and *Endpoint URL*
1. Run the following curl command in your command prompt or terminal:
```sh
curl "<endpoint url>/luis/prediction/v3.0/apps/<app id>/slots/staging/predict" \
--request GET \
--get \
--data "subscription-key=<primary key>" \
--data "verbose=false" \
--data "show-all-intents=true" \
--data-urlencode "query=<sentence>"
```
Replace `<endpoint url>` with the Endpoint URL from the *Azure Resources* section.
Replace `<app id>` with the App ID from the *Settings* section.
Replace `<primary key>` with the Primary Key from the *Azure Resources* section.
Replace `<sentence>` with the sentence you want to test with.
1. The output of this call will be a JSON document that details the query, the top intent, and a list of entities broken down by type.
```JSON
{
"query": "set a timer for 45 minutes and 12 seconds",
"prediction": {
"topIntent": "set timer",
"intents": {
"set timer": {
"score": 0.97031575
},
"None": {
"score": 0.02205793
}
},
"entities": {
"number": [
45,
12
],
"time-unit": [
[
"minute"
],
[
"second"
]
]
}
}
}
```
The JSON above came from querying with `set a timer for 45 minutes and 12 seconds`:
* `set timer` was the top intent, with a probability of 97%.
* Two *number* entities were detected, `45` and `12`.
* Two *time-unit* entities were detected, `minute` and `second`.
## Use the language understanding model
Once published, the LUIS model can be called from code. In the last lesson you sent the recognized speech to an IoT Hub, and you can use serverless code to respond to this and understand what was sent.
### Task - create a serverless functions app
1. Create an Azure Functions app called `smart-timer-trigger`.
1. Add an IoT Hub event trigger to this app called `speech-trigger`.
1. Set the Event Hub compatible endpoint connection string for your IoT Hub in the `local.settings.json` file, and use the key for that entry in the `function.json` file.
1. Use the Azurite app as a local storage emulator.
1. Run your functions app and your IoT device to ensure speech is arriving at the IoT Hub.
```output
Python EventHub trigger processed an event: {"speech": "Set a 3 minute timer."}
```
### Task - use the language understanding model
1. The SDK for LUIS is available via a Pip package. Add the following line to the `requirements.txt` file to add the dependency on this package:
```sh
azure-cognitiveservices-language-luis
```
1. Make sure the VS Code terminal has the virtual environment activated, and run the following command to install the Pip packages:
```sh
pip install -r requirements.txt
```
1. Add new entries to the `local.settings.json` file for your LUIS API Key, Endpoint URL, and App ID from the **MANAGE** tab of the LUIS portal:
```JSON
"LUIS_KEY": "<primary key>",
"LUIS_ENDPOINT_URL": "<endpoint url>",
"LUIS_APP_ID": "<app id>"
```
Replace `<endpoint url>` with the Endpoint URL from the *Azure Resources* section of the **MANAGE** tab. This will be `https://<location>.api.cognitive.microsoft.com/`.
Replace `<app id>` with the App ID from the *Settings* section of the **MANAGE** tab.
Replace `<primary key>` with the Primary Key from the *Azure Resources* section of the **MANAGE** tab.
1. Add the following imports to the `__init__.py` file:
```python
import json
import os
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials
```
This imports some system libraries, as well as the libraries to interact with LUIS.
1. In the `main` method, before it loops through all the events, add the following code:
```python
luis_key = os.environ['LUIS_KEY']
endpoint_url = os.environ['LUIS_ENDPOINT_URL']
app_id = os.environ['LUIS_APP_ID']
credentials = CognitiveServicesCredentials(luis_key)
client = LUISRuntimeClient(endpoint=endpoint_url, credentials=credentials)
```
This loads the values you added to the `local.settings.json` file for your LUIS app, creates a credentials object with your API key, then creates a LUIS client object to interact with your LUIS app.
1. Predictions are requested from LUIS by sending a prediction request - a JSON document containing the text to predict. Create this with the following code inside the `for event in events` loop:
```python
event_body = json.loads(event.get_body().decode('utf-8'))
prediction_request = { 'query' : event_body['speech'] }
```
This code extracts the speech that was sent to the IoT Hub and uses it to build the prediction request.
1. This request can then be sent to LUIS, using the staging slot that your app was published to:
```python
prediction_response = client.prediction.get_slot_prediction(app_id, 'Staging', prediction_request)
```
1. The prediction response contains the top intent - the intent with the highest prediction score, along with the entities. If the top intent is `set timer`, then the entities can be read to get the time needed for the timer:
```python
if prediction_response.prediction.top_intent == 'set timer':
numbers = prediction_response.prediction.entities['number']
time_units = prediction_response.prediction.entities['time unit']
total_time = 0
```
The `number` entities will be an array of numbers. For example, if you said *"Set a four minute 17 second timer."*, then the `number` array will contain 2 integers - 4 and 17.
The `time unit` entities will be an array of arrays of strings, with each time unit as an array of strings inside the array. For example, if you said *"Set a four minute 17 second timer."*, then the `time unit` array will contain 2 arrays with single values each - `['minute']` and `['second']`.
The JSON version of these entities for *"Set a four minute 17 second timer."* is:
```json
{
"number": [4, 17],
"time unit": [
["minute"],
["second"]
]
}
```
This code also defines a count for the total time for the timer in seconds. This will be populated by the values from the entities.
1. The entities aren't linked, but we can make some assumptions about them. They will be in the order spoken, so the position in the array can be used to determine which number matches to which time unit. For example:
* *"Set a 30 second timer"* - this will have one number, `30`, and one time unit, `second` so the single number will match the single time unit.
* *"Set a 2 minute and 30 second timer"* - this will have two numbers, `2` and `30`, and two time units, `minute` and `second` so the first number will be for the first time unit (2 minutes), and the second number for the second time unit (30 seconds).
The following code gets the count of items in the number entities, and uses that to extract the first item from each array, then the second and so on:
```python
for i in range(0, len(numbers)):
number = numbers[i]
time_unit = time_units[i][0]
```
For *"Set a four minute 17 second timer."*, this will loop twice, giving the following values:
| loop count | `number` | `time_unit` |
| ---------: | -------: | ----------- |
| 0 | 4 | minute |
| 1 | 17 | second |
1. Inside this loop, use the number and time unit to calculate the total time for the timer, adding 60 seconds for each minute, and the number of seconds for any seconds.
```python
if time_unit == 'minute':
total_time += number * 60
else:
total_time += number
```
1. Finally, outside this loop through the entities, log the total time for the timer:
```python
logging.info(f'Timer required for {total_time} seconds')
```
1. Run the function app and speak into your IoT device. You will see the total time for the timer in the function app output:
```output
[2021-06-16T01:38:33.316Z] Executing 'Functions.speech-trigger' (Reason='(null)', Id=39720c37-b9f1-47a9-b213-3650b4d0b034)
[2021-06-16T01:38:33.329Z] Trigger Details: PartionId: 0, Offset: 3144-3144, EnqueueTimeUtc: 2021-06-16T01:38:32.7970000Z-2021-06-16T01:38:32.7970000Z, SequenceNumber: 8-8, Count: 1
[2021-06-16T01:38:33.605Z] Python EventHub trigger processed an event: {"speech": "Set a four minute 17 second timer."}
[2021-06-16T01:38:35.076Z] Timer required for 257 seconds
[2021-06-16T01:38:35.128Z] Executed 'Functions.speech-trigger' (Succeeded, Id=39720c37-b9f1-47a9-b213-3650b4d0b034, Duration=1894ms)
```
> 💁 You can find this code in the [code/functions](code/functions) folder.
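The pairing and totalling logic from the steps above can be gathered into a single function. This is a sketch using the same entity shapes that the LUIS response returns:
```python
def get_total_seconds(numbers, time_units):
    """Pair each number with its time unit, in spoken order, and total the seconds."""
    total_time = 0
    for i in range(0, len(numbers)):
        number = numbers[i]
        time_unit = time_units[i][0]
        if time_unit == 'minute':
            total_time += number * 60
        else:
            total_time += number
    return total_time

# "Set a four minute 17 second timer."
print(get_total_seconds([4, 17], [['minute'], ['second']]))  # 257
```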
---
## 🚀 Challenge
There are many ways to request the same thing, such as setting a timer. Think of different ways to do this, and use them as examples in your LUIS app. Test these out, to see how well your model can cope with multiple ways to request a timer.
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/34)
## Review & Self Study
* Read more about LUIS and its capabilities on the [Language Understanding (LUIS) documentation page on Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/luis/?WT.mc_id=academic-17441-jabenn)
* Read more about language understanding on the [Natural-language understanding page on Wikipedia](https://wikipedia.org/wiki/Natural-language_understanding)
## Assignment
[Cancel the timer](assignment.md)

@ -0,0 +1,14 @@
# Cancel the timer
## Instructions
So far in this lesson you have trained a model to understand setting a timer. Another useful feature is cancelling a timer - maybe your bread is ready and can be taken out of the oven.
Add a new intent to your LUIS app to cancel the timer. It won't need any entities, but will need some example sentences. Handle this in your serverless code if it is the top intent, logging that the intent was recognized.
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Add the cancel timer intent to the LUIS app | Was able to add the intent and train the model | Was able to add the intent but not train the model | Was unable to add the intent and train the model |
| Handle the intent in the serverless app | Was able to detect the intent as the top intent and log it | Was able to detect the intent as the top intent | Was unable to detect the intent as the top intent |

@ -0,0 +1,15 @@
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[2.*, 3.0.0)"
}
}

@ -0,0 +1,11 @@
{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "python",
"AzureWebJobsStorage": "UseDevelopmentStorage=true",
"IOT_HUB_CONNECTION_STRING": "<connection string>",
"LUIS_KEY": "<primary key>",
"LUIS_ENDPOINT_URL": "<endpoint url>",
"LUIS_APP_ID": "<app id>"
}
}

@ -0,0 +1,4 @@
# Do not include azure-functions-worker as it may conflict with the Azure Functions platform
azure-functions
azure-cognitiveservices-language-luis

@ -0,0 +1,43 @@
from typing import List
import logging
import azure.functions as func
import json
import os
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials
def main(events: List[func.EventHubEvent]):
    # Load the LUIS connection details from the application settings
    luis_key = os.environ['LUIS_KEY']
    endpoint_url = os.environ['LUIS_ENDPOINT_URL']
    app_id = os.environ['LUIS_APP_ID']

    # Create the LUIS prediction client
    credentials = CognitiveServicesCredentials(luis_key)
    client = LUISRuntimeClient(endpoint=endpoint_url, credentials=credentials)

    for event in events:
        logging.info('Python EventHub trigger processed an event: %s',
                     event.get_body().decode('utf-8'))

        # The event body contains the speech captured by the IoT device
        event_body = json.loads(event.get_body().decode('utf-8'))
        prediction_request = { 'query' : event_body['speech'] }

        # Get a prediction from the staging slot of the LUIS app
        prediction_response = client.prediction.get_slot_prediction(app_id, 'Staging', prediction_request)

        if prediction_response.prediction.top_intent == 'set timer':
            # Pair each number entity with its time unit entity and
            # sum the total timer length in seconds
            numbers = prediction_response.prediction.entities['number']
            time_units = prediction_response.prediction.entities['time unit']
            total_time = 0

            for i in range(0, len(numbers)):
                number = numbers[i]
                time_unit = time_units[i][0]

                if time_unit == 'minute':
                    total_time += number * 60
                else:
                    total_time += number

            logging.info(f'Timer required for {total_time} seconds')
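The entity-to-seconds conversion above can be checked in isolation. This is a standalone sketch of the same pairing logic, fed hand-built entity lists in the shape LUIS returns them (`number` as a list of numbers, `time unit` as a list of resolution lists):

```python
def entities_to_seconds(numbers, time_units):
    """Pair each number with its time unit and sum the total seconds.

    Mirrors the conversion in the Event Hub trigger; the argument
    shapes match the 'number' and 'time unit' entity lists from a
    LUIS prediction.
    """
    total_time = 0
    for number, units in zip(numbers, time_units):
        time_unit = units[0]
        if time_unit == 'minute':
            total_time += number * 60
        else:
            total_time += number
    return total_time

# Entities extracted from "Set a four minute 17 second timer."
print(entities_to_seconds([4, 17], [['minute'], ['second']]))  # prints 257
```

This matches the `Timer required for 257 seconds` line in the function logs shown earlier.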

@ -0,0 +1,15 @@
{
"scriptFile": "__init__.py",
"bindings": [
{
"type": "eventHubTrigger",
"name": "events",
"direction": "in",
"eventHubName": "samples-workitems",
"connection": "IOT_HUB_CONNECTION_STRING",
"cardinality": "many",
"consumerGroup": "$Default",
"dataType": "binary"
}
]
}

@ -0,0 +1,33 @@
# Set a timer and provide spoken feedback
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/33)
## Introduction
In this lesson you will learn about
In this lesson we'll cover:
* [Thing 1](#thing-1)
## Thing 1
---
## 🚀 Challenge
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/34)
## Review & Self Study
## Assignment
[](assignment.md)

@ -0,0 +1,9 @@
#
## Instructions
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| | | | |

@ -0,0 +1,33 @@
# Support multiple languages
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/33)
## Introduction
In this lesson you will learn about
In this lesson we'll cover:
* [Thing 1](#thing-1)
## Thing 1
---
## 🚀 Challenge
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/34)
## Review & Self Study
## Assignment
[](assignment.md)

@ -0,0 +1,9 @@
#
## Instructions
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| | | | |

@ -10,5 +10,5 @@ to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simpl
instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
For more information read the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

@ -83,6 +83,12 @@ We have two choices of IoT hardware to use for the projects depending on persona
| 16 | [Manufacturing](./4-manufacturing) | Check fruit quality from an IoT device | Learn about using your fruit quality detector from an IoT device | [Check fruit quality from an IoT device](./4-manufacturing/lessons/2-check-fruit-from-device/README.md) |
| 17 | [Manufacturing](./4-manufacturing) | Run your fruit detector on the edge | Learn about running your fruit detector on an IoT device on the edge | [Run your fruit detector on the edge](./4-manufacturing/lessons/3-run-fruit-detector-edge/README.md) |
| 18 | [Manufacturing](./4-manufacturing) | Trigger fruit quality detection from a sensor | Learn about triggering fruit quality detection from a sensor | [Trigger fruit quality detection from a sensor](./4-manufacturing/lessons/4-trigger-fruit-detector/README.md) |
| 19 | [Retail](./5-retail) | | |
| 20 | [Retail](./5-retail) | | |
| 21 | [Consumer](./6-consumer) | Recognize speech with an IoT device | Learn how to recognize speech from an IoT device to build a smart timer | [Recognize speech with an IoT device](./6-consumer/lessons/1-speech-recognition/README.md) |
| 22 | [Consumer](./6-consumer) | Understand language | Learn how to understand sentences spoken to an IoT device | [Understand language](./6-consumer/lessons/2-language-understanding/README.md) |
| 23 | [Consumer](./6-consumer) | Set a timer and provide spoken feedback | Learn how to set a timer on an IoT device and give spoken feedback on when the timer is set and when it finishes | [Set a timer and provide spoken feedback](./6-consumer/lessons/3-spoken-feedback/README.md) |
| 24 | [Consumer](./6-consumer) | Support multiple languages | Learn how to support multiple languages, both being spoken to and the responses from your smart timer | [Support multiple languages](./6-consumer/lessons/4-multiple-language-support/README.md) |
## Offline access

@ -24,6 +24,10 @@ All the device code for Arduino is in C++. To complete all the assignments you w
These are specific to using the Wio terminal Arduino device, and are not relevant to using the Raspberry Pi.
* [ArduCam Mini 2MP Plus - OV2640](https://www.arducam.com/product/arducam-2mp-spi-camera-b0067-arduino/)
* [ReSpeaker 2-Mics Pi HAT](https://www.seeedstudio.com/ReSpeaker-2-Mics-Pi-HAT.html)
* [Breadboard Jumper Wires](https://www.seeedstudio.com/Breadboard-Jumper-Wire-Pack-241mm-200mm-160mm-117m-p-234.html)
* Headphones or other speaker with a 3.5mm jack, or a JST speaker such as:
* [Mono Enclosed Speaker - 2W 6 Ohm](https://www.seeedstudio.com/Mono-Enclosed-Speaker-2W-6-Ohm-p-2832.html)
* [Grove speaker plus](https://www.seeedstudio.com/Grove-Speaker-Plus-p-4592.html)
* *Optional* - microSD Card 16GB or less for testing image capture, along with a connector to use the SD card with your computer if you don't have one built-in. **NOTE** - the Wio Terminal only supports SD cards up to 16GB; it does not support higher capacities.
@ -45,11 +49,16 @@ These are specific to using the Raspberry Pi, and are not relevant to using the
* [Grove Pi base hat](https://wiki.seeedstudio.com/Grove_Base_Hat_for_Raspberry_Pi)
* [Raspberry Pi Camera module](https://www.raspberrypi.org/products/camera-module-v2/)
* Microphone and speaker:
* Any USB Microphone
* Any USB speaker, or speaker with a 3.5mm cable, or using HDMI audio if your Raspberry Pi is connected to a monitor with speakers
or
Use one of the following (or equivalent):
* Any USB Microphone with any USB speaker, or speaker with a 3.5mm jack cable, or using HDMI audio output if your Raspberry Pi is connected to a monitor or TV with speakers
* Any USB headset with a built in microphone
* [ReSpeaker 2-Mics Pi HAT](https://www.seeedstudio.com/ReSpeaker-2-Mics-Pi-HAT.html) with
* Headphones or other speaker with a 3.5mm jack, or a JST speaker such as:
* [Mono Enclosed Speaker - 2W 6 Ohm](https://www.seeedstudio.com/Mono-Enclosed-Speaker-2W-6-Ohm-p-2832.html)
* [USB Speakerphone](https://www.amazon.com/USB-Speakerphone-Conference-Business-Microphones/dp/B07Q3D7F8S/ref=sr_1_1?dchild=1&keywords=m0&qid=1614647389&sr=8-1)
* [Grove Sunlight sensor](https://www.seeedstudio.com/Grove-Sunlight-Sensor.html)
* [Grove Light sensor](https://www.seeedstudio.com/Grove-Light-Sensor-v1-2-LS06-S-phototransistor.html)
* [Grove button](https://www.seeedstudio.com/Grove-Button.html)
## Sensors and actuators
@ -60,7 +69,7 @@ Most of the sensors and actuators needed are used by both the Arduino and Raspbe
* [Grove capacitive soil moisture sensor](https://www.seeedstudio.com/Grove-Capacitive-Moisture-Sensor-Corrosion-Resistant.html)
* [Grove relay](https://www.seeedstudio.com/Grove-Relay.html)
* [Grove GPS (Air530)](https://www.seeedstudio.com/Grove-GPS-Air530-p-4584.html)
* [Grove - Time of flight Distance Sensor](https://www.seeedstudio.com/Grove-Time-of-Flight-Distance-Sensor-VL53L0X.html)
* [Grove Time of flight Distance Sensor](https://www.seeedstudio.com/Grove-Time-of-Flight-Distance-Sensor-VL53L0X.html)
## Optional hardware
@ -74,6 +83,8 @@ The lessons on automated watering work using a relay. As an option, you can conn
## Virtual hardware
The virtual hardware route will provide simulators for the sensors and actuators, implemented in Python. Depending on your hardware availability, you can run this on your normal development device, such as a Mac, PC, or run it on a Raspberry Pi and simulate only the hardware you don't have. For example, if you have the camera but not the Grove sensors, you will be able to run the virtual device code on your Pi and simulate the Grove sensors, but use a physical camera.
The virtual hardware route will provide simulators for the sensors and actuators, implemented in Python. Depending on your hardware availability, you can run this on your normal development device, such as a Mac, PC, or run it on a Raspberry Pi and simulate only the hardware you don't have. For example, if you have the Raspberry Pi camera but not the Grove sensors, you will be able to run the virtual device code on your Pi and simulate the Grove sensors, but use a physical camera.
The virtual hardware will use the [CounterFit project](https://github.com/CounterFit-IoT/CounterFit).
To complete these lessons you will need to have a webcam, a microphone, and audio output such as speakers or headphones. These can be built-in or external, and need to be configured to work with your operating system and available for use from all applications.
