Merge branch 'main' into arabic

pull/87/head
Jim Bennett 4 years ago committed by GitHub
commit 1feeaa29bd

@ -1,10 +1,12 @@
{
"cSpell.words": [
"ADCs",
"Alexa",
"Geospatial",
"Kbps",
"Mbps",
"Seeed",
"Siri",
"Twilio",
"UART",
"UDID",

@ -44,7 +44,7 @@ The **T** in IoT stands for **Things** - devices that interact with the physical
Devices for production or commercial use, such as consumer fitness trackers, or industrial machine controllers, are usually custom-made. They use custom circuit boards, maybe even custom processors, designed to meet the needs of a particular task, whether that's being small enough to fit on a wrist, or rugged enough to work in a high temperature, high stress or high vibration factory environment.
As a developer either learning about IoT or creating a device prototype, you'll need to start with a developer kit. These are general-purpose IoT devices designed for developers to use, often with features that you wouldn't see on a production device, such as a set of external pins to connect sensors or actuators to, hardware to support debugging, or additional resources that would add unnecessary cost when doing a large manufacturing run.
As a developer either learning about IoT or creating a device prototype, you'll need to start with a developer kit. These are general-purpose IoT devices designed for developers to use, often with features that you wouldn't have on a production device, such as a set of external pins to connect sensors or actuators to, hardware to support debugging, or additional resources that would add unnecessary cost when doing a large manufacturing run.
These developer kits usually fall into two categories - microcontrollers and single-board computers. These will be introduced here, and we'll go into more detail in the next lesson.

@ -235,7 +235,7 @@ Create the Hello World app.
> 💁 You need to explicitly call `python3` to run this code just in case you have Python 2 installed in addition to Python 3 (the latest version). If you have Python 2 installed then calling `python` will use Python 2 instead of Python 3.
You should see the following output:
The following output will appear in the terminal:
```output
pi@raspberrypi:~/nightlight $ python3 app.py

@ -0,0 +1,13 @@
# Investigate an IoT project
## Instructions
There are many IoT projects around the world, from smart farms to smart cities, healthcare monitoring systems, transport, and the use of public spaces.
Search the Internet for the details of an IoT project that interests you, ideally one not too far from where you live. Explain the good and the bad of the project, such as the benefits it brings, any problems it causes, and how it takes privacy into account.
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Explain the good and the bad of the project | Explains the good and the bad of the project clearly | Gives a brief explanation of the good and the bad of the project | Does not explain the good and the bad of the project |

@ -67,13 +67,13 @@ Configure a Python virtual environment and install the pip packages for CounterF
source ./.venv/bin/activate
```
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to see this:
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to get the version:
```sh
python --version
```
You should see the following:
The output should contain the following:
```output
(.venv) ➜ nightlight python --version
@ -122,7 +122,7 @@ Create a Python application to print `"Hello World"` to the console.
> 💁 If your terminal returns `command not found` on macOS it means VS Code has not been added to your PATH. You can add VS Code to your PATH by following the instructions in the [Launching from the command line section of the VS Code documentation](https://code.visualstudio.com/docs/setup/mac?WT.mc_id=academic-17441-jabenn#_launching-from-the-command-line) and run the command afterwards. VS Code is installed to your PATH by default on Windows and Linux.
1. When VS Code launches, it will activate the Python virtual environment. You will see this in the bottom status bar:
1. When VS Code launches, it will activate the Python virtual environment. The selected virtual environment will appear in the bottom status bar:
![VS Code showing the selected virtual environment](../../../images/vscode-virtual-env.png)
@ -136,9 +136,9 @@ Create a Python application to print `"Hello World"` to the console.
(.venv) ➜ nightlight
```
If you don't see `.venv` as a prefix on the prompt, the virtual environment is not active in the terminal.
If you don't have `.venv` as a prefix on the prompt, the virtual environment is not active in the terminal.
1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, and you will see the call to activate this in the terminal, as well as having the name of the virtual environment (`.venv`) in the prompt:
1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal*, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, and the call to activate this will appear in the terminal. The prompt will also have the name of the virtual environment (`.venv`):
```output
➜ nightlight source .venv/bin/activate
@ -159,7 +159,7 @@ Create a Python application to print `"Hello World"` to the console.
python app.py
```
You should see the following output:
The following will be in the output:
```output
(.venv) ➜ nightlight python app.py
@ -184,7 +184,7 @@ As a second 'Hello World' step, you will run the CounterFit app and connect your
![The Counter Fit app running in a browser](../../../images/counterfit-first-run.png)
You will see it marked as *Disconnected*, with the LED in the top-right corner turned off.
It will be marked as *Disconnected*, with the LED in the top-right corner turned off.
1. Add the following code to the top of `app.py`:
@ -201,7 +201,7 @@ As a second 'Hello World' step, you will run the CounterFit app and connect your
![VS Code Create a new integrated terminal button](../../../images/vscode-new-terminal.png)
1. In this new terminal, run the `app.py` file as before. You will see the status of CounterFit change to **Connected** and the LED light up.
1. In this new terminal, run the `app.py` file as before. The status of CounterFit will change to **Connected** and the LED will light up.
![Counter Fit showing as connected](../../../images/counterfit-connected.png)

@ -40,7 +40,7 @@ Create the PlatformIO project.
1. Launch VS Code
1. You should see the PlatformIO icon on the side menu bar:
1. The PlatformIO icon will be on the side menu bar:
![The Platform IO menu option](../../../images/vscode-platformio-menu.png)
@ -83,7 +83,7 @@ The VS Code explorer will show a number of files and folders created by the Plat
#### Files
* `main.cpp` - this file in the `src` folder contains the entry point for your application. If you open the file, you will see the following:
* `main.cpp` - this file in the `src` folder contains the entry point for your application. Open this file, and it will contain the following code:
```cpp
#include <Arduino.h>
@ -101,7 +101,7 @@ The VS Code explorer will show a number of files and folders created by the Plat
* `.gitignore` - this file lists the files and directories to be ignored when adding your code to git source code control, such as uploading to a repository on GitHub.
* `platformio.ini` - this file contains configuration for your device and app. If you open this file, you will see the following:
* `platformio.ini` - this file contains configuration for your device and app. Open this file, and it will contain the following code:
```ini
[env:seeed_wio_terminal]
@ -166,7 +166,7 @@ Write the Hello World app.
1. The code will be compiled and uploaded to the Wio Terminal
> 💁 If you are using macOS you will see a notification about a *DISK NOT EJECTED PROPERLY*. This is because the Wio Terminal gets mounted as a drive as part of the flashing process, and it is disconnected when the compiled code is written to the device. You can ignore this notification.
> 💁 If you are using macOS, a notification about a *DISK NOT EJECTED PROPERLY* will appear. This is because the Wio Terminal gets mounted as a drive as part of the flashing process, and it is disconnected when the compiled code is written to the device. You can ignore this notification.
⚠️ If you get errors about the upload port being unavailable, first make sure you have the Wio Terminal connected to your computer, and switched on using the switch on the left hand side of the screen. The green light on the bottom should be on. If you still get the error, pull the on/off switch down twice in quick succession to force the Wio Terminal into bootloader mode and try the upload again.
@ -191,7 +191,7 @@ PlatformIO has a Serial Monitor that can monitor data sent over the USB cable fr
Hello World
```
You will see `Hello World` appear every 5 seconds.
`Hello World` will print to the serial monitor every 5 seconds.
> 💁 You can find this code in the [code/wio-terminal](code/wio-terminal) folder.

@ -100,7 +100,7 @@ The faster the clock cycle, the more instructions that can be processed each sec
Microcontrollers have much lower clock speeds than desktop or laptop computers, or even most smartphones. The Wio Terminal for example has a CPU that runs at 120MHz or 120,000,000 cycles per second.
✅ An average PC or Mac has a CPU with multiple cores running at multiple GigaHertz, meaning the clock ticks billions of times a second. Research the clock speed of your computer and see how many times faster it is than the Wio terminal.
✅ An average PC or Mac has a CPU with multiple cores running at multiple GigaHertz, meaning the clock ticks billions of times a second. Research the clock speed of your computer and compare how many times faster it is than the Wio terminal.
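As a rough worked example of that comparison (assuming a 3GHz desktop core purely for illustration), the ratio is just one clock speed divided by the other:

```python
# Rough comparison of clock speeds - the 3GHz figure is just an example value
wio_terminal_hz = 120_000_000      # Wio Terminal CPU: 120MHz
desktop_core_hz = 3_000_000_000    # an assumed 3GHz desktop/laptop core

ratio = desktop_core_hz / wio_terminal_hz
print(f'A 3GHz core ticks {ratio:.0f} times faster than the Wio Terminal')
# A 3GHz core ticks 25 times faster than the Wio Terminal
```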
Each clock cycle draws power and generates heat. The faster the ticks, the more power consumed and more heat generated. PCs have heat sinks and fans to remove heat, without which they would overheat and shut down within seconds. Microcontrollers often have neither as they run much cooler and therefore much slower. PCs run off mains power or large batteries for a few hours; microcontrollers can run for days, months, or even years off small batteries. Microcontrollers can also have cores that run at different speeds, switching to slower low power cores when the demand on the CPU is low to reduce power consumption.
@ -112,7 +112,7 @@ Each clock cycle draws power and generates heat. The faster the ticks, the more
Investigate the Wio Terminal.
If you are using a Wio Terminal for these lessons, see if you can find the CPU. Find the *Hardware Overview* section of the [Wio Terminal product page](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) for a picture of the internals, and see if you can see the CPU through the clear plastic window on the back.
If you are using a Wio Terminal for these lessons, try to find the CPU. Find the *Hardware Overview* section of the [Wio Terminal product page](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) for a picture of the internals, and try to find the CPU through the clear plastic window on the back.
### Memory
@ -150,7 +150,7 @@ Microcontrollers need input and output (I/O) connections to read data from senso
Investigate the Wio Terminal.
If you are using a Wio Terminal for these lessons, find the GPIO pins. Find the *Pinout diagram* section of the [Wio Terminal product page](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) to see which pins are which. The Wio Terminal comes with a sticker you can mount on the back with pin numbers, so add this now if you haven't already.
If you are using a Wio Terminal for these lessons, find the GPIO pins. Find the *Pinout diagram* section of the [Wio Terminal product page](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) to learn which pins are which. The Wio Terminal comes with a sticker you can mount on the back with pin numbers, so add this now if you haven't already.
### Physical size
@ -198,7 +198,7 @@ There is a large ecosystem of third-party Arduino libraries that allow you to ad
Investigate the Wio Terminal.
If you are using a Wio Terminal for these lessons, re-read the code you wrote in the last lesson. Find the `setup` and `loop` function. Monitor the serial output to see the loop function being called repeatedly. Try adding code to the `setup` function to write to the serial port and see this code is only called once each time you reboot. Try rebooting your device with the power switch on the side to see this called each time the device reboots.
If you are using a Wio Terminal for these lessons, re-read the code you wrote in the last lesson. Find the `setup` and `loop` functions. Monitor the serial output for the loop function being called repeatedly. Try adding code to the `setup` function to write to the serial port and observe that this code is only called once each time you reboot. Try rebooting your device with the power switch on the side to show this is called each time the device reboots.
## Deeper dive into single-board computers

@ -1,12 +1,12 @@
import time
import seeed_si114x
from grove.grove_light_sensor_v1_2 import GroveLightSensor
from grove.grove_led import GroveLed
light_sensor = seeed_si114x.grove_si114x()
light_sensor = GroveLightSensor(0)
led = GroveLed(5)
while True:
light = light_sensor.ReadVisible
light = light_sensor.light
print('Light level:', light)
if light < 300:

@ -1,9 +1,10 @@
import time
import seeed_si114x
from grove.grove_light_sensor_v1_2 import GroveLightSensor
light_sensor = seeed_si114x.grove_si114x()
light_sensor = GroveLightSensor(0)
while True:
light = light_sensor.ReadVisible
light = light_sensor.light
print('Light level:', light)
time.sleep(1)

@ -44,7 +44,7 @@ Connect the LED.
## Program the nightlight
The nightlight can now be programmed using the Grove sunlight sensor and the Grove LED.
The nightlight can now be programmed using the Grove light sensor and the Grove LED.
### Task - program the nightlight
@ -93,7 +93,7 @@ Program the nightlight.
python3 app.py
```
You should see light values being output to the console.
Light values will be output to the console.
```output
pi@raspberrypi:~/nightlight $ python3 app.py
@ -105,7 +105,7 @@ Program the nightlight.
Light level: 290
```
1. Cover and uncover the sunlight sensor. Notice how the LED will light up if the light level is 300 or less, and turn off when the light level is greater than 300.
1. Cover and uncover the light sensor. Notice how the LED will light up if the light level is 300 or less, and turn off when the light level is greater than 300.
> 💁 If the LED doesn't turn on, make sure it is connected the right way round, and the spin button is set to full on.
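Putting the pieces from this assignment together, the heart of the nightlight looks something like the sketch below. It assumes the Grove light sensor is on analog pin A0 and the Grove LED is on digital pin D5, matching the connections described above:

```python
import time

from grove.grove_led import GroveLed
from grove.grove_light_sensor_v1_2 import GroveLightSensor

# Light sensor on analog pin A0, LED on digital pin D5
light_sensor = GroveLightSensor(0)
led = GroveLed(5)

while True:
    # Read the relative light level and print it
    light = light_sensor.light
    print('Light level:', light)

    # Turn the LED on when it is dark, off when it is bright enough
    if light < 300:
        led.on()
    else:
        led.off()

    # Only check once a second to reduce power consumption
    time.sleep(1)
```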

@ -4,33 +4,31 @@ In this part of the lesson, you will add a light sensor to your Raspberry Pi.
## Hardware
The sensor for this lesson is a **sunlight sensor** that uses [photodiodes](https://wikipedia.org/wiki/Photodiode) to convert visible and infrared light to an electrical signal. This is an analog sensor that sends an integer value from 0 to 1,023 indicating a relative amount of light, but this can be used to calculate exact values in [lux](https://wikipedia.org/wiki/Lux) by taking data from the separate infrared and visible light sensors.
The sensor for this lesson is a **light sensor** that uses a [photodiode](https://wikipedia.org/wiki/Photodiode) to convert light to an electrical signal. This is an analog sensor that sends an integer value from 0 to 1,000 indicating a relative amount of light that doesn't map to any standard unit of measurement such as [lux](https://wikipedia.org/wiki/Lux).
The sunlight sensor is an external Grove sensor and needs to be connected to the Grove Base hat on the Raspberry Pi.
The light sensor is an external Grove sensor and needs to be connected to the Grove Base hat on the Raspberry Pi.
### Connect the sunlight sensor
### Connect the light sensor
The Grove sunlight sensor that is used to detect the light levels needs to be connected to the Raspberry Pi.
The Grove light sensor that is used to detect the light levels needs to be connected to the Raspberry Pi.
#### Task - connect the sunlight sensor
#### Task - connect the light sensor
Connect the sunlight sensor
Connect the light sensor
![A grove sunlight sensor](../../../images/grove-sunlight-sensor.png)
![A grove light sensor](../../../images/grove-light-sensor.png)
1. Insert one end of a Grove cable into the socket on the sunlight sensor module. It will only go in one way round.
1. Insert one end of a Grove cable into the socket on the light sensor module. It will only go in one way round.
1.
1. With the Raspberry Pi powered off, connect the other end of the Grove cable to the analog socket marked **A0** on the Grove Base hat attached to the Pi. This socket is the second from the right, on the row of sockets next to the GPIO pins.
1. With the Raspberry Pi powered off, connect the other end of the Grove cable to one of the three I<sup>2</sup>C sockets marked **I2C** on the Grove Base hat attached to the Pi. This socket is the second from the right, on the row of sockets next to the GPIO pins.
![The grove light sensor connected to socket A0](../../../images/pi-light-sensor.png)
> 💁 I<sup>2</sup>C is a way sensors and actuators can communicate with an IoT device. It will be covered in more detail in a later lesson.
## Program the light sensor
![The grove sunlight sensor connected to socket A0](../../../images/pi-sunlight-sensor.png)
The device can now be programmed using the Grove light sensor.
## Program the sunlight sensor
The device can now be programmed using the Grove sunlight sensor.
### Task - program the sunlight sensor
### Task - program the light sensor
Program the device.
@ -38,44 +36,36 @@ Program the device.
1. Open the nightlight project in VS Code that you created in the previous part of this assignment, either running directly on the Pi or connected using the Remote SSH extension.
1. Run the following command to install a pip package for working with the sunlight sensor:
```sh
pip3 install seeed-python-si114x
```
Not all the libraries for the Grove Sensors are installed with the Grove install script you used in an earlier lesson. Some need additional packages.
1. Open the `app.py` file and remove all code from it
1. Add the following code to the `app.py` file to import some required libraries:
```python
import time
import seeed_si114x
from grove.grove_light_sensor_v1_2 import GroveLightSensor
```
The `import time` statement imports the `time` module that will be used later in this assignment.
The `import seeed_si114x` statement imports the `seeed_si114x` module that has code to interact with the Grove sunlight sensor.
The `from grove.grove_light_sensor_v1_2 import GroveLightSensor` statement imports the `GroveLightSensor` from the Grove Python libraries. This library has code to interact with a Grove light sensor, and was installed globally during the Pi setup.
1. Add the following code after the code above to create an instance of the class that manages the light sensor:
```python
light_sensor = seeed_si114x.grove_si114x()
light_sensor = GroveLightSensor(0)
```
The line `light_sensor = seeed_si114x.grove_si114x()` creates an instance of the `grove_si114x` sunlight sensor class.
The line `light_sensor = GroveLightSensor(0)` creates an instance of the `GroveLightSensor` class connecting to pin **A0** - the analog Grove pin that the light sensor is connected to.
1. Add an infinite loop after the code above to poll the light sensor value and print it to the console:
```python
while True:
light = light_sensor.ReadVisible
light = light_sensor.light
print('Light level:', light)
```
This will read the current sunlight level on a scale of 0-1,023 using the `ReadVisible` property of the `grove_si114x` class. This value is then printed to the console.
This will read the current light level on a scale of 0-1,023 using the `light` property of the `GroveLightSensor` class. This property reads the analog value from the pin. This value is then printed to the console.
1. Add a small sleep of one second at the end of the `loop` as the light levels don't need to be checked continuously. A sleep reduces the power consumption of the device.
@ -89,16 +79,16 @@ Program the device.
python3 app.py
```
You should see sunlight values being output to the console. Cover and uncover the sunlight sensor to see the values change:
Light values will be output to the console. Cover and uncover the light sensor, and the values will change:
```output
pi@raspberrypi:~/nightlight $ python3 app.py
Light level: 259
Light level: 265
Light level: 265
Light level: 584
Light level: 550
Light level: 497
Light level: 634
Light level: 634
Light level: 634
Light level: 230
Light level: 104
Light level: 290
```
> 💁 You can find this code in the [code-sensor/pi](code-sensor/pi) folder.

@ -91,7 +91,7 @@ Program the nightlight.
python3 app.py
```
You should see light values being output to the console.
Light values will be output to the console.
```output
(.venv) ➜ GroveTest python3 app.py
@ -101,7 +101,7 @@ Program the nightlight.
Light level: 253
```
1. Change the *Value* or the *Random* settings to vary the light level above and below 300. You will see the LED turn on and off.
1. Change the *Value* or the *Random* settings to vary the light level above and below 300. The LED will turn on and off.
![The LED in the CounterFit app turning on and off as the light level changes](../../../images/virtual-device-running-assignment-1-1.gif)

@ -87,7 +87,7 @@ Program the device.
python3 app.py
```
You should see light values being output to the console. Initially this value will be 0.
Light values will be output to the console. Initially this value will be 0.
1. From the CounterFit app, change the value of the light sensor that will be read by the app. You can do this in one of two ways:
@ -95,7 +95,7 @@ Program the device.
* Check the *Random* checkbox, and enter a *Min* and *Max* value, then select the **Set** button. Every time the sensor reads a value, it will read a random number between *Min* and *Max*.
You should see the values you set appearing in the console. Change the *Value* or the *Random* settings to see the value change.
The values you set will be output to the console. Change the *Value* or the *Random* settings to make the value change.
```output
(.venv) ➜ GroveTest python3 app.py

@ -83,7 +83,7 @@ Program the nightlight.
1. Reconnect the Wio Terminal to your computer, and upload the new code as you did before.
1. Connect the Serial Monitor. You should see light values being output to the terminal.
1. Connect the Serial Monitor. Light values will be output to the terminal.
```output
> Executing task: platformio device monitor <

@ -40,7 +40,7 @@ Program the device.
Serial.println(light);
```
This code reads an analog value from the `WIO_LIGHT` pin. This reads a value from 0-1,023 from the on-board light sensor. This value is then sent to the serial port so you can see it in the Serial Monitor when this code is running. `Serial.print` writes the text without a new line on the end, so each line will start with `Light value:` and end with the actual light value.
This code reads an analog value from the `WIO_LIGHT` pin. This reads a value from 0-1,023 from the on-board light sensor. This value is then sent to the serial port so you can read it in the Serial Monitor when this code is running. `Serial.print` writes the text without a new line on the end, so each line will start with `Light value:` and end with the actual light value.
1. Add a small delay of one second (1,000ms) at the end of the `loop` as the light levels don't need to be checked continuously. A delay reduces the power consumption of the device.
@ -50,7 +50,7 @@ Program the device.
1. Reconnect the Wio Terminal to your computer, and upload the new code as you did before.
1. Connect the Serial Monitor. You should see light values being output to the terminal. Cover and uncover the light sensor on the back of the Wio Terminal to see the values change.
1. Connect the Serial Monitor. Light values will be output to the terminal. Cover and uncover the light sensor on the back of the Wio Terminal, and the values will change.
```output
> Executing task: platformio device monitor <

@ -88,7 +88,7 @@ Messages can be sent with a quality of service (QoS), which determines the guara
Although the name includes Message Queueing (the MQ in MQTT), it doesn't actually support message queues. This means that if a client disconnects, then reconnects, it won't receive messages sent during the disconnection, except for those messages that it had already started to process using the QoS process. Messages can have a retained flag set on them. If this is set, the MQTT broker will store the last message sent on a topic with this flag, and send this to any clients who later subscribe to the topic. This way, the clients will always get the latest message.
MQTT also supports a keep alive function that checks to see if the connection is still alive during long gaps between messages.
MQTT also supports a keep alive function that checks if the connection is still alive during long gaps between messages.
> 🦟 [Mosquitto from the Eclipse Foundation](https://mosquitto.org) has a free MQTT broker you can run yourself to experiment with MQTT, along with a public MQTT broker you can use to test your code, hosted at [test.mosquitto.org](https://test.mosquitto.org).
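As a minimal sketch of these ideas, using the paho-mqtt package from these lessons (version 1.x) and the public Mosquitto test broker - the topic name and client id below are just examples:

```python
import paho.mqtt.client as mqtt

# The client id is an example - use something unique on a shared public broker
client = mqtt.Client('nightlight-example-client')

# keepalive=60 asks the client to check the connection is still alive
# if no messages are exchanged for 60 seconds
client.connect('test.mosquitto.org', port=1883, keepalive=60)
client.loop_start()

# qos=1 delivers the message at least once; retain=True makes the broker
# store this message and send it to any client that subscribes later
info = client.publish('example/nightlight/telemetry', '{"light": 400}', qos=1, retain=True)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```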
@ -198,13 +198,13 @@ Configure a Python virtual environment and install the MQTT pip packages.
source ./.venv/bin/activate
```
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to see this:
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to get the version:
```sh
python --version
```
You should see the following:
The output will be similar to the following:
```output
(.venv) ➜ nightlight-server python --version
@ -249,7 +249,7 @@ Write the server code.
code .
```
1. When VS Code launches, it will activate the Python virtual environment. You will see this in the bottom status bar:
1. When VS Code launches, it will activate the Python virtual environment. This will be reported in the bottom status bar:
![VS Code showing the selected virtual environment](../../../images/vscode-virtual-env.png)
@ -257,7 +257,7 @@ Write the server code.
![VS Code Kill the active terminal instance button](../../../images/vscode-kill-terminal.png)
1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, and you will see the call to activate this in the terminal, as well as having the name of the virtual environment (`.venv`) in the prompt:
1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal*, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, with the call to activate this appearing in the terminal. The name of the virtual environment (`.venv`) will also be in the prompt:
```output
➜ nightlight source .venv/bin/activate
@ -311,7 +311,7 @@ Write the server code.
The app will start listening to messages from the IoT device.
1. Make sure your device is running and sending telemetry messages. Adjust the light levels detected by your physical or virtual device. You will see messages being received in the terminal.
1. Make sure your device is running and sending telemetry messages. Adjust the light levels detected by your physical or virtual device. Messages being received will be printed to the terminal.
```output
(.venv) ➜ nightlight-server python app.py
@ -319,7 +319,7 @@ Write the server code.
Message received: {'light': 400}
```
The app.py file in the nightlight virtual environment has to be running for the app.py file in the nightlight-server virtual environment to recieve the messages being sent.
The app.py file in the nightlight virtual environment has to be running for the app.py file in the nightlight-server virtual environment to receive the messages being sent.
> 💁 You can find this code in the [code-server/server](code-server/server) folder.
@ -382,7 +382,7 @@ The next step for our Internet controlled nightlight is for the server code to s
1. Run the code as before
1. Adjust the light levels detected by your physical or virtual device. You will see messages being received and commands being sent in the terminal:
1. Adjust the light levels detected by your physical or virtual device. Messages being received and commands being sent will be written to the terminal:
```output
(.venv) ➜ nightlight-server python app.py

@ -15,8 +15,8 @@ framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -46,7 +46,7 @@ Subscribe to commands.
1. Run the code in the same way as you ran the code from the previous part of the assignment. If you are using a virtual IoT device, then make sure the CounterFit app is running and the light sensor and LED have been created on the correct pins.
1. Adjust the light levels detected by your physical or virtual device. You will see messages being received and commands being sent in the terminal. You will also see the LED is being turned on and off depending on the light level.
1. Adjust the light levels detected by your physical or virtual device. Messages being received and commands being sent will be written to the terminal. The LED will also be turned on and off depending on the light level.
> 💁 You can find this code in the [code-commands/virtual-device](code-commands/virtual-device) folder or the [code-commands/pi](code-commands/pi) folder.

@ -24,8 +24,8 @@ Install the Arduino libraries.
```ini
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -16,8 +16,8 @@ lib_deps =
seeed-studio/Grove Temperature And Humidity Sensor @ 1.0.1
knolleary/PubSubClient @ 2.8
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -15,8 +15,8 @@ framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -46,7 +46,7 @@ Despite the name, serverless does actually use servers. The naming is because yo
As an IoT developer, the serverless model is ideal. You can write a function that is called in response to messages sent from any IoT device that is connected to your cloud-hosted IoT service. Your code will handle all messages sent, but only be running when needed.
✅ Look back at the code you wrote as server code listening to messages over MQTT. As is, how might this run in the cloud using serverless? How do you think the code might be changed to support serverless computing?
✅ Look back at the code you wrote as server code listening to messages over MQTT. How might this run in the cloud using serverless? How do you think the code might be changed to support serverless computing?
> 💁 The serverless model is moving to other cloud services in addition to running code. For example, serverless databases are available in the cloud using a serverless pricing model where you pay per request made against the database, such as a query or insert, usually using pricing based on how much work is done to service the request. For example a single select of one row against a primary key will cost less than a complicated operation joining many tables and returning thousands of rows.
@ -213,6 +213,23 @@ The Azure Functions CLI can be used to create a new Functions app.
1. Make sure the Python virtual environment is running in the VS Code terminal. Terminate it and restart it if necessary.
1. There may be warnings in the output:
```output
(.venv) ➜ soil-moisture-trigger func start
Found Python version 3.9.1 (python3).
Azure Functions Core Tools
Core Tools Version: 3.0.3442 Commit hash: 6bfab24b2743f8421475d996402c398d2fe4a9e0 (64-bit)
Function Runtime Version: 3.0.15417.0
[2021-06-16T08:18:28.315Z] Cannot create directory for shared memory usage: /dev/shm/AzureFunctions
[2021-06-16T08:18:28.316Z] System.IO.FileSystem: Access to the path '/dev/shm/AzureFunctions' is denied. Operation not permitted.
[2021-06-16T08:18:30.361Z] No job functions found.
```
but don't worry about them as long as the Functions app starts correctly and lists the running functions. As mentioned in this question on the [Docs Q&A](https://docs.microsoft.com/answers/questions/396617/azure-functions-core-tools-error-osx-devshmazurefu.html?WT.mc_id=academic-17441-jabenn), they can be ignored.
## Create an IoT Hub event trigger
The Functions app is the shell of your serverless code. To respond to IoT hub events, you can add an IoT Hub trigger to this app. This trigger needs to connect to the stream of messages that are sent to the IoT Hub and respond to them. To get this stream of messages, your trigger needs to connect to the IoT Hubs *event hub compatible endpoint*.
@ -254,12 +271,12 @@ You are now ready to create the event trigger.
1. From the VS Code terminal run the following command from inside the `soil-moisture-trigger` folder:
```sh
func new --name iot_hub_trigger --template "Azure Event Hub trigger"
func new --name iot-hub-trigger --template "Azure Event Hub trigger"
```
This creates a new Function called `iot_hub_trigger`. The trigger will connect to the Event Hub compatible endpoint on the IoT Hub, so you can use an event hub trigger. There is no specific IoT Hub trigger.
This creates a new Function called `iot-hub-trigger`. The trigger will connect to the Event Hub compatible endpoint on the IoT Hub, so you can use an event hub trigger. There is no specific IoT Hub trigger.
This will create a folder inside the `soil-moisture-trigger` folder called `iot_hub_trigger` that contains this function. This folder will have the following files inside it:
This will create a folder inside the `soil-moisture-trigger` folder called `iot-hub-trigger` that contains this function. This folder will have the following files inside it:
* `__init__.py` - this is the Python code file that contains the trigger, using the standard Python file name convention to turn this folder into a Python module.
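The generated `__init__.py` starts out close to the sketch below (shown as an illustration; the exact template text may differ slightly between Core Tools versions):

```python
import logging

import azure.functions as func


def main(event: func.EventHubEvent):
    # The event body arrives as bytes - decode it to get the JSON telemetry string
    body = event.get_body().decode('utf-8')
    logging.info('Python EventHub trigger processed an event: %s', body)
```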
@ -313,7 +330,7 @@ This will create a folder inside the `soil-moisture-trigger` folder called `iot_
func start
```
The Functions app will start up, and will discover the `iot_hub_trigger` function. It will then process any events that have already been sent to the IoT Hub in the past day.
The Functions app will start up, and will discover the `iot-hub-trigger` function. It will then process any events that have already been sent to the IoT Hub in the past day.
```output
(.venv) ➜ soil-moisture-trigger func start
@ -325,23 +342,23 @@ This will create a folder inside the `soil-moisture-trigger` folder called `iot_
Functions:
iot_hub_trigger: eventHubTrigger
iot-hub-trigger: eventHubTrigger
For detailed output, run func with --verbose flag.
[2021-05-05T02:44:07.517Z] Worker process started and initialized.
[2021-05-05T02:44:09.202Z] Executing 'Functions.iot_hub_trigger' (Reason='(null)', Id=802803a5-eae9-4401-a1f4-176631456ce4)
[2021-05-05T02:44:09.202Z] Executing 'Functions.iot-hub-trigger' (Reason='(null)', Id=802803a5-eae9-4401-a1f4-176631456ce4)
[2021-05-05T02:44:09.205Z] Trigger Details: PartionId: 0, Offset: 1011240-1011632, EnqueueTimeUtc: 2021-05-04T19:04:04.2030000Z-2021-05-04T19:04:04.3900000Z, SequenceNumber: 2546-2547, Count: 2
[2021-05-05T02:44:09.352Z] Python EventHub trigger processed an event: {"soil_moisture":628}
[2021-05-05T02:44:09.354Z] Python EventHub trigger processed an event: {"soil_moisture":624}
[2021-05-05T02:44:09.395Z] Executed 'Functions.iot_hub_trigger' (Succeeded, Id=802803a5-eae9-4401-a1f4-176631456ce4, Duration=245ms)
[2021-05-05T02:44:09.395Z] Executed 'Functions.iot-hub-trigger' (Succeeded, Id=802803a5-eae9-4401-a1f4-176631456ce4, Duration=245ms)
```
Each call to the function will be surrounded by a `Executing 'Functions.iot_hub_trigger'`/`Executed 'Functions.iot_hub_trigger'` block in the output, so you can how many messages were processed in each function call.
Each call to the function will be surrounded by an `Executing 'Functions.iot-hub-trigger'`/`Executed 'Functions.iot-hub-trigger'` block in the output, so you can see how many messages were processed in each function call.
> If you get the following error:
```output
The listener for function 'Functions.iot_hub_trigger' was unable to start. Microsoft.WindowsAzure.Storage: Connection refused. System.Net.Http: Connection refused. System.Private.CoreLib: Connection refused.
The listener for function 'Functions.iot-hub-trigger' was unable to start. Microsoft.WindowsAzure.Storage: Connection refused. System.Net.Http: Connection refused. System.Private.CoreLib: Connection refused.
```
Then check Azurite is running and you have set the `AzureWebJobsStorage` in the `local.settings.json` file to `UseDevelopmentStorage=true`.
@ -370,7 +387,7 @@ To connect to the Registry Manager, you need a connection string.
Replace `<hub_name>` with the name you used for your IoT Hub.
The connection string is requested for the *ServiceConnect* policy using the `--policy-name service` parameter. When you request a connection string, you can specify what permissions that connection string will allow. The ServiceConnect policy allows yor code to connect and send messages to IoT devices.
The connection string is requested for the *ServiceConnect* policy using the `--policy-name service` parameter. When you request a connection string, you can specify what permissions that connection string will allow. The ServiceConnect policy allows your code to connect and send messages to IoT devices.
✅ Do some research: Read up on the different policies in the [IoT Hub permissions documentation](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-security#iot-hub-permissions?WT.mc_id=academic-17441-jabenn)
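As a sketch of what that connection string is used for, here is a minimal example using the `azure-iot-hub` pip package to send a cloud-to-device message through the Registry Manager. The device id and payload are placeholders, not values from this lesson:

```python
from azure.iot.hub import IoTHubRegistryManager

# The ServiceConnect connection string returned by the az cli command above
connection_string = '<your service connection string>'

registry_manager = IoTHubRegistryManager(connection_string)

# Send a cloud-to-device message - the device id and payload are examples only
registry_manager.send_c2d_message('soil-moisture-sensor', '{"relay_on": true}')
```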
@ -561,7 +578,7 @@ Deployment successful.
Remote build succeeded!
Syncing triggers...
Functions in soil-moisture-sensor:
iot_hub_trigger - [eventHubTrigger]
iot-hub-trigger - [eventHubTrigger]
```
Make sure your IoT device is running. Change the moisture levels by adjusting the soil moisture, or moving the sensor in and out of the soil. You will see the relay turn on and off as the soil moisture changes.

@ -35,7 +35,7 @@ Some hints:
relay_on: [GET,POST] http://localhost:7071/api/relay_on
iot_hub_trigger: eventHubTrigger
iot-hub-trigger: eventHubTrigger
```
Paste the URL into your browser and hit `return`, or `Ctrl+click` (`Cmd+click` on macOS) the link in the terminal window in VS Code to open it in your default browser. This will run the trigger.

@ -39,7 +39,7 @@ If your IoT application is not secure, there are a number of risks:
These are real world scenarios, and happen all the time. Some examples were given in earlier lessons, but here are some more:
* In 2018 hackers used an open WiFi access point on a fish tank thermostat to gain access to a casino's network to steal data. [The Hacker News - Casino Gets Hacked Through Its Internet-Connected Fish Tank Thermometer](https://thehackernews.com/2018/04/iot-hacking-thermometer.html)
* In 2018, hackers used an open WiFi access point on a fish tank thermostat to gain access to a casino's network to steal data. [The Hacker News - Casino Gets Hacked Through Its Internet-Connected Fish Tank Thermometer](https://thehackernews.com/2018/04/iot-hacking-thermometer.html)
* In 2016, the Mirai Botnet launched a denial of service attack against Dyn, an Internet service provider, taking down large portions of the Internet. This botnet used malware to connect to IoT devices such as DVRs and cameras that used default usernames and passwords, and from there launched the attack. [The Guardian - DDoS attack that disrupted internet was largest of its kind in history, experts say](https://www.theguardian.com/technology/2016/oct/26/ddos-attack-dyn-mirai-botnet)
* Spiral Toys had a database of users of their CloudPets connected toys publicly available over the Internet. [Troy Hunt - Data from connected CloudPets teddy bears leaked and ransomed, exposing kids' voice messages](https://www.troyhunt.com/data-from-connected-cloudpets-teddy-bears-leaked-and-ransomed-exposing-kids-voice-messages/).
* Strava tagged runners that you ran past and showed their routes, allowing strangers to effectively see where you live. [Kim Komando - Fitness app could lead a stranger right to your home — change this setting](https://www.komando.com/security-privacy/strava-fitness-app-privacy/755349/).
@ -90,7 +90,7 @@ Unfortunately, not everything is secure. Some devices have no security, others a
Encryption comes in two types - symmetric and asymmetric.
**Symmetric** encryption uses the same key to encrypt and decrypt the data. Both the sender and receive need to know the same key. This is the least secure type, as the key needs to be shared somehow. For a sender to send an encrypted message to a recipient, the sender first might have to send the recipient the key.
**Symmetric** encryption uses the same key to encrypt and decrypt the data. Both the sender and receiver need to know the same key. This is the least secure type, as the key needs to be shared somehow. For a sender to send an encrypted message to a recipient, the sender first might have to send the recipient the key.
![Symmetric key encryption uses the same key to encrypt and decrypt a message](../../../images/send-message-symmetric-key.png)
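To illustrate symmetric encryption, here is a small sketch using the Python `cryptography` package (not part of the lesson code). The same key encrypts and decrypts, so it has to be shared between sender and receiver somehow:

```python
from cryptography.fernet import Fernet

# Both sender and receiver need this exact key - sharing it safely is the hard part
key = Fernet.generate_key()

# The sender encrypts the message with the shared key...
token = Fernet(key).encrypt(b'Light level: 290')

# ...and the receiver decrypts it with the same key
print(Fernet(key).decrypt(token))  # b'Light level: 290'
```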
@ -148,7 +148,7 @@ After the connection, all data sent to the IoT Hub from the device, or to the de
### X.509 certificates
When you are using a asymmetric encryption with a public/private key pair, you need to provide your public key to anyone who wants to send you data. The problem is, how can the recipient of your key be sure it's actually your public key, not someone else pretending to be you? Instead of providing a key, you can instead provide your public key inside a certificate that has been verified by a trusted third party, called an X.509 certificate.
When you are using asymmetric encryption with a public/private key pair, you need to provide your public key to anyone who wants to send you data. The problem is, how can the recipient of your key be sure it's actually your public key, not someone else pretending to be you? Instead of providing a key, you can instead provide your public key inside a certificate that has been verified by a trusted third party, called an X.509 certificate.
X.509 certificates are digital documents that contain the public key part of the public/private key pair. They are usually issued by one of a number of trusted organizations called [Certification authorities](https://wikipedia.org/wiki/Certificate_authority) (CAs), and digitally signed by the CA to indicate the key is valid and comes from you. You trust the certificate and that the public key is from who the certificate says it is from, because you trust the CA, similar to how you would trust a passport or driving license because you trust the country issuing it. Certificates cost money, so you can also 'self-sign', that is create a certificate yourself that is signed by you, for testing purposes.
@ -162,7 +162,7 @@ When using X.509 certificates, both the sender and the recipient will have their
![Instead of sharing a public key, you can share a certificate. The user of the certificate can verify that it comes from you by checking with the certificate authority who signed it.](../../../images/send-message-certificate.png)
***nstead of sharing a public key, you can share a certificate. The user of the certificate can verify that it comes from you by checking with the certificate authority who signed it. Certificate by alimasykurm from the [Noun Project](https://thenounproject.com)***
***Instead of sharing a public key, you can share a certificate. The user of the certificate can verify that it comes from you by checking with the certificate authority who signed it. Certificate by alimasykurm from the [Noun Project](https://thenounproject.com)***
One big advantage of using X.509 certificates is that they can be shared between devices. You can create one certificate, upload it to IoT Hub, and use this for all your devices. Each device then just needs to know the private key to decrypt the messages it receives from IoT Hub.
@ -176,7 +176,7 @@ The certificate used by your device to encrypt messages it sends to the IoT Hub
The steps to generate an X.509 certificate are:
1. Create a public/private key pair. One of the most widely used algorithm to generate a public/private key pair is called [RSA](https://wikipedia.org/wiki/RSA_(cryptosystem)).
1. Create a public/private key pair. One of the most widely used algorithms to generate a public/private key pair is called [Rivest-Shamir-Adleman](https://wikipedia.org/wiki/RSA_(cryptosystem)) (RSA).
1. Submit the public key with associated data for signing, either by a CA, or by self-signing
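As an illustration of those two steps, here is a sketch using the Python `cryptography` package to create an RSA key pair and a self-signed certificate for testing. The common name and validity period are placeholder values:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Step 1 - create a public/private key pair using RSA
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 2 - self-sign a certificate containing the public key (testing only)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, 'my-test-device')])
certificate = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

# Write the certificate out in PEM format
print(certificate.public_bytes(serialization.Encoding.PEM).decode())
```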

@ -48,7 +48,9 @@ The next step is to connect your device to IoT Hub using the X.509 certificates.
This will connect using the X.509 certificate instead of a connection string.
1, RUn your code. Monitor the messages sent to IoT Hub, and send direct method requests as before. You will see the device connecting and sending soil moisture readings, as well as receiving direct method requests.
1. Delete the line with the `connection_string` variable.
1. Run your code. Monitor the messages sent to IoT Hub, and send direct method requests as before. You will see the device connecting and sending soil moisture readings, as well as receiving direct method requests.
> 💁 You can find this code in the [code/pi](code/pi) or [code/virtual-device](code/virtual-device) folder.

@ -158,7 +158,7 @@ Once data is flowing into your IoT Hub, you can write some serverless code to li
1. Use the Azurite app as a local storage emulator
Run your functions app to ensure it is receiving events from your GPS device. Make sure your IoT device is also running and sending GPS data.
1. Run your functions app to ensure it is receiving events from your GPS device. Make sure your IoT device is also running and sending GPS data.
```output
Python EventHub trigger processed an event: {"gps": {"lat": 47.73481, "lon": -122.25701}}
@ -258,7 +258,7 @@ The data will be saved as a JSON blob with the following format:
> pip install --upgrade pip
> ```
1. In the `__init__.py` file for the `iot_hub_trigger`, add the following import statements:
1. In the `__init__.py` file for the `iot-hub-trigger`, add the following import statements:
```python
import json

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -13,8 +13,8 @@ platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -13,12 +13,12 @@ platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi
seeed-studio/Seeed Arduino FS
seeed-studio/Seeed Arduino SFUD
seeed-studio/Seeed Arduino rpcUnified
seeed-studio/Seeed_Arduino_mbedtls
seeed-studio/Seeed Arduino RTC
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0
bblanchon/ArduinoJson @ 6.17.3
build_flags =
-w

@ -1,7 +1,20 @@
# Retail - using IoT to manage stock levels
The last stage for food before it reaches consumers is retail - the markets, greengrocers, supermarkets and stores that sell produce to consumers. These stores want to ensure they have produce out on shelves for consumers to see and buy.
One of the most manual, time-consuming tasks in food stores, especially in large supermarkets, is making sure the shelves are stocked, checking individual shelves to ensure any gaps are filled with produce from the store rooms.
IoT can help with this, using AI models running on IoT devices to count stock. These machine learning models don't just classify images, they can detect individual objects and count them.
In these 2 lessons you'll learn how to train image-based AI models to count stock, and run these models on IoT devices.
> 💁 These lessons will use some cloud resources. If you don't complete all the lessons in this project, make sure you [Clean up your project](../clean-up.md).
## Topics
1. [Train a stock detector](./lessons/1-train-stock-detector/README.md)
1. [Check stock from an IoT device](./lessons/2-check-stock-device/README.md)
## Credits
All the lessons were written with ♥️ by [Jim Bennett](https://GitHub.com/JimBobBennett)

@ -0,0 +1,33 @@
# Train a stock detector
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/37)
## Introduction
In this lesson you will learn about
In this lesson we'll cover:
* [Thing 1](#thing-1)
## Thing 1
---
## 🚀 Challenge
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/38)
## Review & Self Study
## Assignment
[](assignment.md)

@ -0,0 +1,9 @@
#
## Instructions
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| | | | |

@ -0,0 +1,9 @@
# Dummy File
This file acts as a placeholder for the `translations` folder. <br>
**Please remove this file after adding the first translation**
For the instructions, follow the directives in the [translations guide](https://github.com/microsoft/IoT-For-Beginners/blob/main/TRANSLATIONS.md) .
## THANK YOU
We truly appreciate your efforts!

@ -0,0 +1,33 @@
# Check stock from an IoT device
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/39)
## Introduction
In this lesson you will learn about
In this lesson we'll cover:
* [Thing 1](#thing-1)
## Thing 1
---
## 🚀 Challenge
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/40)
## Review & Self Study
## Assignment
[](assignment.md)

@ -0,0 +1,9 @@
#
## Instructions
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| | | | |

@ -0,0 +1,9 @@
# Dummy File
This file acts as a placeholder for the `translations` folder. <br>
**Please remove this file after adding the first translation**
For the instructions, follow the directives in the [translations guide](https://github.com/microsoft/IoT-For-Beginners/blob/main/TRANSLATIONS.md) .
## THANK YOU
We truly appreciate your efforts!

@ -1,8 +1,8 @@
# Consumer IoT - build a smart voice assistant
The fod has been grown, driven to a processing plant, sorted for quality, sold in the store and now it's time to cook! One of the core pieces of any kitchen is a timer. Initially these started as simple hour glasses - your food was cooked when all the sand trickled down into the bottom bulb. They then went clockwork, then electric.
The food has been grown, driven to a processing plant, sorted for quality, sold in the store and now it's time to cook! One of the core pieces of any kitchen is a timer. Initially these started as hourglasses - your food was cooked when all the sand trickled down into the bottom bulb. They then went clockwork, then electric.
The latest iterations are now part of our smart devices. In kitchens all throughout the world you'll head chefs shouting "Hey Siri - set a 10 minute timer", or "Alexa - cancel my bread timer". No longer do you have to walk back to the kitchen to check on a timer, you can do it from your phone, or a call out across the room.
The latest iterations are now part of our smart devices. In kitchens in homes all throughout the world you'll hear cooks shouting "Hey Siri - set a 10 minute timer", or "Alexa - cancel my bread timer". No longer do you have to walk back to the kitchen to check on a timer; you can do it from your phone, or with a call across the room.
In these 4 lessons you'll learn how to build a smart timer, using AI to recognize your voice, understand what you are asking for, and reply with information about your timer. You'll also add support for multiple languages.
@ -12,7 +12,7 @@ In these 4 lessons you'll learn how to build a smart timer, using AI to recogniz
1. [Recognize speech with an IoT device](./lessons/1-speech-recognition/README.md)
1. [Understand language](./lessons/2-language-understanding/README.md)
1. [Provide spoken feedback](./lessons/3-spoken-feedback/README.md)
1. [Set a timer and provide spoken feedback](./lessons/3-spoken-feedback/README.md)
1. [Support multiple languages](./lessons/4-multiple-language-support/README.md)
## Credits

@ -2,11 +2,13 @@
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
This video gives an overview of the Azure speech service, a topic that will be covered in this lesson:
[![How to get started using your Cognitive Services Speech resource from the Microsoft Azure YouTube channel](https://img.youtube.com/vi/iW0Fw0l3mrA/0.jpg)](https://www.youtube.com/watch?v=iW0Fw0l3mrA)
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/33)
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/41)
## Introduction
@ -113,7 +115,7 @@ Speech to text, or speech recognition, involves using AI to convert words in an
### Speech recognition models
To convert speech to text, samples from the audio signal are grouped together and fed into a machine learning model based around a Recurrent Neural network (RNN). This is a type of machine learning model that can use previous data to make a decision about incoming data. For example, the RNN could detect one block of audio samples as the sound 'Hel', and when it receives another that it thinks is the sound 'lo', it can combine this with the previous sound, see that 'Hello' is a valid word and select that as the outcome.
To convert speech to text, samples from the audio signal are grouped together and fed into a machine learning model based around a Recurrent Neural network (RNN). This is a type of machine learning model that can use previous data to make a decision about incoming data. For example, the RNN could detect one block of audio samples as the sound 'Hel', and when it receives another that it thinks is the sound 'lo', it can combine this with the previous sound, find that 'Hello' is a valid word and select that as the outcome.
ML models always accept data of the same size every time. The image classifier you built in an earlier lesson resizes images to a fixed size and processes them. The same is true of speech models: they have to process fixed-sized audio chunks. Speech models also need to be able to combine the outputs of multiple predictions to get the answer, allowing them to distinguish between 'Hi' and 'Highway', or 'flock' and 'floccinaucinihilipilification'.
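As a rough illustration of the fixed-size requirement, here is a minimal sketch (a hypothetical illustration, not part of the lesson code) that splits captured audio samples into equally sized chunks, padding the final chunk with silence. The chunk size here is an arbitrary choice:

```python
def chunk_audio(samples, chunk_size=4096):
    """Split a list of audio samples into fixed-size chunks, padding the last chunk with silence."""
    chunks = []
    for start in range(0, len(samples), chunk_size):
        chunk = samples[start:start + chunk_size]
        if len(chunk) < chunk_size:
            # Pad with silence so every chunk is the same size
            chunk = chunk + [0] * (chunk_size - len(chunk))
        chunks.append(chunk)
    return chunks
```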
@ -194,7 +196,7 @@ To use the results of the speech to text conversion, you need to send it to the
}
```
Where `<converted speech>` is the output from the speech to text call.
Where `<converted speech>` is the output from the speech to text call. You only need to send speech that has content; if the call returns an empty string, it can be ignored.
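For example, a minimal sketch of this check on the device - assuming, for illustration, that `text` holds the speech to text result and `device_client` is the connected `IoTHubDeviceClient` from earlier in this lesson:

```python
import json

from azure.iot.device import Message

# Only send the converted speech if it actually contains something
if len(text) > 0:
    message = Message(json.dumps({ 'speech': text }))
    device_client.send_message(message)
```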
1. Verify that messages are being sent by monitoring the Event Hub compatible endpoint using the `az iot hub monitor-events` command.
@ -204,13 +206,13 @@ To use the results of the speech to text conversion, you need to send it to the
## 🚀 Challenge
Speech recognition has been around for a long time, and is continuously improving. Research the current capabilities and see how these have evolved over time, including how accurate machine transcriptions are compared to human.
Speech recognition has been around for a long time, and is continuously improving. Research the current capabilities and compare how these have evolved over time, including how accurate machine transcriptions are compared to human ones.
What do you think the future holds for speech recognition?
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/34)
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/42)
## Review & Self Study

@ -10,14 +10,6 @@ from azure.iot.device import IoTHubDeviceClient, Message
from grove.factory import Factory
button = Factory.getButton('GPIO-HIGH', 5)
connection_string = '<connection_string>'
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
audio = pyaudio.PyAudio()
microphone_card_number = 1
speaker_card_number = 1
@ -52,6 +44,13 @@ def capture_audio():
api_key = '<key>'
location = '<location>'
language = '<language>'
connection_string = '<connection_string>'
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
def get_access_token():
headers = {

@ -190,7 +190,7 @@ You can capture audio from the microphone using Python code.
1. Run the code. Press the button and speak into the microphone. Release the button when you are done, and you will hear the recording.
You may see some ALSA errors when the PyAudio instance is created. This is due to configuration on the Pi for audio devices you don't have. You can ignore these errors.
You may get some ALSA errors when the PyAudio instance is created. This is due to configuration on the Pi for audio devices you don't have. You can ignore these errors.
```output
pi@raspberrypi:~/smart-timer $ python3 app.py
@ -200,7 +200,7 @@ You can capture audio from the microphone using Python code.
ALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
```
If you see the following error:
If you get the following error:
```output
OSError: [Errno -9997] Invalid sample rate

@ -91,7 +91,7 @@ The audio can be sent to the speech service using the REST API. To use the speec
print(text)
```
1. Run the code. Press the button and speak into the microphone. Release the button when you are done, and you will see the audio converted to text in the output.
1. Run the code. Press the button and speak into the microphone. Release the button when you are done, and the audio will be converted to text and printed to the console.
```output
pi@raspberrypi:~/smart-timer $ python3 app.py

@ -16,7 +16,7 @@ On Windows, Linux, and macOS, the speech services Python SDK can be used to list
pip install azure-cognitiveservices-speech
```
> ⚠️ If you see the following error:
> ⚠️ If you get the following error:
>
> ```output
> ERROR: Could not find a version that satisfies the requirement azure-cognitiveservices-speech (from versions: none)
@ -80,7 +80,7 @@ On Windows, Linux, and macOS, the speech services Python SDK can be used to list
time.sleep(1)
```
1. Run this app. Speak into your microphone and you will see the audio converted to text in the console.
1. Run this app. Speak into your microphone and the audio converted to text will be output to the console.
```output
(.venv) ➜ smart-timer python3 app.py

@ -6,28 +6,430 @@ Add a sketchnote if possible/appropriate
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/33)
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/43)
## Introduction
In this lesson you will learn about
In the last lesson you converted speech to text. For this to be used to program a smart timer, your code needs to understand what was said. You could assume the user will speak a fixed phrase, such as "Set a 3 minute timer", and parse that expression to get how long the timer should be, but this isn't very user-friendly. If a user were to say "Set a timer for 3 minutes", you or I would understand what they mean, but your code would not, as it would be expecting a fixed phrase.
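For example, a naive fixed-phrase parser might look like the following sketch (a hypothetical illustration, not part of the lesson code) - it handles "set a 3 minute timer" but fails as soon as the wording changes:

```python
import re

text = 'Set a 3 minute timer'

# Matches only the exact phrasing "set a <n> minute timer"
match = re.match(r'set a (\d+) minute timer', text.lower())
if match:
    minutes = int(match.group(1))
    print(f'Timer length: {minutes} minutes')
else:
    print('Sorry, I did not understand that')
```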
This is where language understanding comes in, using AI models to interpret text and return the details that are needed, for example being able to take both "Set a 3 minute timer" and "Set a timer for 3 minutes", and understand that a timer is required for 3 minutes.
In this lesson you will learn about language understanding models, how to create them, train them, and use them from your code.
In this lesson we'll cover:
* [Thing 1](#thing-1)
* [Language understanding](#language-understanding)
* [Create a language understanding model](#create-a-language-understanding-model)
* [Intents and entities](#intents-and-entities)
* [Use the language understanding model](#use-the-language-understanding-model)
## Language understanding
Humans have used language to communicate for hundreds of thousands of years. We communicate with words, sounds, or actions and understand what is said, both the meaning of the words, sounds or actions, but also their context. We understand sincerity and sarcasm, allowing the same words to mean different things depending on the tone of our voice.
✅ Think about some of the conversations you have had recently. How much of the conversation would be hard for a computer to understand because it needs context?
Language understanding, also called natural-language understanding, is part of a field of artificial intelligence called natural-language processing (or NLP), and deals with reading comprehension, trying to understand the details of words or sentences. If you use a voice assistant such as Alexa or Siri, you have used language understanding services. These are the behind-the-scenes AI services that convert "Alexa, play the latest album by Taylor Swift" into my daughter dancing around the living room to her favorite tunes.
> 💁 Computers, despite all their advances, still have a long way to go to truly understand text. When we refer to language understanding with computers, we don't mean anything anywhere near as advanced as human communication, instead we mean taking some words and extracting key details.
As humans, we understand language without really thinking about it. If I asked another human to "play the latest album by Taylor Swift" then they would instinctively know what I meant. For a computer, this is harder. It would have to take the words, converted from speech to text, and work out the following pieces of information:
* Music needs to be played
* The music is by the artist Taylor Swift
* The specific music is a whole album of multiple tracks in order
* Taylor Swift has many albums, so they need to be sorted by chronological order and the most recently published is the one required
✅ Think of some other sentences you have spoken when making requests, such as ordering coffee or asking a family member to pass you something. Try to break them down into the pieces of information a computer would need to extract to understand the sentence.
Language understanding models are AI models that are trained to extract certain details from language, and then are trained for specific tasks using transfer learning, in the same way you trained a Custom Vision model using a small set of images. You can take a model, then train it using the text you want it to understand.
## Create a language understanding model
![The LUIS logo](../../../images/luis-logo.png)
You can create language understanding models using LUIS, a language understanding service from Microsoft that is part of Cognitive Services.
### Task - create an authoring resource
To use LUIS, you need to create an authoring resource.
1. Use the following command to create an authoring resource in your `smart-timer` resource group:
```sh
az cognitiveservices account create --name smart-timer-luis-authoring \
--resource-group smart-timer \
--kind LUIS.Authoring \
--sku F0 \
--yes \
--location <location>
```
Replace `<location>` with the location you used when creating the Resource Group.
> ⚠️ LUIS isn't available in all regions, so if you get the following error:
>
> ```output
> InvalidApiSetId: The account type 'LUIS.Authoring' is either invalid or unavailable in given region.
> ```
>
> pick a different region.
This will create a free-tier LUIS authoring resource.
### Task - create a language understanding app
1. Open the LUIS portal at [luis.ai](https://luis.ai?WT.mc_id=academic-17441-jabenn) in your browser, and sign in with the same account you have been using for Azure.
1. Follow the instructions on the dialog to select your Azure subscription, then select the `smart-timer-luis-authoring` resource you have just created.
1. From the *Conversation apps* list, select the **New app** button to create a new application. Name the new app `smart-timer`, and set the *Culture* to your language.
> 💁 There is a field for a prediction resource. You can create a second resource just for prediction, but the free authoring resource allows 1,000 predictions a month which should be enough for development, so you can leave this blank.
1. Read through the guide that appears once you create the app to get an understanding of the steps you need to take to train the language understanding model. Close this guide when you are done.
## Intents and entities
Language understanding is based around *intents* and *entities*. Intents describe what the speaker wants to do, for example playing music, setting a timer, or ordering food. Entities are what the intent is referring to, such as the album, the length of the timer, or the type of food. Each sentence that the model interprets should have at least one intent, and optionally one or more entities.
Some examples:
| Sentence | Intent | Entities |
| --------------------------------------------------- | ---------------- | ------------------------------------------ |
| "Play the latest album by Taylor Swift" | *play music* | *the latest album by Taylor Swift* |
| "Set a 3 minute timer" | *set a timer* | *3 minutes* |
| "Cancel my timer" | *cancel a timer* | None |
| "Order 3 large pineapple pizzas and a caesar salad" | *order food* | *3 large pineapple pizzas*, *caesar salad* |
✅ With the sentences you thought about earlier, what would be the intent and any entities in that sentence?
To train LUIS, first you set the entities. These can be a fixed list of terms, or learned from the text. For example, you could provide a fixed list of food available from your menu, with variations (or synonyms) of each word, such as *egg plant* and *eggplant* as variations of *aubergine*. LUIS also has pre-built entities that can be used, such as numbers and locations.
For setting a timer, you could have one entity using the pre-built number entities for the time, and another for the units, such as minutes and seconds. Each unit would have multiple variations to cover the singular and plural forms - such as minute and minutes.
Once the entities are defined, you create intents. These are learned by the model based on example sentences that you provide (known as utterances). For example, for a *set timer* intent, you might provide the following sentences:
* `set a 1 second timer`
* `set a timer for 1 minute and 12 seconds`
* `set a timer for 3 minutes`
* `set a 9 minute 30 second timer`
You then tell LUIS what parts of these sentences map to the entities:
![The sentence set a timer for 1 minute and 12 seconds broken into entities](../../../images/sentence-as-intent-entities.png)
The sentence `set a timer for 1 minute and 12 seconds` has the intent of `set timer`. It also has 2 entities with 2 values each:
| | time | unit |
| ---------- | ---: | ------ |
| 1 minute | 1 | minute |
| 12 seconds | 12 | second |
To train a good model, you need a range of different example sentences to cover the many different ways someone might ask for the same thing.
> 💁 As with any AI model, the more data and the more accurate the data you use to train, the better the model.
✅ Think about the different ways you might ask the same thing and expect a human to understand.
### Task - add entities to the language understanding models
For the timer, you need to add 2 entities - one for the unit of time (minutes or seconds), and one for the number of minutes or seconds.
You can find instructions for using the LUIS portal in the [Quickstart: Build your app in LUIS portal documentation on Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/luis/luis-get-started-create-app?WT.mc_id=academic-17441-jabenn).
1. From the LUIS portal, select the *Entities* tab and add the *number* prebuilt entity by selecting the **Add prebuilt entity** button, then selecting *number* from the list.
1. Create a new entity for the time unit using the **Create** button. Name the entity `time unit` and set the type to *List*. Add values for `minute` and `second` to the *Normalized values* list, adding the singular and plural forms to the *synonyms* list. Press `return` after adding each synonym to add it to the list.
| Normalized value | Synonyms |
| ---------------- | --------------- |
| minute | minute, minutes |
| second | second, seconds |
### Task - add intents to the language understanding models
1. From the *Intents* tab, select the **Create** button to create a new intent. Name this intent `set timer`.
1. In the examples, enter different ways to set a timer using minutes, seconds, and minutes and seconds combined. Examples could be:
* `set a 1 second timer`
* `set a 4 minute timer`
* `set a four minute six second timer`
* `set a 9 minute 30 second timer`
* `set a timer for 1 minute and 12 seconds`
* `set a timer for 3 minutes`
* `set a timer for 3 minutes and 1 second`
* `set a timer for three minutes and one second`
* `set a timer for 1 minute and 1 second`
* `set a timer for 30 seconds`
* `set a timer for 1 second`
Mix up numbers as words and numerics so the model learns to handle both.
1. As you enter each example, LUIS will start detecting entities, and will underline and label any it finds.
![The examples with the numbers and time units underlined by LUIS](../../../images/luis-intent-examples.png)
### Task - train and test the model
1. Once the entities and intents are configured, you can train the model using the **Train** button on the top menu. Select this button, and the model should train in a few seconds. The button will be greyed out whilst training, and be re-enabled once done.
1. Select the **Test** button from the top menu to test the language understanding model. Enter text such as `set a timer for 5 minutes and 4 seconds` and press return. The sentence will appear in a box under the text box that you typed it into, and below that will be the *top intent*, or the intent that was detected with the highest probability. This should be `set timer`. The intent name will be followed by the probability that the intent detected was the right one.
1. Select the **Inspect** option to see a breakdown of the results. You will see the top-scoring intent with its percentage probability, along with lists of the entities detected.
1. Close the *Test* pane when you are done testing.
### Task - publish the model
To use this model from code, you need to publish it. When publishing from LUIS, you can publish to either a staging environment for testing, or a production environment for a full release. In this lesson, a staging environment is fine.
1. From the LUIS portal, select the **Publish** button from the top menu.
1. Make sure *Staging slot* is selected, then select **Done**. You will see a notification when the app is published.
### Task - test the published model
1. You can test this using curl. To build the curl command, you need three values - the endpoint, the application ID (App ID) and an API key. These can be accessed from the **MANAGE** tab that can be selected from the top menu.
1. From the *Settings* section, copy the App ID
1. From the *Azure Resources* section, select *Authoring Resource*, and copy the *Primary Key* and *Endpoint URL*
1. Run the following curl command in your command prompt or terminal:
```sh
curl "<endpoint url>/luis/prediction/v3.0/apps/<app id>/slots/staging/predict" \
--request GET \
--get \
--data "subscription-key=<primary key>" \
--data "verbose=false" \
--data "show-all-intents=true" \
--data-urlencode "query=<sentence>"
```
Replace `<endpoint url>` with the Endpoint URL from the *Azure Resources* section.
Replace `<app id>` with the App ID from the *Settings* section.
Replace `<primary key>` with the Primary Key from the *Azure Resources* section.
Replace `<sentence>` with the sentence you want to test with.
1. The output of this call will be a JSON document that details the query, the top intent, and a list of entities broken down by type.
```JSON
{
"query": "set a timer for 45 minutes and 12 seconds",
"prediction": {
"topIntent": "set timer",
"intents": {
"set timer": {
"score": 0.97031575
},
"None": {
"score": 0.02205793
}
},
"entities": {
"number": [
45,
12
],
"time-unit": [
[
"minute"
],
[
"second"
]
]
}
}
}
```
The JSON above came from querying with `set a timer for 45 minutes and 12 seconds`:
* The `set timer` was the top intent with a probability of 97%.
* Two *number* entities were detected, `45` and `12`.
* Two *time-unit* entities were detected, `minute` and `second`.
## Use the language understanding model
Once published, the LUIS model can be called from code. In the last lesson you sent the recognized speech to an IoT Hub, and you can use serverless code to respond to this and understand what was sent.
### Task - create a serverless functions app
1. Create an Azure Functions app called `smart-timer-trigger` (a sketch of the CLI commands for these steps follows this list).
1. Add an IoT Hub event trigger to this app called `speech-trigger`.
1. Set the Event Hub compatible endpoint connection string for your IoT Hub in the `local.settings.json` file, and use the key for that entry in the `function.json` file.
1. Use the Azurite app as a local storage emulator.
1. Run your functions app and your IoT device to ensure speech is arriving at the IoT Hub.
```output
Python EventHub trigger processed an event: {"speech": "Set a 3 minute timer."}
```
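If you need a reminder of these steps, the following is a sketch of one way to do them from the command line. It assumes the Azure Functions Core Tools and the Azure CLI are installed, and `<hub_name>` is a placeholder for your IoT Hub name; refer to the earlier lessons if your setup differs:

```sh
# Create the Functions app and the IoT Hub event trigger
func init smart-timer-trigger --python
cd smart-timer-trigger
func new --name speech-trigger --template "Azure Event Hub trigger"

# Get the Event Hub compatible endpoint connection string for local.settings.json
az iot hub connection-string show --default-eventhub \
                                  --output table \
                                  --hub-name <hub_name>
```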
### Task - use the language understanding model
1. The SDK for LUIS is available via a Pip package. Add the following line to the `requirements.txt` file to add the dependency on this package:
```sh
azure-cognitiveservices-language-luis
```
1. Make sure the VS Code terminal has the virtual environment activated, and run the following command to install the Pip packages:
```sh
pip install -r requirements.txt
```
1. Add new entries to the `local.settings.json` file for your LUIS API Key, Endpoint URL, and App ID from the **MANAGE** tab of the LUIS portal:
```JSON
"LUIS_KEY": "<primary key>",
"LUIS_ENDPOINT_URL": "<endpoint url>",
"LUIS_APP_ID": "<app id>"
```
Replace `<endpoint url>` with the Endpoint URL from the *Azure Resources* section of the **MANAGE** tab. This will be `https://<location>.api.cognitive.microsoft.com/`.
Replace `<app id>` with the App ID from the *Settings* section of the **MANAGE** tab.
Replace `<primary key>` with the Primary Key from the *Azure Resources* section of the **MANAGE** tab.
1. Add the following imports to the `__init__.py` file:
```python
import json
import os
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials
```
This imports some system libraries, as well as the libraries to interact with LUIS.
1. In the `main` method, before it loops through all the events, add the following code:
```python
luis_key = os.environ['LUIS_KEY']
endpoint_url = os.environ['LUIS_ENDPOINT_URL']
app_id = os.environ['LUIS_APP_ID']
credentials = CognitiveServicesCredentials(luis_key)
client = LUISRuntimeClient(endpoint=endpoint_url, credentials=credentials)
```
This loads the values you added to the `local.settings.json` file for your LUIS app, creates a credentials object with your API key, then creates a LUIS client object to interact with your LUIS app.
1. Predictions are requested from LUIS by sending a prediction request - a JSON document containing the text to predict. Create this with the following code inside the `for event in events` loop:
```python
event_body = json.loads(event.get_body().decode('utf-8'))
prediction_request = { 'query' : event_body['speech'] }
```
This code extracts the speech that was sent to the IoT Hub and uses it to build the prediction request.
1. This request can then be sent to LUIS, using the staging slot that your app was published to:
```python
prediction_response = client.prediction.get_slot_prediction(app_id, 'Staging', prediction_request)
```
1. The prediction response contains the top intent - the intent with the highest prediction score, along with the entities. If the top intent is `set timer`, then the entities can be read to get the time needed for the timer:
```python
if prediction_response.prediction.top_intent == 'set timer':
numbers = prediction_response.prediction.entities['number']
time_units = prediction_response.prediction.entities['time unit']
total_time = 0
```
The `number` entities will be an array of numbers. For example, if you said *"Set a four minute 17 second timer."*, then the `number` array will contain 2 integers - 4 and 17.
The `time unit` entities will be an array of arrays of strings, with each time unit as an array of strings inside the array. For example, if you said *"Set a four minute 17 second timer."*, then the `time unit` array will contain 2 arrays with single values each - `['minute']` and `['second']`.
The JSON version of these entities for *"Set a four minute 17 second timer."* is:
```json
{
"number": [4, 17],
"time unit": [
["minute"],
["second"]
]
}
```
This code also defines a variable to hold the total time for the timer in seconds. This will be populated by the values from the entities.
1. The entities aren't linked, but we can make some assumptions about them. They will be in the order spoken, so the position in the array can be used to determine which number matches to which time unit. For example:
* *"Set a 30 second timer"* - this will have one number, `30`, and one time unit, `second` so the single number will match the single time unit.
* *"Set a 2 minute and 30 second timer"* - this will have two numbers, `2` and `30`, and two time units, `minute` and `second` so the first number will be for the first time unit (2 minutes), and the second number for the second time unit (30 seconds).
The following code gets the count of items in the number entities, and uses that to extract the first item from each array, then the second and so on:
```python
for i in range(0, len(numbers)):
number = numbers[i]
time_unit = time_units[i][0]
```
For *"Set a four minute 17 second timer."*, this will loop twice, giving the following values:
| loop count | `number` | `time_unit` |
| ---------: | -------: | ----------- |
| 0 | 4 | minute |
| 1 | 17 | second |
1. Inside this loop, use the number and time unit to calculate the total time for the timer, adding 60 seconds for each minute, and the number of seconds for any seconds.
```python
if time_unit == 'minute':
total_time += number * 60
else:
total_time += number
```
1. Finally, outside this loop through the entities, log the total time for the timer:
```python
logging.info(f'Timer required for {total_time} seconds')
```
1. Run the function app and speak into your IoT device. You will see the total time for the timer in the function app output:
```output
[2021-06-16T01:38:33.316Z] Executing 'Functions.speech-trigger' (Reason='(null)', Id=39720c37-b9f1-47a9-b213-3650b4d0b034)
[2021-06-16T01:38:33.329Z] Trigger Details: PartionId: 0, Offset: 3144-3144, EnqueueTimeUtc: 2021-06-16T01:38:32.7970000Z-2021-06-16T01:38:32.7970000Z, SequenceNumber: 8-8, Count: 1
[2021-06-16T01:38:33.605Z] Python EventHub trigger processed an event: {"speech": "Set a four minute 17 second timer."}
[2021-06-16T01:38:35.076Z] Timer required for 257 seconds
[2021-06-16T01:38:35.128Z] Executed 'Functions.speech-trigger' (Succeeded, Id=39720c37-b9f1-47a9-b213-3650b4d0b034, Duration=1894ms)
```
> 💁 You can find this code in the [code/functions](code/functions) folder.
---
## 🚀 Challenge
There are many ways to request the same thing, such as setting a timer. Think of different ways to do this, and use them as examples in your LUIS app. Test these out, to see how well your model can cope with multiple ways to request a timer.
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/34)
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/44)
## Review & Self Study
* Read more about LUIS and its capabilities on the [Language Understanding (LUIS) documentation page on Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/luis/?WT.mc_id=academic-17441-jabenn)
* Read more about language understanding on the [Natural-language understanding page on Wikipedia](https://wikipedia.org/wiki/Natural-language_understanding)
## Assignment
[](assignment.md)
[Cancel the timer](assignment.md)

@ -1,9 +1,14 @@
#
# Cancel the timer
## Instructions
So far in this lesson you have trained a model to understand setting a timer. Another useful feature is cancelling a timer - maybe your bread is ready and can be taken out of the oven.
Add a new intent to your LUIS app to cancel the timer. It won't need any entities, but will need some example sentences. Handle this in your serverless code if it is the top intent, logging that the intent was recognized.
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| | | | |
| Add the cancel timer intent to the LUIS app | Was able to add the intent and train the model | Was able to add the intent but not train the model | Was unable to add the intent and train the model |
| Handle the intent in the serverless app | Was able to detect the intent as the top intent and log it | Was able to detect the intent as the top intent | Was unable to detect the intent as the top intent |

@ -0,0 +1,15 @@
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[2.*, 3.0.0)"
}
}

@ -0,0 +1,11 @@
{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "python",
"AzureWebJobsStorage": "UseDevelopmentStorage=true",
"IOT_HUB_CONNECTION_STRING": "<connection string>",
"LUIS_KEY": "<primary key>",
"LUIS_ENDPOINT_URL": "<endpoint url>",
"LUIS_APP_ID": "<app id>"
}
}

@ -0,0 +1,4 @@
# Do not include azure-functions-worker as it may conflict with the Azure Functions platform
azure-functions
azure-cognitiveservices-language-luis

@ -0,0 +1,43 @@
from typing import List
import logging
import azure.functions as func
import json
import os
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials
def main(events: List[func.EventHubEvent]):
luis_key = os.environ['LUIS_KEY']
endpoint_url = os.environ['LUIS_ENDPOINT_URL']
app_id = os.environ['LUIS_APP_ID']
credentials = CognitiveServicesCredentials(luis_key)
client = LUISRuntimeClient(endpoint=endpoint_url, credentials=credentials)
for event in events:
logging.info('Python EventHub trigger processed an event: %s',
event.get_body().decode('utf-8'))
event_body = json.loads(event.get_body().decode('utf-8'))
prediction_request = { 'query' : event_body['speech'] }
prediction_response = client.prediction.get_slot_prediction(app_id, 'Staging', prediction_request)
if prediction_response.prediction.top_intent == 'set timer':
numbers = prediction_response.prediction.entities['number']
time_units = prediction_response.prediction.entities['time unit']
total_time = 0
for i in range(0, len(numbers)):
number = numbers[i]
time_unit = time_units[i][0]
if time_unit == 'minute':
total_time += number * 60
else:
total_time += number
logging.info(f'Timer required for {total_time} seconds')

@ -0,0 +1,15 @@
{
"scriptFile": "__init__.py",
"bindings": [
{
"type": "eventHubTrigger",
"name": "events",
"direction": "in",
"eventHubName": "samples-workitems",
"connection": "IOT_HUB_CONNECTION_STRING",
"cardinality": "many",
"consumerGroup": "$Default",
"dataType": "binary"
}
]
}

@ -1,4 +1,4 @@
# Provide spoken feedback
# Set a timer and provide spoken feedback
Add a sketchnote if possible/appropriate
@ -6,17 +6,73 @@ Add a sketchnote if possible/appropriate
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/33)
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/45)
## Introduction
In this lesson you will learn about
Smart assistants are not one-way communication devices. You speak to them, and they respond:
"Alexa, set a 3 minute timer"
"Ok, your timer is set for 3 minutes"
In the last 2 lessons you learned how to convert speech to text, then extract a request to set a timer from that text. In this lesson you will learn how to set the timer on the IoT device, respond to the user with spoken words confirming their timer, and alert them when their timer is finished.
In this lesson we'll cover:
* [Thing 1](#thing-1)
* [Text to speech](#text-to-speech)
* [Set the timer](#set-the-timer)
* [Convert text to speech](#convert-text-to-speech)
## Text to speech
## Set the timer
The timer can be set by sending a command from the serverless code, instructing the IoT device to set the timer. This command will contain the time in seconds until the timer needs to go off.
### Task - set the timer using a command
1. In your serverless code, add code to send a direct method request to your IoT device
> ⚠️ You can refer to [the instructions for sending direct method requests in lesson 5 of the farm project if needed](../../../2-farm/lessons/5-migrate-application-to-the-cloud/README.md#send-direct-method-requests-from-serverless-code).
You will need to set up the connection string for the IoT Hub with the service policy (*NOT* the device) in your `local.settings.json` file and add the `azure-iot-hub` pip package to your `requirements.txt` file. The device ID can be extracted from the event. A sketch of putting these pieces together is shown after the code note below.
1. The direct method you send needs to be called `set-timer`, and will need to send the length of the timer as a JSON property called `time`. Use the following code to build the `CloudToDeviceMethod` using the `total_time` calculated from the data extracted by LUIS:
```python
payload = {
'time': total_time
}
direct_method = CloudToDeviceMethod(method_name='set-timer', payload=json.dumps(payload))
```
> 💁 You can find this code in the [code-command/functions](code-command/functions) folder.
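Putting these pieces together, a minimal sketch of sending the command might look like the following. The environment variable name is an assumption for illustration (use whatever key you added to `local.settings.json`), and `total_time` is the value calculated from the LUIS entities in the previous lesson:

```python
import json
import os

from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod

# The IoT Hub connection string using the service policy, from local.settings.json
registry_manager = IoTHubRegistryManager(os.environ['REGISTRY_MANAGER_CONNECTION_STRING'])

# The device ID comes from the IoT Hub metadata on the event
device_id = event.iothub_metadata['connection-device-id']

payload = {
    'time': total_time
}
direct_method = CloudToDeviceMethod(method_name='set-timer', payload=json.dumps(payload))

# Invoke the direct method on the device that sent the speech
registry_manager.invoke_device_method(device_id, direct_method)
```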
### Task - respond to the command on the IoT device
1. On your IoT device, respond to the command.
> ⚠️ You can refer to [the instructions for handling direct method requests from IoT devices in lesson 4 of the farm project if needed](../../../2-farm/lessons/4-migrate-your-plant-to-the-cloud#task---connect-your-iot-device-to-the-cloud).
1. Work through the relevant guide to set a timer for the required time:
* [Arduino - Wio Terminal](wio-terminal-set-timer.md)
* [Single-board computer - Raspberry Pi/Virtual IoT device](single-board-computer-set-timer.md)
> 💁 You can find this code in the [code-command/wio-terminal](code-command/wio-terminal), [code-command/virtual-device](code-command/virtual-device), or [code-command/pi](code-command/pi) folder.
## Convert text to speech
The same speech service you used to convert speech to text can be used to convert text back into speech, and this can be played through a speaker on your IoT device.
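For example, on the virtual IoT device the speech SDK installed earlier can synthesize speech and play it through the default speaker. This is a minimal sketch, assuming `api_key` and `location` hold your speech resource key and location; the device-specific guides below cover the full details for each platform:

```python
import azure.cognitiveservices.speech as speechsdk

# Configure the speech service with the same key and location used for speech to text
speech_config = speechsdk.SpeechConfig(subscription=api_key, region=location)

# With no audio config, the synthesized audio is played through the default speaker
speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
speech_synthesizer.speak_text_async('Your timer has been set for 3 minutes').get()
```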
### Task - convert text to speech
Work through the relevant guide to convert text to speech using your IoT device:
* [Arduino - Wio Terminal](wio-terminal-text-to-speech.md)
* [Single-board computer - Raspberry Pi](pi-text-to-speech.md)
* [Single-board computer - Virtual device](virtual-device-text-to-speech.md)
---
@ -24,7 +80,7 @@ In this lesson we'll cover:
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/34)
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/46)
## Review & Self Study

@ -6,7 +6,7 @@ Add a sketchnote if possible/appropriate
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/33)
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/47)
## Introduction
@ -24,7 +24,7 @@ In this lesson we'll cover:
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/34)
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/48)
## Review & Self Study

@ -10,5 +10,5 @@ to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simpl
instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
For more information read the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

@ -83,6 +83,12 @@ We have two choices of IoT hardware to use for the projects depending on persona
| 16 | [Manufacturing](./4-manufacturing) | Check fruit quality from an IoT device | Learn about using your fruit quality detector from an IoT device | [Check fruit quality from an IoT device](./4-manufacturing/lessons/2-check-fruit-from-device/README.md) |
| 17 | [Manufacturing](./4-manufacturing) | Run your fruit detector on the edge | Learn about running your fruit detector on an IoT device on the edge | [Run your fruit detector on the edge](./4-manufacturing/lessons/3-run-fruit-detector-edge/README.md) |
| 18 | [Manufacturing](./4-manufacturing) | Trigger fruit quality detection from a sensor | Learn about triggering fruit quality detection from a sensor | [Trigger fruit quality detection from a sensor](./4-manufacturing/lessons/4-trigger-fruit-detector/README.md) |
| 19 | [Retail](./5-retail) | Train a stock detector | Learn how to use object detection to train a stock detector to count stock in a shop | [Train a stock detector](./5-retail/lessons/1-train-stock-detector/README.md) |
| 20 | [Retail](./5-retail) | Check stock from an IoT device | Learn how to check stock from an IoT device using an object detection model | [Check stock from an IoT device](./5-retail/lessons/2-check-stock-device/README.md) |
| 21 | [Consumer](./6-consumer) | Recognize speech with an IoT device | Learn how to recognize speech from an IoT device to build a smart timer | [Recognize speech with an IoT device](./6-consumer/lessons/1-speech-recognition/README.md) |
| 22 | [Consumer](./6-consumer) | Understand language | Learn how to understand sentences spoken to an IoT device | [Understand language](./6-consumer/lessons/2-language-understanding/README.md) |
| 23 | [Consumer](./6-consumer) | Set a timer and provide spoken feedback | Learn how to set a timer on an IoT device and give spoken feedback on when the timer is set and when it finishes | [Set a timer and provide spoken feedback](./6-consumer/lessons/3-spoken-feedback/README.md) |
| 24 | [Consumer](./6-consumer) | Support multiple languages | Learn how to support multiple languages, both being spoken to and the responses from your smart timer | [Support multiple languages](./6-consumer/lessons/4-multiple-language-support/README.md) |
## Offline access

@ -57,7 +57,7 @@ These are specific to using the Raspberry Pi, and are not relevant to using the
* Headphones or other speaker with a 3.5mm jack, or a JST speaker such as:
* [Mono Enclosed Speaker - 2W 6 Ohm](https://www.seeedstudio.com/Mono-Enclosed-Speaker-2W-6-Ohm-p-2832.html)
* [USB Speakerphone](https://www.amazon.com/USB-Speakerphone-Conference-Business-Microphones/dp/B07Q3D7F8S/ref=sr_1_1?dchild=1&keywords=m0&qid=1614647389&sr=8-1)
* [Grove Sunlight sensor](https://www.seeedstudio.com/Grove-Sunlight-Sensor.html)
* [Grove Light sensor](https://www.seeedstudio.com/Grove-Light-Sensor-v1-2-LS06-S-phototransistor.html)
* [Grove button](https://www.seeedstudio.com/Grove-Button.html)
## Sensors and actuators

Binary file not shown.

Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


@ -904,7 +904,7 @@
},
{
"answerText": "A Functions App, a Storage Account, and Application Settings",
"isCorrect": "frue"
"isCorrect": "true"
}
]
}
