* Adding content

* Update en.json

* Update README.md

* Update TRANSLATIONS.md

* Adding lesson templates

* Fixing code files that contained each other's code

* Update README.md

* Adding lesson 16

* Adding virtual camera

* Adding Wio Terminal camera capture

* Adding wio terminal code

* Adding SBC classification to lesson 16

* Adding challenge, review and assignment

* Adding images and using new Azure icons

* Update README.md

* Update iot-reference-architecture.png

* Adding structure for JulyOT links

* Removing icons

* Sketchnotes!

* Create lesson-1.png

* Starting on lesson 18

* Updated sketch

* Adding virtual distance sensor

* Adding Wio Terminal image classification

* Update README.md

* Adding structure for project 6 and wio terminal distance sensor

* Adding some of the smart timer stuff

* Updating sketchnotes

* Adding virtual device speech to text

* Adding chapter 21

* Language tweaks

* Lesson 22 stuff

* Update en.json

* Bumping seeed libraries

* Adding functions lab to lesson 22

* Almost done with LUIS

* Update README.md
pull/92/head
Jim Bennett 3 years ago committed by GitHub
parent 2367ac901d
commit 7ec6d1aa8f

@ -1,10 +1,12 @@
{
"cSpell.words": [
"ADCs",
"Alexa",
"Geospatial",
"Kbps",
"Mbps",
"Seeed",
"Siri",
"Twilio",
"UART",
"UDID",

@ -44,7 +44,7 @@ The **T** in IoT stands for **Things** - devices that interact with the physical
Devices for production or commercial use, such as consumer fitness trackers, or industrial machine controllers, are usually custom-made. They use custom circuit boards, maybe even custom processors, designed to meet the needs of a particular task, whether that's being small enough to fit on a wrist, or rugged enough to work in a high temperature, high stress or high vibration factory environment.
As a developer either learning about IoT or creating a device prototype, you'll need to start with a developer kit. These are general-purpose IoT devices designed for developers to use, often with features that you wouldn't see on a production device, such as a set of external pins to connect sensors or actuators to, hardware to support debugging, or additional resources that would add unnecessary cost when doing a large manufacturing run.
As a developer either learning about IoT or creating a device prototype, you'll need to start with a developer kit. These are general-purpose IoT devices designed for developers to use, often with features that you wouldn't have on a production device, such as a set of external pins to connect sensors or actuators to, hardware to support debugging, or additional resources that would add unnecessary cost when doing a large manufacturing run.
These developer kits usually fall into two categories - microcontrollers and single-board computers. These will be introduced here, and we'll go into more detail in the next lesson.

@ -235,7 +235,7 @@ Create the Hello World app.
> 💁 You need to explicitly call `python3` to run this code just in case you have Python 2 installed in addition to Python 3 (the latest version). If you have Python 2 installed then calling `python` will use Python 2 instead of Python 3
You should see the following output:
The following output will appear in the terminal:
```output
pi@raspberrypi:~/nightlight $ python3 app.py

@ -67,13 +67,13 @@ Configure a Python virtual environment and install the pip packages for CounterF
source ./.venv/bin/activate
```
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to see this:
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to get the version:
```sh
python --version
```
You should see the following:
The output should contain the following:
```output
(.venv) ➜ nightlight python --version
@ -122,7 +122,7 @@ Create a Python application to print `"Hello World"` to the console.
> 💁 If your terminal returns `command not found` on macOS it means VS Code has not been added to your PATH. You can add VS Code to your PATH by following the instructions in the [Launching from the command line section of the VS Code documentation](https://code.visualstudio.com/docs/setup/mac?WT.mc_id=academic-17441-jabenn#_launching-from-the-command-line) and run the command afterwards. VS Code is installed to your PATH by default on Windows and Linux.
1. When VS Code launches, it will activate the Python virtual environment. You will see this in the bottom status bar:
1. When VS Code launches, it will activate the Python virtual environment. The selected virtual environment will appear in the bottom status bar:
![VS Code showing the selected virtual environment](../../../images/vscode-virtual-env.png)
@ -136,9 +136,9 @@ Create a Python application to print `"Hello World"` to the console.
(.venv) ➜ nightlight
```
If you don't see `.venv` as a prefix on the prompt, the virtual environment is not active in the terminal.
If you don't have `.venv` as a prefix on the prompt, the virtual environment is not active in the terminal.
1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, and you will see the call to activate this in the terminal, as well as having the name of the virtual environment (`.venv`) in the prompt:
1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal*, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, and the call to activate this will appear in the terminal. The prompt will also have the name of the virtual environment (`.venv`):
```output
➜ nightlight source .venv/bin/activate
@ -159,7 +159,7 @@ Create a Python application to print `"Hello World"` to the console.
python app.py
```
You should see the following output:
The following will be in the output:
```output
(.venv) ➜ nightlight python app.py
@ -184,7 +184,7 @@ As a second 'Hello World' step, you will run the CounterFit app and connect your
![The Counter Fit app running in a browser](../../../images/counterfit-first-run.png)
You will see it marked as *Disconnected*, with the LED in the top-right corner turned off.
It will be marked as *Disconnected*, with the LED in the top-right corner turned off.
1. Add the following code to the top of `app.py`:
@ -201,7 +201,7 @@ As a second 'Hello World' step, you will run the CounterFit app and connect your
![VS Code Create a new integrated terminal button](../../../images/vscode-new-terminal.png)
1. In this new terminal, run the `app.py` file as before. You will see the status of CounterFit change to **Connected** and the LED light up.
1. In this new terminal, run the `app.py` file as before. The status of CounterFit will change to **Connected** and the LED will light up.
![Counter Fit showing as connected](../../../images/counterfit-connected.png)

@ -40,7 +40,7 @@ Create the PlatformIO project.
1. Launch VS Code
1. You should see the PlatformIO icon on the side menu bar:
1. The PlatformIO icon will be on the side menu bar:
![The Platform IO menu option](../../../images/vscode-platformio-menu.png)
@ -83,7 +83,7 @@ The VS Code explorer will show a number of files and folders created by the Plat
#### Files
* `main.cpp` - this file in the `src` folder contains the entry point for your application. If you open the file, you will see the following:
* `main.cpp` - this file in the `src` folder contains the entry point for your application. Open this file, and it will contain the following code:
```cpp
#include <Arduino.h>
@ -101,7 +101,7 @@ The VS Code explorer will show a number of files and folders created by the Plat
* `.gitignore` - this file lists the files and directories to be ignored when adding your code to git source code control, such as uploading to a repository on GitHub.
* `platformio.ini` - this file contains configuration for your device and app. If you open this file, you will see the following:
* `platformio.ini` - this file contains configuration for your device and app. Open this file, and it will contain the following code:
```ini
[env:seeed_wio_terminal]
@ -166,7 +166,7 @@ Write the Hello World app.
1. The code will be compiled and uploaded to the Wio Terminal
> 💁 If you are using macOS you will see a notification about a *DISK NOT EJECTED PROPERLY*. This is because the Wio Terminal gets mounted as a drive as part of the flashing process, and it is disconnected when the compiled code is written to the device. You can ignore this notification.
> 💁 If you are using macOS, a notification about a *DISK NOT EJECTED PROPERLY* will appear. This is because the Wio Terminal gets mounted as a drive as part of the flashing process, and it is disconnected when the compiled code is written to the device. You can ignore this notification.
⚠️ If you get errors about the upload port being unavailable, first make sure you have the Wio Terminal connected to your computer, and switched on using the switch on the left hand side of the screen. The green light on the bottom should be on. If you still get the error, pull the on/off switch down twice in quick succession to force the Wio Terminal into bootloader mode and try the upload again.
@ -191,7 +191,7 @@ PlatformIO has a Serial Monitor that can monitor data sent over the USB cable fr
Hello World
```
You will see `Hello World` appear every 5 seconds.
`Hello World` will print to the serial monitor every 5 seconds.
> 💁 You can find this code in the [code/wio-terminal](code/wio-terminal) folder.

@ -100,7 +100,7 @@ The faster the clock cycle, the more instructions that can be processed each sec
Microcontrollers have much lower clock speeds than desktop or laptop computers, or even most smartphones. The Wio Terminal for example has a CPU that runs at 120MHz or 120,000,000 cycles per second.
✅ An average PC or Mac has a CPU with multiple cores running at multiple GigaHertz, meaning the clock ticks billions of times a second. Research the clock speed of your computer and see how many times faster it is than the Wio terminal.
✅ An average PC or Mac has a CPU with multiple cores running at multiple GigaHertz, meaning the clock ticks billions of times a second. Research the clock speed of your computer and compare how many times faster it is than the Wio terminal.
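As a rough worked example (a sketch only, assuming a laptop CPU running at 3GHz - substitute the clock speed of your own machine):
```python
# Assumed example: a laptop CPU running at 3GHz (3,000,000,000 cycles per second)
laptop_clock_hz = 3_000_000_000
wio_terminal_clock_hz = 120_000_000  # 120MHz

# How many times the laptop clock ticks for every tick of the Wio Terminal clock
print(laptop_clock_hz / wio_terminal_clock_hz)  # 25.0
```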
Each clock cycle draws power and generates heat. The faster the ticks, the more power consumed and more heat generated. PCs have heat sinks and fans to remove heat, without which they would overheat and shut down within seconds. Microcontrollers often have neither as they run much cooler and therefore much slower. PCs run off mains power or large batteries for a few hours; microcontrollers can run for days, months, or even years off small batteries. Microcontrollers can also have cores that run at different speeds, switching to slower low power cores when the demand on the CPU is low to reduce power consumption.
@ -112,7 +112,7 @@ Each clock cycle draws power and generates heat. The faster the ticks, the more
Investigate the Wio Terminal.
If you are using a Wio Terminal for these lessons, see if you can find the CPU. Find the *Hardware Overview* section of the [Wio Terminal product page](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) for a picture of the internals, and see if you can see the CPU through the clear plastic window on the back.
If you are using a Wio Terminal for these lessons, try to find the CPU. Find the *Hardware Overview* section of the [Wio Terminal product page](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) for a picture of the internals, and try to find the CPU through the clear plastic window on the back.
### Memory
@ -150,7 +150,7 @@ Microcontrollers need input and output (I/O) connections to read data from senso
Investigate the Wio Terminal.
If you are using a Wio Terminal for these lessons, find the GPIO pins. Find the *Pinout diagram* section of the [Wio Terminal product page](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) to see which pins are which. The Wio Terminal comes with a sticker you can mount on the back with pin numbers, so add this now if you haven't already.
If you are using a Wio Terminal for these lessons, find the GPIO pins. Find the *Pinout diagram* section of the [Wio Terminal product page](https://www.seeedstudio.com/Wio-Terminal-p-4509.html) to learn which pins are which. The Wio Terminal comes with a sticker you can mount on the back with pin numbers, so add this now if you haven't already.
### Physical size
@ -198,7 +198,7 @@ There is a large ecosystem of third-party Arduino libraries that allow you to ad
Investigate the Wio Terminal.
If you are using a Wio Terminal for these lessons, re-read the code you wrote in the last lesson. Find the `setup` and `loop` function. Monitor the serial output to see the loop function being called repeatedly. Try adding code to the `setup` function to write to the serial port and see this code is only called once each time you reboot. Try rebooting your device with the power switch on the side to see this called each time the device reboots.
If you are using a Wio Terminal for these lessons, re-read the code you wrote in the last lesson. Find the `setup` and `loop` functions. Monitor the serial output for the loop function being called repeatedly. Try adding code to the `setup` function to write to the serial port and observe that this code is only called once each time you reboot. Try rebooting your device with the power switch on the side to show this is called each time the device reboots.
## Deeper dive into single-board computers

@ -93,7 +93,7 @@ Program the nightlight.
python3 app.py
```
You should see light values being output to the console.
Light values will be output to the console.
```output
pi@raspberrypi:~/nightlight $ python3 app.py

@ -89,7 +89,7 @@ Program the device.
python3 app.py
```
You should see sunlight values being output to the console. Cover and uncover the sunlight sensor to see the values change:
Sunlight values will be output to the console. Cover and uncover the sunlight sensor, and the values will change:
```output
pi@raspberrypi:~/nightlight $ python3 app.py

@ -91,7 +91,7 @@ Program the nightlight.
python3 app.py
```
You should see light values being output to the console.
Light values will be output to the console.
```output
(.venv) ➜ GroveTest python3 app.py
@ -101,7 +101,7 @@ Program the nightlight.
Light level: 253
```
1. Change the *Value* or the *Random* settings to vary the light level above and below 300. You will see the LED turn on and off.
1. Change the *Value* or the *Random* settings to vary the light level above and below 300. The LED will turn on and off.
![The LED in the CounterFit app turning on and off as the light level changes](../../../images/virtual-device-running-assignment-1-1.gif)

@ -87,7 +87,7 @@ Program the device.
python3 app.py
```
You should see light values being output to the console. Initially this value will be 0.
Light values will be output to the console. Initially this value will be 0.
1. From the CounterFit app, change the value of the light sensor that will be read by the app. You can do this in one of two ways:
@ -95,7 +95,7 @@ Program the device.
* Check the *Random* checkbox, and enter a *Min* and *Max* value, then select the **Set** button. Every time the sensor reads a value, it will read a random number between *Min* and *Max*.
You should see the values you set appearing in the console. Change the *Value* or the *Random* settings to see the value change.
The values you set will be output to the console. Change the *Value* or the *Random* settings to make the value change.
```output
(.venv) ➜ GroveTest python3 app.py

@ -83,7 +83,7 @@ Program the nightlight.
1. Reconnect the Wio Terminal to your computer, and upload the new code as you did before.
1. Connect the Serial Monitor. You should see light values being output to the terminal.
1. Connect the Serial Monitor. Light values will be output to the terminal.
```output
> Executing task: platformio device monitor <

@ -40,7 +40,7 @@ Program the device.
Serial.println(light);
```
This code reads an analog value from the `WIO_LIGHT` pin. This reads a value from 0-1,023 from the on-board light sensor. This value is then sent to the serial port so you can see it in the Serial Monitor when this code is running. `Serial.print` writes the text without a new line on the end, so each line will start with `Light value:` and end with the actual light value.
This code reads an analog value from the `WIO_LIGHT` pin. This reads a value from 0-1,023 from the on-board light sensor. This value is then sent to the serial port so you can read it in the Serial Monitor when this code is running. `Serial.print` writes the text without a new line on the end, so each line will start with `Light value:` and end with the actual light value.
1. Add a small delay of one second (1,000ms) at the end of the `loop` as the light levels don't need to be checked continuously. A delay reduces the power consumption of the device.
@ -50,7 +50,7 @@ Program the device.
1. Reconnect the Wio Terminal to your computer, and upload the new code as you did before.
1. Connect the Serial Monitor. You should see light values being output to the terminal. Cover and uncover the light sensor on the back of the Wio Terminal to see the values change.
1. Connect the Serial Monitor. Light values will be output to the terminal. Cover and uncover the light sensor on the back of the Wio Terminal, and the values will change.
```output
> Executing task: platformio device monitor <

@ -88,7 +88,7 @@ Messages can be sent with a quality of service (QoS), which determines the guara
Although the name is Message Queueing (initials in MQTT), it doesn't actually support message queues. This means that if a client disconnects, then reconnects it won't receive messages sent during the disconnection, except for those messages that it had already started to process using the QoS process. Messages can have a retained flag set on them. If this is set, the MQTT broker will store the last message sent on a topic with this flag, and send this to any clients who later subscribe to the topic. This way, the clients will always get the latest message.
MQTT also supports a keep alive function that checks to see if the connection is still alive during long gaps between messages.
MQTT also supports a keep alive function that checks if the connection is still alive during long gaps between messages.
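As a hedged illustration of retained messages and the keep alive interval, here is a minimal sketch using the paho-mqtt Python package (the package is assumed, as are the client name, broker, and topic - they are placeholders, not values from this lesson):
```python
import paho.mqtt.client as mqtt

# Placeholder client name - use your own unique value
client = mqtt.Client('nightlight_example_client')

# keepalive (in seconds) is how often the client pings the broker
# when no other messages are being exchanged
client.connect('test.mosquitto.org', 1883, keepalive=60)
client.loop_start()

# retain=True asks the broker to store this message and deliver it to any
# client that subscribes to the topic later, so late subscribers still
# receive the most recent reading
client.publish('example/nightlight/telemetry', '{"light": 400}', qos=1, retain=True)
```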
> 🦟 [Mosquitto from the Eclipse Foundation](https://mosquitto.org) has a free MQTT broker you can run yourself to experiment with MQTT, along with a public MQTT broker you can use to test your code, hosted at [test.mosquitto.org](https://test.mosquitto.org).
@ -198,13 +198,13 @@ Configure a Python virtual environment and install the MQTT pip packages.
source ./.venv/bin/activate
```
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to see this:
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to get this version:
```sh
python --version
```
You should see the following:
The output will be similar to the following:
```output
(.venv) ➜ nightlight-server python --version
@ -249,7 +249,7 @@ Write the server code.
code .
```
1. When VS Code launches, it will activate the Python virtual environment. You will see this in the bottom status bar:
1. When VS Code launches, it will activate the Python virtual environment. This will be reported in the bottom status bar:
![VS Code showing the selected virtual environment](../../../images/vscode-virtual-env.png)
@ -257,7 +257,7 @@ Write the server code.
![VS Code Kill the active terminal instance button](../../../images/vscode-kill-terminal.png)
1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, and you will see the call to activate this in the terminal, as well as having the name of the virtual environment (`.venv`) in the prompt:
1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal*, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, with the call to activate this appearing in the terminal. The name of the virtual environment (`.venv`) will also be in the prompt:
```output
➜ nightlight source .venv/bin/activate
@ -311,7 +311,7 @@ Write the server code.
The app will start listening to messages from the IoT device.
1. Make sure your device is running and sending telemetry messages. Adjust the light levels detected by your physical or virtual device. You will see messages being received in the terminal.
1. Make sure your device is running and sending telemetry messages. Adjust the light levels detected by your physical or virtual device. Messages being received will be printed to the terminal.
```output
(.venv) ➜ nightlight-server python app.py
@ -319,7 +319,7 @@ Write the server code.
Message received: {'light': 400}
```
The app.py file in the nightlight virtual environment has to be running for the app.py file in the nightlight-server virtual environment to recieve the messages being sent.
The app.py file in the nightlight virtual environment has to be running for the app.py file in the nightlight-server virtual environment to receive the messages being sent.
> 💁 You can find this code in the [code-server/server](code-server/server) folder.
@ -382,7 +382,7 @@ The next step for our Internet controlled nightlight is for the server code to s
1. Run the code as before
1. Adjust the light levels detected by your physical or virtual device. You will see messages being received and commands being sent in the terminal:
1. Adjust the light levels detected by your physical or virtual device. Messages being received and commands being sent will be written to the terminal:
```output
(.venv) ➜ nightlight-server python app.py

@ -15,8 +15,8 @@ framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -46,7 +46,7 @@ Subscribe to commands.
1. Run the code in the same way as you ran the code from the previous part of the assignment. If you are using a virtual IoT device, then make sure the CounterFit app is running and the light sensor and LED have been created on the correct pins.
1. Adjust the light levels detected by your physical or virtual device. You will see messages being received and commands being sent in the terminal. You will also see the LED is being turned on and off depending on the light level.
1. Adjust the light levels detected by your physical or virtual device. Messages being received and commands being sent will be written to the terminal. The LED will also be turned on and off depending on the light level.
> 💁 You can find this code in the [code-commands/virtual-device](code-commands/virtual-device) folder or the [code-commands/pi](code-commands/pi) folder.

@ -24,8 +24,8 @@ Install the Arduino libraries.
```ini
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -16,8 +16,8 @@ lib_deps =
seeed-studio/Grove Temperature And Humidity Sensor @ 1.0.1
knolleary/PubSubClient @ 2.8
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -15,8 +15,8 @@ framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -254,12 +254,12 @@ You are now ready to create the event trigger.
1. From the VS Code terminal run the following command from inside the `soil-moisture-trigger` folder:
```sh
func new --name iot_hub_trigger --template "Azure Event Hub trigger"
func new --name iot-hub-trigger --template "Azure Event Hub trigger"
```
This creates a new Function called `iot_hub_trigger`. The trigger will connect to the Event Hub compatible endpoint on the IoT Hub, so you can use an event hub trigger. There is no specific IoT Hub trigger.
This creates a new Function called `iot-hub-trigger`. The trigger will connect to the Event Hub compatible endpoint on the IoT Hub, so you can use an event hub trigger. There is no specific IoT Hub trigger.
This will create a folder inside the `soil-moisture-trigger` folder called `iot_hub_trigger` that contains this function. This folder will have the following files inside it:
This will create a folder inside the `soil-moisture-trigger` folder called `iot-hub-trigger` that contains this function. This folder will have the following files inside it:
* `__init__.py` - this is the Python code file that contains the trigger, using the standard Python file name convention to turn this folder into a Python module.
@ -313,7 +313,7 @@ This will create a folder inside the `soil-moisture-trigger` folder called `iot_
func start
```
The Functions app will start up, and will discover the `iot_hub_trigger` function. It will then process any events that have already been sent to the IoT Hub in the past day.
The Functions app will start up, and will discover the `iot-hub-trigger` function. It will then process any events that have already been sent to the IoT Hub in the past day.
```output
(.venv) ➜ soil-moisture-trigger func start
@ -325,23 +325,23 @@ This will create a folder inside the `soil-moisture-trigger` folder called `iot_
Functions:
iot_hub_trigger: eventHubTrigger
iot-hub-trigger: eventHubTrigger
For detailed output, run func with --verbose flag.
[2021-05-05T02:44:07.517Z] Worker process started and initialized.
[2021-05-05T02:44:09.202Z] Executing 'Functions.iot_hub_trigger' (Reason='(null)', Id=802803a5-eae9-4401-a1f4-176631456ce4)
[2021-05-05T02:44:09.202Z] Executing 'Functions.iot-hub-trigger' (Reason='(null)', Id=802803a5-eae9-4401-a1f4-176631456ce4)
[2021-05-05T02:44:09.205Z] Trigger Details: PartionId: 0, Offset: 1011240-1011632, EnqueueTimeUtc: 2021-05-04T19:04:04.2030000Z-2021-05-04T19:04:04.3900000Z, SequenceNumber: 2546-2547, Count: 2
[2021-05-05T02:44:09.352Z] Python EventHub trigger processed an event: {"soil_moisture":628}
[2021-05-05T02:44:09.354Z] Python EventHub trigger processed an event: {"soil_moisture":624}
[2021-05-05T02:44:09.395Z] Executed 'Functions.iot_hub_trigger' (Succeeded, Id=802803a5-eae9-4401-a1f4-176631456ce4, Duration=245ms)
[2021-05-05T02:44:09.395Z] Executed 'Functions.iot-hub-trigger' (Succeeded, Id=802803a5-eae9-4401-a1f4-176631456ce4, Duration=245ms)
```
Each call to the function will be surrounded by a `Executing 'Functions.iot_hub_trigger'`/`Executed 'Functions.iot_hub_trigger'` block in the output, so you can how many messages were processed in each function call.
Each call to the function will be surrounded by an `Executing 'Functions.iot-hub-trigger'`/`Executed 'Functions.iot-hub-trigger'` block in the output, so you can see how many messages were processed in each function call.
> If you get the following error:
```output
The listener for function 'Functions.iot_hub_trigger' was unable to start. Microsoft.WindowsAzure.Storage: Connection refused. System.Net.Http: Connection refused. System.Private.CoreLib: Connection refused.
The listener for function 'Functions.iot-hub-trigger' was unable to start. Microsoft.WindowsAzure.Storage: Connection refused. System.Net.Http: Connection refused. System.Private.CoreLib: Connection refused.
```
Then check Azurite is running and you have set the `AzureWebJobsStorage` in the `local.settings.json` file to `UseDevelopmentStorage=true`.
@ -561,7 +561,7 @@ Deployment successful.
Remote build succeeded!
Syncing triggers...
Functions in soil-moisture-sensor:
iot_hub_trigger - [eventHubTrigger]
iot-hub-trigger - [eventHubTrigger]
```
Make sure your IoT device is running. Change the moisture levels by adjusting the soil moisture, or moving the sensor in and out of the soil. You will see the relay turn on and off as the soil moisture changes.

@ -35,7 +35,7 @@ Some hints:
relay_on: [GET,POST] http://localhost:7071/api/relay_on
iot_hub_trigger: eventHubTrigger
iot-hub-trigger: eventHubTrigger
```
Paste the URL into your browser and hit `return`, or `Ctrl+click` (`Cmd+click` on macOS) the link in the terminal window in VS Code to open it in your default browser. This will run the trigger.

@ -158,7 +158,7 @@ Once data is flowing into your IoT Hub, you can write some serverless code to li
1. Use the Azurite app as a local storage emulator
Run your functions app to ensure it is receiving events from your GPS device. Make sure your IoT device is also running and sending GPS data.
1. Run your functions app to ensure it is receiving events from your GPS device. Make sure your IoT device is also running and sending GPS data.
```output
Python EventHub trigger processed an event: {"gps": {"lat": 47.73481, "lon": -122.25701}}
@ -258,7 +258,7 @@ The data will be saved as a JSON blob with the following format:
> pip install --upgrade pip
> ```
1. In the `__init__.py` file for the `iot_hub_trigger`, add the following import statements:
1. In the `__init__.py` file for the `iot-hub-trigger`, add the following import statements:
```python
import json

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -13,8 +13,8 @@ platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -13,12 +13,12 @@ platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi
seeed-studio/Seeed Arduino FS
seeed-studio/Seeed Arduino SFUD
seeed-studio/Seeed Arduino rpcUnified
seeed-studio/Seeed_Arduino_mbedtls
seeed-studio/Seeed Arduino RTC
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.0.3
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0
bblanchon/ArduinoJson @ 6.17.3
build_flags =
-w

@ -1,8 +1,8 @@
# Consumer IoT - build a smart voice assistant
The fod has been grown, driven to a processing plant, sorted for quality, sold in the store and now it's time to cook! One of the core pieces of any kitchen is a timer. Initially these started as simple hour glasses - your food was cooked when all the sand trickled down into the bottom bulb. They then went clockwork, then electric.
The food has been grown, driven to a processing plant, sorted for quality, sold in the store and now it's time to cook! One of the core pieces of any kitchen is a timer. Initially these started as hourglasses - your food was cooked when all the sand trickled down into the bottom bulb. They then went clockwork, then electric.
The latest iterations are now part of our smart devices. In kitchens all throughout the world you'll head chefs shouting "Hey Siri - set a 10 minute timer", or "Alexa - cancel my bread timer". No longer do you have to walk back to the kitchen to check on a timer, you can do it from your phone, or a call out across the room.
The latest iterations are now part of our smart devices. In kitchens in homes all throughout the world you'll hear cooks shouting "Hey Siri - set a 10 minute timer", or "Alexa - cancel my bread timer". No longer do you have to walk back to the kitchen to check on a timer, you can do it from your phone, or a call out across the room.
In these 4 lessons you'll learn how to build a smart timer, using AI to recognize your voice, understand what you are asking for, and reply with information about your timer. You'll also add support for multiple languages.
@ -12,7 +12,7 @@ In these 4 lessons you'll learn how to build a smart timer, using AI to recogniz
1. [Recognize speech with an IoT device](./lessons/1-speech-recognition/README.md)
1. [Understand language](./lessons/2-language-understanding/README.md)
1. [Provide spoken feedback](./lessons/3-spoken-feedback/README.md)
1. [Set a timer and provide spoken feedback](./lessons/3-spoken-feedback/README.md)
1. [Support multiple languages](./lessons/4-multiple-language-support/README.md)
## Credits

@ -2,7 +2,9 @@
Add a sketchnote if possible/appropriate
![Embed a video here if available](video-url)
This video gives an overview of the Azure speech service, a topic that will be covered in this lesson:
[![How to get started using your Cognitive Services Speech resource from the Microsoft Azure YouTube channel](https://img.youtube.com/vi/iW0Fw0l3mrA/0.jpg)](https://www.youtube.com/watch?v=iW0Fw0l3mrA)
## Pre-lecture quiz
@ -113,7 +115,7 @@ Speech to text, or speech recognition, involves using AI to convert words in an
### Speech recognition models
To convert speech to text, samples from the audio signal are grouped together and fed into a machine learning model based around a Recurrent Neural network (RNN). This is a type of machine learning model that can use previous data to make a decision about incoming data. For example, the RNN could detect one block of audio samples as the sound 'Hel', and when it receives another that it thinks is the sound 'lo', it can combine this with the previous sound, see that 'Hello' is a valid word and select that as the outcome.
To convert speech to text, samples from the audio signal are grouped together and fed into a machine learning model based around a Recurrent Neural network (RNN). This is a type of machine learning model that can use previous data to make a decision about incoming data. For example, the RNN could detect one block of audio samples as the sound 'Hel', and when it receives another that it thinks is the sound 'lo', it can combine this with the previous sound, find that 'Hello' is a valid word and select that as the outcome.
ML models always accept data of the same size every time. The image classifier you built in an earlier lesson resizes images to a fixed size and processes them. It's the same with speech models - they have to process fixed-sized audio chunks. Speech models need to be able to combine the outputs of multiple predictions to get the answer, to allow them to distinguish between 'Hi' and 'Highway', or 'flock' and 'floccinaucinihilipilification'.
@ -194,7 +196,7 @@ To use the results of the speech to text conversion, you need to send it to the
}
```
Where `<converted speech>` is the output from the speech to text call.
Where `<converted speech>` is the output from the speech to text call. You only need to send speech that has content - if the call returns an empty string, it can be ignored.
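As a rough sketch of this step (the `device_client` object and the `speech` message property follow the surrounding lesson code; the helper function name and the use of `send_message` are assumptions for illustration):
```python
import json
from azure.iot.device import Message

def send_speech_to_iot_hub(device_client, text):
    # Only send speech that has content - an empty string means nothing
    # was recognized, so there is nothing for the server code to process
    if len(text) > 0:
        message = Message(json.dumps({ 'speech': text }))
        device_client.send_message(message)
```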
1. Verify that messages are being sent by monitoring the Event Hub compatible endpoint using the `az iot hub monitor-events` command.
@ -204,7 +206,7 @@ To use the results of the speech to text conversion, you need to send it to the
## 🚀 Challenge
Speech recognition has been around for a long time, and is continuously improving. Research the current capabilities and see how these have evolved over time, including how accurate machine transcriptions are compared to human.
Speech recognition has been around for a long time, and is continuously improving. Research the current capabilities and compare how these have evolved over time, including how accurate machine transcriptions are compared to human.
What do you think the future holds for speech recognition?

@ -10,14 +10,6 @@ from azure.iot.device import IoTHubDeviceClient, Message
from grove.factory import Factory
button = Factory.getButton('GPIO-HIGH', 5)
connection_string = '<connection_string>'
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
audio = pyaudio.PyAudio()
microphone_card_number = 1
speaker_card_number = 1
@ -52,6 +44,13 @@ def capture_audio():
api_key = '<key>'
location = '<location>'
language = '<language>'
connection_string = '<connection_string>'
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
def get_access_token():
headers = {

@ -190,7 +190,7 @@ You can capture audio from the microphone using Python code.
1. Run the code. Press the button and speak into the microphone. Release the button when you are done, and you will hear the recording.
You may see some ALSA errors when the PyAudio instance is created. This is due to configuration on the Pi for audio devices you don't have. You can ignore these errors.
You may get some ALSA errors when the PyAudio instance is created. This is due to configuration on the Pi for audio devices you don't have. You can ignore these errors.
```output
pi@raspberrypi:~/smart-timer $ python3 app.py
@ -200,7 +200,7 @@ You can capture audio from the microphone using Python code.
ALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
```
If you see the following error:
If you get the following error:
```output
OSError: [Errno -9997] Invalid sample rate

@ -91,7 +91,7 @@ The audio can be sent to the speech service using the REST API. To use the speec
print(text)
```
1. Run the code. Press the button and speak into the microphone. Release the button when you are done, and you will see the audio converted to text in the output.
1. Run the code. Press the button and speak into the microphone. Release the button when you are done, and the audio will be converted to text and printed to the console.
```output
pi@raspberrypi:~/smart-timer $ python3 app.py

@ -16,7 +16,7 @@ On Windows, Linux, and macOS, the speech services Python SDK can be used to list
pip install azure-cognitiveservices-speech
```
> ⚠️ If you see the following error:
> ⚠️ If you get the following error:
>
> ```output
> ERROR: Could not find a version that satisfies the requirement azure-cognitiveservices-speech (from versions: none)
@ -80,7 +80,7 @@ On Windows, Linux, and macOS, the speech services Python SDK can be used to list
time.sleep(1)
```
1. Run this app. Speak into your microphone and you will see the audio converted to text in the console.
1. Run this app. Speak into your microphone and the audio converted to text will be output to the console.
```output
(.venv) ➜ smart-timer python3 app.py

@ -10,24 +10,422 @@ Add a sketchnote if possible/appropriate
## Introduction
In this lesson you will learn about
In the last lesson you converted speech to text. For this to be used to program a smart timer, your code will need to have an understanding of what was said. You could assume the user will speak a fixed phrase, such as "Set a 3 minute timer", and parse that expression to get how long the timer should be, but this isn't very user-friendly. If a user were to say "Set a timer for 3 minutes", you or I would understand what they mean, but your code would not, as it would be expecting a fixed phrase.
This is where language understanding comes in, using AI models to interpret text and return the details that are needed, for example being able to take both "Set a 3 minute timer" and "Set a timer for 3 minutes", and understand that a timer is required for 3 minutes.
In this lesson you will learn about language understanding models, how to create them, train them, and use them from your code.
In this lesson we'll cover:
* [Thing 1](#thing-1)
* [Language understanding](#language-understanding)
* [Create a language understanding model](#create-a-language-understanding-model)
* [Intents and entities](#intents-and-entities)
* [Use the language understanding model](#use-the-language-understanding-model)
## Language understanding
Humans have used language to communicate for hundreds of thousands of years. We communicate with words, sounds, or actions and understand what is said, both the meaning of the words, sounds or actions, but also their context. We understand sincerity and sarcasm, allowing the same words to mean different things depending on the tone of our voice.
✅ Think about some of the conversations you have had recently. How much of the conversation would be hard for a computer to understand because it needs context?
Language understanding, also called natural-language understanding, is part of a field of artificial intelligence called natural-language processing (or NLP), and deals with reading comprehension, trying to understand the details of words or sentences. If you use a voice assistant such as Alexa or Siri, you have used language understanding services. These are the behind-the-scenes AI services that convert "Alexa, play the latest album by Taylor Swift" into my daughter dancing around the living room to her favorite tunes.
> 💁 Computers, despite all their advances, still have a long way to go to truly understand text. When we refer to language understanding with computers, we don't mean anything anywhere near as advanced as human communication, instead we mean taking some words and extracting key details.
As humans, we understand language without really thinking about it. If I asked another human to "play the latest album by Taylor Swift" then they would instinctively know what I meant. For a computer, this is harder. It would have to take the words, converted from speech to text, and work out the following pieces of information:
* Music needs to be played
* The music is by the artist Taylor Swift
* The specific music is a whole album of multiple tracks in order
* Taylor Swift has many albums, so they need to be sorted by chronological order and the most recently published is the one required
✅ Think of some other sentences you have spoken when making requests, such as ordering coffee or asking a family member to pass you something. Try to break them down into the pieces of information a computer would need to extract to understand the sentence.
Language understanding models are AI models that are trained to extract certain details from language, and then are trained for specific tasks using transfer learning, in the same way you trained a Custom Vision model using a small set of images. You can take a model, then train it using the text you want it to understand.
## Create a language understanding model
![The LUIS logo](../../../images/luis-logo.png)
You can create language understanding models using LUIS, a language understanding service from Microsoft that is part of Cognitive Services.
### Task - create an authoring resource
To use LUIS, you need to create an authoring resource.
1. Use the following command to create an authoring resource in your `smart-timer` resource group:
```sh
az cognitiveservices account create --name smart-timer-luis-authoring \
--resource-group smart-timer \
--kind LUIS.Authoring \
--sku F0 \
--yes \
--location <location>
```
Replace `<location>` with the location you used when creating the Resource Group.
> ⚠️ LUIS isn't available in all regions, so if you get the following error:
>
> ```output
> InvalidApiSetId: The account type 'LUIS.Authoring' is either invalid or unavailable in given region.
> ```
>
> pick a different region.
This will create a free-tier LUIS authoring resource.
### Task - create a language understanding app
1. Open the LUIS portal at [luis.ai](https://luis.ai?WT.mc_id=academic-17441-jabenn) in your browser, and sign in with the same account you have been using for Azure.
1. Follow the instructions on the dialog to select your Azure subscription, then select the `smart-timer-luis-authoring` resource you have just created.
1. From the *Conversation apps* list, select the **New app** button to create a new application. Name the new app `smart-timer`, and set the *Culture* to your language.
> 💁 There is a field for a prediction resource. You can create a second resource just for prediction, but the free authoring resource allows 1,000 predictions a month which should be enough for development, so you can leave this blank.
1. Read through the guide that appears once you create the app to get an understanding of the steps you need to take to train the language understanding model. Close this guide when you are done.
## Intents and entities
Language understanding is based around *intents* and *entities*. Intents describe what the words are asking for, for example playing music, setting a timer, or ordering food. Entities are what the intent is referring to, such as the album, the length of the timer, or the type of food. Each sentence that the model interprets should have at least one intent, and optionally one or more entities.
Some examples:
| Sentence | Intent | Entities |
| --------------------------------------------------- | ---------------- | ------------------------------------------ |
| "Play the latest album by Taylor Swift" | *play music* | *the latest album by Taylor Swift* |
| "Set a 3 minute timer" | *set a timer* | *3 minutes* |
| "Cancel my timer" | *cancel a timer* | None |
| "Order 3 large pineapple pizzas and a caesar salad" | *order food* | *3 large pineapple pizzas*, *caesar salad* |
✅ With the sentences you thought about earlier, what would be the intent and any entities in that sentence?
To train LUIS, first you set the entities. These can be a fixed list of terms, or learned from the text. For example, you could provide a fixed list of food available from your menu, with variations (or synonyms) of each word, such as *egg plant* and *eggplant* as variations of *aubergine*. LUIS also has pre-built entities that can be used, such as numbers and locations.
For setting a timer, you could have one entity using the pre-built number entities for the time, and another for the units, such as minutes and seconds. Each unit would have multiple variations to cover the singular and plural forms - such as minute and minutes.
Once the entities are defined, you create intents. These are learned by the model based on example sentences that you provide (known as utterances). For example, for a *set timer* intent, you might provide the following sentences:
* `set a 1 second timer`
* `set a timer for 1 minute and 12 seconds`
* `set a timer for 3 minutes`
* `set a 9 minute 30 second timer`
You then tell LUIS what parts of these sentences map to the entities:
![The sentence set a timer for 1 minute and 12 seconds broken into entities](../../../images/sentence-as-intent-entities.png)
The sentence `set a timer for 1 minute and 12 seconds` has the intent of `set timer`. It also has 2 entities with 2 values each:
| | time | unit |
| ---------- | ---: | ------ |
| 1 minute | 1 | minute |
| 12 seconds | 12 | second |
To train a good model, you need a range of different example sentences to cover the many different ways someone might ask for the same thing.
> 💁 As with any AI model, the more data and the more accurate the data you use to train, the better the model.
✅ Think about the different ways you might ask the same thing and expect a human to understand.
### Task - add entities to the language understanding models
For the timer, you need to add 2 entities - one for the unit of time (minutes or seconds), and one for the number of minutes or seconds.
You can find instructions for using the LUIS portal in the [Quickstart: Build your app in LUIS portal documentation on Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/luis/luis-get-started-create-app?WT.mc_id=academic-17441-jabenn).
1. From the LUIS portal, select the *Entities* tab and add the *number* prebuilt entity by selecting the **Add prebuilt entity** button, then selecting *number* from the list.
1. Create a new entity for the time unit using the **Create** button. Name the entity `time unit` and set the type to *List*. Add values for `minute` and `second` to the *Normalized values* list, adding the singular and plural forms to the *synonyms* list. Press `return` after adding each synonym to add it to the list.
| Normalized value | Synonyms |
| ---------------- | --------------- |
| minute | minute, minutes |
| second | second, seconds |
### Task - add intents to the language understanding models
1. From the *Intents* tab, select the **Create** button to create a new intent. Name this intent `set timer`.
1. In the examples, enter different ways to set a timer using minutes only, seconds only, and minutes and seconds combined. Examples could be:
* `set a 1 second timer`
* `set a 4 minute timer`
* `set a 9 minute 30 second timer`
* `set a timer for 1 minute and 12 seconds`
* `set a timer for 3 minutes`
* `set a timer for 3 minutes and 1 second`
* `set a timer for 1 minute and 1 second`
* `set a timer for 30 seconds`
* `set a timer for 1 second`
1. As you enter each example, LUIS will start detecting entities, and will underline and label any it finds.
![The examples with the numbers and time units underlined by LUIS](../../../images/luis-intent-examples.png)
### Task - train and test the model
1. Once the entities and intents are configured, you can train the model using the **Train** button on the top menu. Select this button, and the model should train in a few seconds. The button will be greyed out whilst training, and be re-enabled once done.
1. Select the **Test** button from the top menu to test the language understanding model. Enter text such as `set a timer for 5 minutes and 4 seconds` and press return. The sentence will appear in a box under the text box that you typed it into, and below that will be the *top intent*, or the intent that was detected with the highest probability. This should be `set timer`. The intent name will be followed by the probability that the intent detected was the right one.
1. Select the **Inspect** option to see a breakdown of the results. You will see the top-scoring intent with its percentage probability, along with lists of the entities detected.
1. Close the *Test* pane when you are done testing.
### Task - publish the model
To use this model from code, you need to publish it. When publishing from LUIS, you can publish to either a staging environment for testing, or a production environment for a full release. In this lesson, a staging environment is fine.
1. From the LUIS portal, select the **Publish** button from the top menu.
1. Make sure *Staging slot* is selected, then select **Done**. You will see a notification when the app is published.
1. You can test this using curl. To build the curl command, you need three values - the endpoint, the application ID (App ID) and an API key. These can be accessed from the **MANAGE** tab that can be selected from the top menu.
## Thing 1
1. From the *Settings* section, copy the App ID
1. From the *Azure Resources* section, select *Authoring Resource*, and copy the *Primary Key* and *Endpoint URL*
1. Run the following curl command in your command prompt or terminal:
```sh
curl "<endpoint url>/luis/prediction/v3.0/apps/<app id>/slots/staging/predict" \
--request GET \
--get \
--data "subscription-key=<primary key>" \
--data "verbose=false" \
--data "show-all-intents=true" \
--data-urlencode "query=<sentence>"
```
Replace `<endpoint url>` with the Endpoint URL from the *Azure Resources* section.
Replace `<app id>` with the App ID from the *Settings* section.
Replace `<primary key>` with the Primary Key from the *Azure Resources* section.
Replace `<sentence>` with the sentence you want to test with.
1. The output of this call will be a JSON document that details the query, the top intent, and a list of entities broken down by type.
```JSON
{
"query": "set a timer for 45 minutes and 12 seconds",
"prediction": {
"topIntent": "set timer",
"intents": {
"set timer": {
"score": 0.97031575
},
"None": {
"score": 0.02205793
}
},
"entities": {
"number": [
45,
12
],
"time-unit": [
[
"minute"
],
[
"second"
]
]
}
}
}
```
The JSON above came from querying with `set a timer for 45 minutes and 12 seconds`:
* The `set timer` was the top intent with a probability of 97%.
* Two *number* entities were detected, `45` and `12`.
* Two *time-unit* entities were detected, `minute` and `second`.
## Use the language understanding model
Once published, the LUIS model can be called from code. In the last lesson you sent the recognized speech to an IoT Hub, and you can use serverless code to respond to this and understand what was sent.
### Task - create a serverless functions app
1. Create an Azure Functions app called `smart-timer-trigger`.
1. Add an IoT Hub event trigger to this app called `speech-trigger`.
1. Set the Event Hub compatible endpoint connection string for your IoT Hub in the `local.settings.json` file, and use the key for that entry in the `function.json` file.
1. Use the Azurite app as a local storage emulator.
1. Run your functions app and your IoT device to ensure speech is arriving at the IoT Hub.
```output
Python EventHub trigger processed an event: {"speech": "Set a 3 minute timer."}
```
### Task - use the language understanding model
1. The SDK for LUIS is available via a Pip package. Add the following line to the `requirements.txt` file to add the dependency on this package:
```sh
azure-cognitiveservices-language-luis
```
1. Make sure the VS Code terminal has the virtual environment activated, and run the following command to install the Pip packages:
```sh
pip install -r requirements.txt
```
1. Add new entries to the `local.settings.json` file for your LUIS API Key, Endpoint URL, and App ID from the **MANAGE** tab of the LUIS portal:
```JSON
"LUIS_KEY": "<primary key>",
"LUIS_ENDPOINT_URL": "<endpoint url>",
"LUIS_APP_ID": "<app id>"
```
Replace `<endpoint url>` with the Endpoint URL from the *Azure Resources* section of the **MANAGE** tab. This will be `https://<location>.api.cognitive.microsoft.com/`.
Replace `<app id>` with the App ID from the *Settings* section of the **MANAGE** tab.
Replace `<primary key>` with the Primary Key from the *Azure Resources* section of the **MANAGE** tab.
1. Add the following imports to the `__init__.py` file:
```python
import json
import os
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials
```
This imports some system libraries, as well as the libraries to interact with LUIS.
1. In the `main` method, before it loops through all the events, add the following code:
```python
luis_key = os.environ['LUIS_KEY']
endpoint_url = os.environ['LUIS_ENDPOINT_URL']
app_id = os.environ['LUIS_APP_ID']
credentials = CognitiveServicesCredentials(luis_key)
client = LUISRuntimeClient(endpoint=endpoint_url, credentials=credentials)
```
This loads the values you added to the `local.settings.json` file for your LUIS app, creates a credentials object with your API key, then creates a LUIS client object to interact with your LUIS app.
1. Predictions are requested from LUIS by sending a prediction request - a JSON document containing the text to predict. Create this with the following code inside the `for event in events` loop:
```python
event_body = json.loads(event.get_body().decode('utf-8'))
prediction_request = { 'query' : event_body['speech'] }
```
This code extracts the speech that was sent to the IoT Hub and uses it to build the prediction request.
1. This request can then be sent to LUIS, using the staging slot that your app was published to:
```python
prediction_response = client.prediction.get_slot_prediction(app_id, 'Staging', prediction_request)
```
1. The prediction response contains the top intent - the intent with the highest prediction score, along with the entities. If the top intent is `set timer`, then the entities can be read to get the time needed for the timer:
```python
if prediction_response.prediction.top_intent == 'set timer':
numbers = prediction_response.prediction.entities['number']
time_units = prediction_response.prediction.entities['time unit']
total_time = 0
```
The `number` entities will be an array of numbers. For example, if you said *"Set a four minute 17 second timer."*, then the `number` array will contain 2 integers - 4 and 17.
The `time unit` entities will be an array of arrays of strings, with each time unit represented as an array of strings. For example, if you said *"Set a four minute 17 second timer."*, then the `time unit` array will contain 2 arrays with a single value each - `['minute']` and `['second']`.
The JSON version of these entities for *"Set a four minute 17 second timer."* is:
```json
{
"number": [4, 17],
"time unit": [
["minute"],
["second"]
]
}
```
This code also defines a variable to hold the total time for the timer in seconds. This will be populated from the values of the entities.
1. The entities aren't linked, but we can make some assumptions about them. They will be in the order spoken, so the position in the array can be used to determine which number matches to which time unit. For example:
* *"Set a 30 second timer"* - this will have one number, `30`, and one time unit, `second` so the single number will match the single time unit.
* *"Set a 2 minute and 30 second timer"* - this will have two numbers, `2` and `30`, and two time units, `minute` and `second` so the first number will be for the first time unit (2 minutes), and the second number for the second time unit (30 seconds).
The following code gets the count of items in the number entities, and uses that to extract the first item from each array, then the second and so on:
```python
for i in range(0, len(numbers)):
number = numbers[i]
time_unit = time_units[i][0]
```
For *"Set a four minute 17 second timer."*, this will loop twice, giving the following values:
| loop count | `number` | `time_unit` |
| ---------: | -------: | ----------- |
| 0 | 4 | minute |
| 1 | 17 | second |
1. Inside this loop, use the number and time unit to calculate the total time for the timer, adding 60 seconds for each minute and adding any seconds directly.
```python
if time_unit == 'minute':
total_time += number * 60
else:
total_time += number
```
1. Finally, outside the loop over the entities, log the total time for the timer:
```python
logging.info(f'Timer required for {total_time} seconds')
```
1. Run the function app and speak into your IoT device. You will see the total time for the timer in the function app output:
```output
[2021-06-16T01:38:33.316Z] Executing 'Functions.speech-trigger' (Reason='(null)', Id=39720c37-b9f1-47a9-b213-3650b4d0b034)
[2021-06-16T01:38:33.329Z] Trigger Details: PartionId: 0, Offset: 3144-3144, EnqueueTimeUtc: 2021-06-16T01:38:32.7970000Z-2021-06-16T01:38:32.7970000Z, SequenceNumber: 8-8, Count: 1
[2021-06-16T01:38:33.605Z] Python EventHub trigger processed an event: {"speech": "Set a four minute 17 second timer."}
[2021-06-16T01:38:35.076Z] Timer required for 257 seconds
[2021-06-16T01:38:35.128Z] Executed 'Functions.speech-trigger' (Succeeded, Id=39720c37-b9f1-47a9-b213-3650b4d0b034, Duration=1894ms)
```
> 💁 You can find this code in the [code/functions](code/functions) folder.
---
## 🚀 Challenge
There are many ways to request the same thing, such as setting a timer. Think of different ways to phrase this, and add them as example sentences to your LUIS app. Test them out to see how well your model copes with multiple ways to request a timer.
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/34)
## Review & Self Study
* Read more about LUIS and its capabilities on the [Language Understanding (LUIS) documentation page on Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/luis/?WT.mc_id=academic-17441-jabenn)
* Read more about language understanding on the [Natural-language understanding page on Wikipedia](https://wikipedia.org/wiki/Natural-language_understanding)
## Assignment
[Cancel the timer](assignment.md)

@ -1,9 +1,14 @@
# Cancel the timer
## Instructions
So far in this lesson you have trained a model to understand setting a timer. Another useful feature is cancelling a timer - maybe your bread is ready and can be taken out of the oven.
Add a new intent to your LUIS app to cancel the timer. It won't need any entities, but will need some example sentences. Handle this in your serverless code if it is the top intent, logging that the intent was recognized.
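If you want a starting point for the serverless side, the following is a minimal sketch of how the new intent could be handled in the existing function code, alongside the `set timer` check. It assumes the intent is named `cancel timer` - use whatever name you gave it in your LUIS app.
```python
# Inside the loop over events, after getting the prediction response.
# 'cancel timer' is an assumed intent name - match it to your LUIS app.
if prediction_response.prediction.top_intent == 'cancel timer':
    logging.info('Cancel timer was the top intent')
```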
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Add the cancel timer intent to the LUIS app | Was able to add the intent and train the model | Was able to add the intent but not train the model | Was unable to add the intent and train the model |
| Handle the intent in the serverless app | Was able to detect the intent as the top intent and log it | Was able to detect the intent as the top intent | Was unable to detect the intent as the top intent |

@ -0,0 +1,15 @@
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[2.*, 3.0.0)"
}
}

@ -0,0 +1,11 @@
{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "python",
"AzureWebJobsStorage": "UseDevelopmentStorage=true",
"IOT_HUB_CONNECTION_STRING": "<connection string>",
"LUIS_KEY": "<primary key>",
"LUIS_ENDPOINT_URL": "<endpoint url>",
"LUIS_APP_ID": "<app id>"
}
}

@ -0,0 +1,4 @@
# Do not include azure-functions-worker as it may conflict with the Azure Functions platform
azure-functions
azure-cognitiveservices-language-luis

@ -0,0 +1,43 @@
from typing import List
import logging
import azure.functions as func
import json
import os
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials
def main(events: List[func.EventHubEvent]):
luis_key = os.environ['LUIS_KEY']
endpoint_url = os.environ['LUIS_ENDPOINT_URL']
app_id = os.environ['LUIS_APP_ID']
credentials = CognitiveServicesCredentials(luis_key)
client = LUISRuntimeClient(endpoint=endpoint_url, credentials=credentials)
for event in events:
logging.info('Python EventHub trigger processed an event: %s',
event.get_body().decode('utf-8'))
event_body = json.loads(event.get_body().decode('utf-8'))
prediction_request = { 'query' : event_body['speech'] }
prediction_response = client.prediction.get_slot_prediction(app_id, 'Staging', prediction_request)
if prediction_response.prediction.top_intent == 'set timer':
numbers = prediction_response.prediction.entities['number']
time_units = prediction_response.prediction.entities['time unit']
total_time = 0
for i in range(0, len(numbers)):
number = numbers[i]
time_unit = time_units[i][0]
if time_unit == 'minute':
total_time += number * 60
else:
total_time += number
logging.info(f'Timer required for {total_time} seconds')

@ -0,0 +1,15 @@
{
"scriptFile": "__init__.py",
"bindings": [
{
"type": "eventHubTrigger",
"name": "events",
"direction": "in",
"eventHubName": "samples-workitems",
"connection": "IOT_HUB_CONNECTION_STRING",
"cardinality": "many",
"consumerGroup": "$Default",
"dataType": "binary"
}
]
}

@ -1,4 +1,4 @@
# Set a timer and provide spoken feedback
Add a sketchnote if possible/appropriate

@ -10,5 +10,5 @@ to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simpl
instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information read the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

@ -83,6 +83,12 @@ We have two choices of IoT hardware to use for the projects depending on persona
| 16 | [Manufacturing](./4-manufacturing) | Check fruit quality from an IoT device | Learn about using your fruit quality detector from an IoT device | [Check fruit quality from an IoT device](./4-manufacturing/lessons/2-check-fruit-from-device/README.md) |
| 17 | [Manufacturing](./4-manufacturing) | Run your fruit detector on the edge | Learn about running your fruit detector on an IoT device on the edge | [Run your fruit detector on the edge](./4-manufacturing/lessons/3-run-fruit-detector-edge/README.md) |
| 18 | [Manufacturing](./4-manufacturing) | Trigger fruit quality detection from a sensor | Learn about triggering fruit quality detection from a sensor | [Trigger fruit quality detection from a sensor](./4-manufacturing/lessons/4-trigger-fruit-detector/README.md) |
| 19 | [Retail](./5-retail) | | |
| 20 | [Retail](./5-retail) | | |
| 21 | [Consumer](./6-consumer) | Recognize speech with an IoT device | Learn how to recognize speech from an IoT device to build a smart timer | [Recognize speech with an IoT device](./6-consumer/lessons/1-speech-recognition/README.md) |
| 22 | [Consumer](./6-consumer) | Understand language | Learn how to understand sentences spoken to an IoT device | [Understand language](./6-consumer/lessons/2-language-understanding/README.md) |
| 23 | [Consumer](./6-consumer) | Set a timer and provide spoken feedback | Learn how to set a timer on an IoT device and give spoken feedback on when the timer is set and when it finishes | [Set a timer and provide spoken feedback](./6-consumer/lessons/3-spoken-feedback/README.md) |
| 24 | [Consumer](./6-consumer) | Support multiple languages | Learn how to support multiple languages, both being spoken to and the responses from your smart timer | [Support multiple languages](./6-consumer/lessons/4-multiple-language-support/README.md) |
## Offline access

@ -904,7 +904,7 @@
},
{
"answerText": "A Functions App, a Storage Account, and Application Settings",
"isCorrect": "frue"
"isCorrect": "true"
}
]
}
