Merge branch 'microsoft:main' into main

pull/405/head
Weslley Luiz 3 years ago committed by GitHub
commit c8bac9a7fd

@@ -159,7 +159,7 @@ Create a Python application to print `"Hello World"` to the console.
If you don't have `.venv` as a prefix on the prompt, the virtual environment is not active in the terminal.
-1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, and the the call to activate this will appear in the terminal. The prompt will also have the name of the virtual environment (`.venv`):
+1. Launch a new VS Code Terminal by selecting *Terminal -> New Terminal, or pressing `` CTRL+` ``. The new terminal will load the virtual environment, and the call to activate this will appear in the terminal. The prompt will also have the name of the virtual environment (`.venv`):
```output
➜ nightlight source .venv/bin/activate

@@ -38,7 +38,7 @@ Program the device.
1. Open the `soil-moisture-sensor` project from the last lesson in VS Code if it's not already open. You will be adding to this project.
-1. There isn't a library for this actuator - it's a digital actuator controlled by a high or low signal. To turn it on, you send a high signal to the pin (3.3V), turn turn it off you send a low signal (0V). You can do this using the built-in Arduino [`digitalWrite`](https://www.arduino.cc/reference/en/language/functions/digital-io/digitalwrite/) function. Start by adding the following to the bottom of the `setup` function to setup the combined I<sub>2</sub>C/digital port as an output pin to send a voltage to the relay:
+1. There isn't a library for this actuator - it's a digital actuator controlled by a high or low signal. To turn it on, you send a high signal to the pin (3.3V), to turn it off you send a low signal (0V). You can do this using the built-in Arduino [`digitalWrite`](https://www.arduino.cc/reference/en/language/functions/digital-io/digitalwrite/) function. Start by adding the following to the bottom of the `setup` function to setup the combined I<sub>2</sub>C/digital port as an output pin to send a voltage to the relay:
```cpp
pinMode(PIN_WIRE_SCL, OUTPUT);
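// Illustrative sketch, assuming the relay stays wired to this pin: digitalWrite
// sends the high (3.3V) or low (0V) signal described above, with an example delay.
digitalWrite(PIN_WIRE_SCL, HIGH); // relay on
delay(10000);                     // keep it on for 10 seconds (illustrative value)
digitalWrite(PIN_WIRE_SCL, LOW);  // relay off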

@@ -102,7 +102,7 @@ The public test MQTT broker you have been using is a great tool when learning, b
* Performance - it is designed for only a few test messages, so wouldn't cope with a large amount of messages being sent
* Discovery - there is no way to know what devices are connected
-IoT services in the cloud solve these problems. They are maintained by large cloud providers who invest heavily in reliability and are on hand to fix any issues that might arise. They have security baked in to stop hackers reading your data or sending rogue commands. They are also high performance, being able to handle many millions of messages every day, taking advantage of the cloud to scale as needed.
+IoT services in the cloud solve these problems. They are maintained by large cloud providers who invest heavily in reliability and are on hand to fix any issues that might arise. They have security baked-in to stop hackers reading your data or sending rogue commands. They are also high performance, being able to handle many millions of messages every day, taking advantage of the cloud to scale as needed.
> 💁 Although you pay for these upsides with a monthly fee, most cloud providers offer a free version of their IoT service with a limited amount of messages per day or devices that can connect. This free version is usually more than enough for a developer to learn about the service. In this lesson you will be using a free version.

@@ -375,7 +375,7 @@ This will create a folder inside the `soil-moisture-trigger` folder called `iot-
For detailed output, run func with --verbose flag.
[2021-05-05T02:44:07.517Z] Worker process started and initialized.
[2021-05-05T02:44:09.202Z] Executing 'Functions.iot-hub-trigger' (Reason='(null)', Id=802803a5-eae9-4401-a1f4-176631456ce4)
-[2021-05-05T02:44:09.205Z] Trigger Details: PartionId: 0, Offset: 1011240-1011632, EnqueueTimeUtc: 2021-05-04T19:04:04.2030000Z-2021-05-04T19:04:04.3900000Z, SequenceNumber: 2546-2547, Count: 2
+[2021-05-05T02:44:09.205Z] Trigger Details: PartitionId: 0, Offset: 1011240-1011632, EnqueueTimeUtc: 2021-05-04T19:04:04.2030000Z-2021-05-04T19:04:04.3900000Z, SequenceNumber: 2546-2547, Count: 2
[2021-05-05T02:44:09.352Z] Python EventHub trigger processed an event: {"soil_moisture":628}
[2021-05-05T02:44:09.354Z] Python EventHub trigger processed an event: {"soil_moisture":624}
[2021-05-05T02:44:09.395Z] Executed 'Functions.iot-hub-trigger' (Succeeded, Id=802803a5-eae9-4401-a1f4-176631456ce4, Duration=245ms)

@@ -34,7 +34,7 @@ The Custom Vision service has a Python SDK you can use to classify images.
Replace `<prediction_url>` with the URL you copied from the *Prediction URL* dialog earlier in this lesson. Replace `<prediction key>` with the prediction key you copied from the same dialog.
-1. The prediciton URL that was provided by the *Prediction URL* dialog is designed to be used when calling the REST endpoint directly. The Python SDK uses parts of the URL in different places. Add the following code to break apart this URL into the parts needed:
+1. The prediction URL that was provided by the *Prediction URL* dialog is designed to be used when calling the REST endpoint directly. The Python SDK uses parts of the URL in different places. Add the following code to break apart this URL into the parts needed:
```python
parts = prediction_url.split('/')
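# Illustrative sketch, assuming the standard Custom Vision prediction URL layout:
# https://<endpoint>/customvision/v3.0/Prediction/<project-id>/classify/iterations/<iteration-name>/image
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]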

@@ -1,6 +1,6 @@
# Capture an image - Virtual IoT Hardware
-In this part of the lesson, you will add a camera sensor to your yirtual IoT device, and read images from it.
+In this part of the lesson, you will add a camera sensor to your virtual IoT device, and read images from it.
## Hardware

@@ -56,7 +56,7 @@ The Wio Terminal can now be programmed to use the attached ArduCAM camera.
1. The ArduCam library isn't available as an Arduino library that can be installed from the `platformio.ini` file. Instead it will need to be installed from source from their GitHub page. You can get this by either:
* Cloning the repo from [https://github.com/ArduCAM/Arduino.git](https://github.com/ArduCAM/Arduino.git)
-* Heading to the repo on GitHub at [github.com/ArduCAM/Arduino](https://github.com/ArduCAM/Arduino) and downloading the code as a zip from from the **Code** button
+* Heading to the repo on GitHub at [github.com/ArduCAM/Arduino](https://github.com/ArduCAM/Arduino) and downloading the code as a zip from the **Code** button
1. You only need the `ArduCAM` folder from this code. Copy the entire folder into the `lib` folder in your project.

@@ -62,7 +62,7 @@ When you then use it to predict images, instead of getting back a list of tags a
![Object detection of cashew nuts and tomato paste](../../../images/object-detector-cashews-tomato.png)
-The image above contains both a tub of cashew nuts and three cans of tomato paste. The object detector detected the cashew nuts, returning the bounding box that contains the cashew nuts with the percentage chance that that bounding box contains the object, in this case 97.6%. The object detector has also detected three cans of tomato paste, and provides three separate bounding boxes, one for each detected can, and each one has a percentage probability that the bounding box contains a can of tomato paste.
+The image above contains both a tub of cashew nuts and three cans of tomato paste. The object detector detected the cashew nuts, returning the bounding box that contains the cashew nuts with the percentage chance that the bounding box contains the object, in this case 97.6%. The object detector has also detected three cans of tomato paste, and provides three separate bounding boxes, one for each detected can, and each one has a percentage probability that the bounding box contains a can of tomato paste.
✅ Think of some different scenarios you might want to use image-based AI models for. Which ones would need classification, and which would need object detection?
@@ -113,7 +113,7 @@ You can train an object detector using Custom Vision, in a similar way to how yo
![The settings for the custom vision project with the name set to fruit-quality-detector, no description, the resource set to fruit-quality-detector-training, the project type set to classification, the classification types set to multi class and the domains set to food](../../../images/custom-vision-create-object-detector-project.png)
-✅ The products on shelves domain is specifically targeted for detecting stock on store shelves. Read more on the different domains in the [Select a domian documentation on Microsoft Docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/select-domain?WT.mc_id=academic-17441-jabenn#object-detection)
+✅ The products on shelves domain is specifically targeted for detecting stock on store shelves. Read more on the different domains in the [Select a domain documentation on Microsoft Docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/select-domain?WT.mc_id=academic-17441-jabenn#object-detection)
✅ Take some time to explore the Custom Vision UI for your object detector.

@@ -120,7 +120,7 @@ You can use bounding boxes combined with probabilities to evaluate how accurate
![Two bonding boxes overlapping a can of tomato paste](../../../images/overlap-object-detection.png)
-In the example above, one bounding box indicated a predicted can of tomato paste at 78.3%. A second bounding box is slightly smaller, and is inside the first bounding box with a probability of 64.3%. Your code can check the bounding boxes, see they overlap completely, and ignore the lower probability as there is no way one can can be inside another.
+In the example above, one bounding box indicated a predicted can of tomato paste at 78.3%. A second bounding box is slightly smaller, and is inside the first bounding box with a probability of 64.3%. Your code can check the bounding boxes, see they overlap completely, and ignore the lower probability as there is no way one can be inside another.
✅ Can you think of a situation where is it valid to detect one object inside another?
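As a minimal sketch of the overlap check described above, assuming each prediction exposes a `probability` and a normalised `bounding_box` with `left`, `top`, `width` and `height` (the shape the Custom Vision SDK returns), a hypothetical `remove_nested_boxes` helper could drop any box fully contained inside a higher-probability one:
```python
def box_contains(outer, inner):
    # True if the inner bounding box lies entirely within the outer one
    return (inner.left >= outer.left and
            inner.top >= outer.top and
            inner.left + inner.width <= outer.left + outer.width and
            inner.top + inner.height <= outer.top + outer.height)

def remove_nested_boxes(predictions):
    # Keep a prediction only if no higher-probability prediction fully contains its box
    kept = []
    for prediction in sorted(predictions, key=lambda p: p.probability, reverse=True):
        if not any(box_contains(k.bounding_box, prediction.bounding_box) for k in kept):
            kept.append(prediction)
    return kept
```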

@@ -156,7 +156,7 @@ typedef struct
DMAC->Channel[1].CHCTRLA.bit.ENABLE = 1;
// Configure Timer/Counter 5
-GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable perhipheral channel for TC5
+GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable peripheral channel for TC5
GCLK_PCHCTRL_GEN_GCLK1; // Connect generic clock 0 at 48MHz
TC5->COUNT16.WAVE.reg = TC_WAVE_WAVEGEN_MFRQ; // Set TC5 to Match Frequency (MFRQ) mode

@@ -150,7 +150,7 @@ typedef struct
DMAC->Channel[1].CHCTRLA.bit.ENABLE = 1;
// Configure Timer/Counter 5
-GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable perhipheral channel for TC5
+GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable peripheral channel for TC5
GCLK_PCHCTRL_GEN_GCLK1; // Connect generic clock 0 at 48MHz
TC5->COUNT16.WAVE.reg = TC_WAVE_WAVEGEN_MFRQ; // Set TC5 to Match Frequency (MFRQ) mode

@@ -297,7 +297,7 @@ Once each buffer has been captured, it can be written to the flash memory. Flash
DMAC->Channel[1].CHCTRLA.bit.ENABLE = 1;
// Configure Timer/Counter 5
-GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable perhipheral channel for TC5
+GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable peripheral channel for TC5
GCLK_PCHCTRL_GEN_GCLK1; // Connect generic clock 0 at 48MHz
TC5->COUNT16.WAVE.reg = TC_WAVE_WAVEGEN_MFRQ; // Set TC5 to Match Frequency (MFRQ) mode
@@ -397,7 +397,7 @@ Once each buffer has been captured, it can be written to the flash memory. Flash
The audio buffers are arrays of 16-bit integers containing the audio from the ADC. The ADC returns 12-bit unsigned values (0-1023), so these need to be converted to 16-bit signed values, and then converted into 2 bytes to be stored as raw binary data.
-These bytes are written to the flash memory buffers. The write starts at index 44 - this is the offset from the 44 bytes written as the WAV file header. Once all the bytes needed for the required audio length have been captured, the remaing data is written to the flash memory.
+These bytes are written to the flash memory buffers. The write starts at index 44 - this is the offset from the 44 bytes written as the WAV file header. Once all the bytes needed for the required audio length have been captured, the remaining data is written to the flash memory.
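As a rough illustration of that conversion (in Python rather than the lesson's C++, with `sample_to_wav_bytes` and `adc_max` as hypothetical names), an unsigned ADC sample can be recentred around zero, stretched to the signed 16-bit range, and packed into the 2 little-endian bytes the WAV data section expects:
```python
import struct

def sample_to_wav_bytes(sample, adc_max):
    # Recentre the unsigned sample around zero, then stretch it to the signed 16-bit range
    midpoint = (adc_max + 1) // 2
    scale = 32767 // midpoint
    signed = (sample - midpoint) * scale
    return struct.pack('<h', signed)  # '<h' = little-endian signed 16-bit value
```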
1. In the `public` section of the `Mic` class, add the following code:

@@ -38,7 +38,7 @@ Text to speech systems typically have 3 stages:
### Text analysis
-Text analysis involves taking the text provided, and converting into words that can be used to generate speech. For example, if you convert "Hello world", there there is no text analysis needed, the two words can be converted to speech. If you have "1234" however, then this might need to be converted either into the words "One thousand, two hundred thirty four" or "One, two, three, four" depending on the context. For "I have 1234 apples", then it would be "One thousand, two hundred thirty four", but for "The child counted 1234" then it would be "One, two, three, four".
+Text analysis involves taking the text provided, and converting into words that can be used to generate speech. For example, if you convert "Hello world", there is no text analysis needed, the two words can be converted to speech. If you have "1234" however, then this might need to be converted either into the words "One thousand, two hundred thirty four" or "One, two, three, four" depending on the context. For "I have 1234 apples", then it would be "One thousand, two hundred thirty four", but for "The child counted 1234" then it would be "One, two, three, four".
The words created vary not only for the language, but the locale of that language. For example, in American English, 120 would be "One hundred twenty", in British English it would be "One hundred and twenty", with the use of "and" after the hundreds.

@@ -150,7 +150,7 @@ typedef struct
DMAC->Channel[1].CHCTRLA.bit.ENABLE = 1;
// Configure Timer/Counter 5
-GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable perhipheral channel for TC5
+GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable peripheral channel for TC5
GCLK_PCHCTRL_GEN_GCLK1; // Connect generic clock 0 at 48MHz
TC5->COUNT16.WAVE.reg = TC_WAVE_WAVEGEN_MFRQ; // Set TC5 to Match Frequency (MFRQ) mode

@@ -150,7 +150,7 @@ typedef struct
DMAC->Channel[1].CHCTRLA.bit.ENABLE = 1;
// Configure Timer/Counter 5
-GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable perhipheral channel for TC5
+GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable peripheral channel for TC5
GCLK_PCHCTRL_GEN_GCLK1; // Connect generic clock 0 at 48MHz
TC5->COUNT16.WAVE.reg = TC_WAVE_WAVEGEN_MFRQ; // Set TC5 to Match Frequency (MFRQ) mode

@@ -150,7 +150,7 @@ typedef struct
DMAC->Channel[1].CHCTRLA.bit.ENABLE = 1;
// Configure Timer/Counter 5
-GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable perhipheral channel for TC5
+GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable peripheral channel for TC5
GCLK_PCHCTRL_GEN_GCLK1; // Connect generic clock 0 at 48MHz
TC5->COUNT16.WAVE.reg = TC_WAVE_WAVEGEN_MFRQ; // Set TC5 to Match Frequency (MFRQ) mode

@@ -147,4 +147,4 @@ The speech service REST API doesn't support direct translations, instead you can
> 💁 You can find this code in the [code/pi](code/pi) folder.
-😀 Your multi-lingual timer program was a success!
+😀 Your multilingual timer program was a success!

@@ -187,4 +187,4 @@ The speech service doesn't support translation of text back to speech, instead y
> 💁 You can find this code in the [code/virtual-iot-device](code/virtual-iot-device) folder.
-😀 Your multi-lingual timer program was a success!
+😀 Your multilingual timer program was a success!

@@ -267,4 +267,4 @@ The speech service REST API doesn't support direct translations, instead you can
> 💁 You can find this code in the [code/wio-terminal](code/wio-terminal) folder.
-😀 Your multi-lingual timer program was a success!
+😀 Your multilingual timer program was a success!
