spelling: into

Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
pull/406/head
Josh Soref 3 years ago
parent ba0bbc6334
commit 7b7b423066

@@ -110,7 +110,7 @@ On the first day above the base temperature, the following temperatures were measured
| Maximum | 16 |
| Minimum | 12 |
-Plugging these numbers in to our calculation:
+Plugging these numbers into our calculation:
* T<sub>max</sub> = 16
* T<sub>min</sub> = 12
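For reference, here is the calculation those numbers plug into, as a minimal sketch in Python. It assumes the standard growing degree days formula (the average of the maximum and minimum temperatures minus the base temperature), and the base temperature of 10°C is an illustrative assumption, since the actual base value is defined outside this hunk.

```python
# Worked version of the calculation described above.
t_max = 16
t_min = 12
t_base = 10  # assumed base temperature; the lesson defines its own value

gdd = (t_max + t_min) / 2 - t_base
print(gdd)  # (16 + 12) / 2 - 10 = 4.0
```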

@@ -226,7 +226,7 @@ Update your server code to run the relay for 5 seconds, then wait 20 seconds.
client.publish(server_command_topic, json.dumps(command))
```
-This code defines a function called `send_relay_command` that sends a command over MQTT to control the relay. The telemetry is created as a dictionary, then converted to a JSON string. The value passed in to `state` determines if the relay should be on or off.
+This code defines a function called `send_relay_command` that sends a command over MQTT to control the relay. The telemetry is created as a dictionary, then converted to a JSON string. The value passed into `state` determines if the relay should be on or off.
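As an illustration of the function just described (not the code the lesson asks you to add next), here is a minimal sketch of what `send_relay_command` could look like. It assumes `client` is the connected MQTT client and `server_command_topic` is the topic name set up earlier in the lesson; the `'relay_on'` key name is an assumption for this sketch.

```python
import json

def send_relay_command(state):
    # Build the command as a dictionary, then serialize it to a JSON string.
    # The 'relay_on' key name is an assumption for this sketch.
    command = {'relay_on': state}
    # Publish the JSON string to the command topic over MQTT
    client.publish(server_command_topic, json.dumps(command))
```

Calling `send_relay_command(True)` would then publish a JSON payload asking for the relay to be switched on, and `send_relay_command(False)` for it to be switched off.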
1. After the `send_relay_code` function, add the following code:

@@ -99,7 +99,7 @@ Program the device.
This code creates a PiCamera object, sets the resolution to 640x480. Although higher resolutions are supported (up to 3280x2464), the image classifier works on much smaller images (227x227) so there is no need to capture and send larger images.
-The `camera.rotation = 0` line sets the rotation of the image. The ribbon cable comes in to the bottom of the camera, but if your camera was rotated to allow it to point more easily at the item you want to classify, then you can change this line to the number of degrees of rotation.
+The `camera.rotation = 0` line sets the rotation of the image. The ribbon cable comes into the bottom of the camera, but if your camera was rotated to allow it to point more easily at the item you want to classify, then you can change this line to the number of degrees of rotation.
![The camera hanging down over a drink can](../../../images/pi-camera-upside-down.png)
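As a rough sketch of the camera setup described above, assuming the `picamera` library used on the Raspberry Pi in this lesson; the capture into an in-memory buffer at the end is illustrative rather than taken from this hunk.

```python
import io
from picamera import PiCamera

# Modest resolution: the classifier works on much smaller images anyway
camera = PiCamera()
camera.resolution = (640, 480)
# Compensate for how the camera is mounted: 0 if the ribbon cable comes
# into the bottom, e.g. 180 if the camera hangs upside down over the item
camera.rotation = 180

# Capture a JPEG into an in-memory buffer, ready to send for classification
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
```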

@@ -175,7 +175,7 @@ You can find instructions for using the LUIS portal in the [Quickstart: Build yo
1. Once the entities and intents are configured, you can train the model using the **Train** button on the top menu. Select this button, and the model should train in a few seconds. The button will be greyed out whilst training, and be re-enabled once done.
-1. Select the **Test** button from the top menu to test the language understanding model. Enter text such as `set a timer for 5 minutes and 4 seconds` and press return. The sentence will appear in a box under the text box that you typed it in to, and below that will be the *top intent*, or the intent that was detected with the highest probability. This should be `set timer`. The intent name will be followed by the probability that the intent detected was the right one.
+1. Select the **Test** button from the top menu to test the language understanding model. Enter text such as `set a timer for 5 minutes and 4 seconds` and press return. The sentence will appear in a box under the text box that you typed it into, and below that will be the *top intent*, or the intent that was detected with the highest probability. This should be `set timer`. The intent name will be followed by the probability that the intent detected was the right one.
1. Select the **Inspect** option to see a breakdown of the results. You will see the top-scoring intent with its percentage probability, along with lists of the entities detected.
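Outside the portal, the same test can be run programmatically. The sketch below is illustrative only: it assumes the LUIS v3.0 prediction REST endpoint, and the endpoint, app ID and key values are placeholders for the details of your own LUIS resource.

```python
import requests

# Placeholders: substitute the values from your own LUIS resource
prediction_endpoint = 'https://<your-resource>.cognitiveservices.azure.com'
app_id = '<your-app-id>'
prediction_key = '<your-prediction-key>'

response = requests.get(
    f'{prediction_endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict',
    params={
        'subscription-key': prediction_key,
        'query': 'set a timer for 5 minutes and 4 seconds',
    },
)

prediction = response.json()['prediction']
# The top intent and its probability, matching what the Test pane shows
print(prediction['topIntent'])
print(prediction['intents'][prediction['topIntent']]['score'])
```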
