Changing param to body to avoid URL encoding (#129)

* Adding content

* Update en.json

* Update README.md

* Update TRANSLATIONS.md

* Adding lesson templates

* Fixing code files that had each other's code in them

* Update README.md

* Adding lesson 16

* Adding virtual camera

* Adding Wio Terminal camera capture

* Adding Wio Terminal code

* Adding SBC classification to lesson 16

* Adding challenge, review and assignment

* Adding images and using new Azure icons

* Update README.md

* Update iot-reference-architecture.png

* Adding structure for JulyOT links

* Removing icons

* Sketchnotes!

* Create lesson-1.png

* Starting on lesson 18

* Updated sketch

* Adding virtual distance sensor

* Adding Wio Terminal image classification

* Update README.md

* Adding structure for project 6 and wio terminal distance sensor

* Adding some of the smart timer stuff

* Updating sketchnotes

* Adding virtual device speech to text

* Adding chapter 21

* Language tweaks

* Lesson 22 stuff

* Update en.json

* Bumping Seeed libraries

* Adding functions lab to lesson 22

* Almost done with LUIS

* Update README.md

* Reverting sunlight sensor change

Fixes #88

* Structure

* Adding speech to text lab for Pi

* Adding virtual device text to speech lab

* Finishing lesson 23

* Clarifying privacy

Fixes #99

* Update README.md

* Update hardware.md

* Update README.md

* Fixing some code samples that were wrong

* Adding more on translation

* Adding more on translator

* Update README.md

* Update README.md

* Adding public access to the container

* First part of retail object detection

* More on stock lesson

* Tweaks to maps lesson

* Update README.md

* Update pi-sensor.md

* IoT Edge install stuff

* Notes on consumer groups and not running the event monitor at the same time

* Assignment for object detector

* Memory notes for speech to text

* Migrating LUIS to an HTTP trigger

* Adding Wio Terminal speech to text

* Changing smart timer to functions from hub

* Changing a param to body to avoid URL encoding
pull/135/head
Jim Bennett authored 3 years ago, committed by GitHub
parent 68a4535193
commit 2c0df12d90

@@ -66,7 +66,7 @@ When you are happy with an iteration, you can publish it to make it available to
Iterations are published from the Custom Vision portal.
1. Launch the Custom Vision portal at [CustomVision.ai](https://customvision.ai) and sign in if you don't have it open already.
1. Launch the Custom Vision portal at [CustomVision.ai](https://customvision.ai) and sign in if you don't have it open already. Then open your `fruit-detector` project.
1. Select the **Performance** tab from the options at the top

@@ -348,10 +348,11 @@ Rather than calling LUIS from the IoT device, you can use serverless code with a
This loads the values you added to the `local.settings.json` file for your LUIS app, creates a credentials object with your API key, then creates a LUIS client object to interact with your LUIS app.
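> 💁 A minimal sketch of the setup this paragraph describes, assuming `luis_key` and `endpoint_url` have already been read from the `local.settings.json` values:
```python
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient

# The credentials object wraps the LUIS API key
credentials = CognitiveServicesCredentials(luis_key)
# The client talks to your LUIS app via its endpoint URL
client = LUISRuntimeClient(endpoint=endpoint_url, credentials=credentials)
```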
1. This HTTP trigger will be called passing the text to understand as an HTTP parameter. These are key/value pairs sent as part of the URL. For this app, the key will be `text` and the value will be the text to understand. The following code extracts the value from the HTTP request, and logs it to the console. Add this code to the `main` function:
1. This HTTP trigger will be called passing the text to understand as JSON, with the text in a property called `text`. The following code extracts the value from the body of the HTTP request, and logs it to the console. Add this code to the `main` function:
```python
text = req.params.get('text')
req_body = req.get_json()
text = req_body['text']
logging.info(f'Request - {text}')
```
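> 💁 Note that `req.get_json()` raises a `ValueError` if the request has no valid JSON body. A minimal, optional sketch of guarding against that (not part of the lesson code):
```python
try:
    req_body = req.get_json()
    text = req_body['text']
except (ValueError, KeyError):
    # No JSON body, or no 'text' property - return a 400 instead of failing
    return func.HttpResponse('Expected a JSON body with a "text" property', status_code=400)
```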
@@ -448,9 +449,18 @@ Rather than calling LUIS from the IoT device, you can use serverless code with a
404 is the status code for *not found*.
1. Run the function app and test it out by passing text to the URL. URLs cannot contain spaces, so you will need to encode spaces in a way that URLs can use. The encoding for a space is `%20`, so replace all the spaces in the text with `%20`. For example, to test "Set a 2 minutes 27 second timer", use the following URL:
1. Run the function app and test it out using curl.
[http://localhost:7071/api/text-to-timer?text=Set%20a%202%20minutes%2027%20second%20timer](http://localhost:7071/api/text-to-timer?text=Set%20a%202%20minutes%2027%20second%20timer)
```sh
curl --request POST 'http://localhost:7071/api/text-to-timer' \
--header 'Content-Type: application/json' \
--include \
--data '{"text":"<text>"}'
```
Replace `<text>` with the text of your request, for example `set a 2 minutes 27 second timer`.
You will see the following output from the functions app:
```output
Functions:
@@ -465,6 +475,20 @@ Rather than calling LUIS from the IoT device, you can use serverless code with a
[2021-06-26T19:45:53.746Z] Executed 'Functions.text-to-timer' (Succeeded, Id=f68bfb90-30e4-47a5-99da-126b66218e81, Duration=1750ms)
```
The call to curl will return the following:
```output
HTTP/1.1 200 OK
Date: Tue, 29 Jun 2021 01:14:11 GMT
Content-Type: text/plain; charset=utf-8
Server: Kestrel
Transfer-Encoding: chunked
{"seconds": 147}
```
The number of seconds for the timer is in the `"seconds"` value.
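> 💁 If you prefer Python to curl, a minimal sketch of the same test using the `requests` package (assuming the function app is still running locally on port 7071):
```python
import requests

# POST the text as JSON, the same way the IoT device will call the function
response = requests.post('http://localhost:7071/api/text-to-timer',
                         json={'text': 'set a 2 minutes 27 second timer'})
print(response.status_code)        # 200 on success
print(response.json()['seconds'])  # for example, 147
```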
> 💁 You can find this code in the [code/functions](code/functions) folder.
### Task - make your function available to your IoT device

@@ -15,7 +15,9 @@ def main(req: func.HttpRequest) -> func.HttpResponse:
credentials = CognitiveServicesCredentials(luis_key)
client = LUISRuntimeClient(endpoint=endpoint_url, credentials=credentials)
text = req.params.get('text')
req_body = req.get_json()
text = req_body['text']
logging.info(f'Request - {text}')
prediction_request = { 'query' : text }
prediction_response = client.prediction.get_slot_prediction(app_id, 'Staging', prediction_request)

@@ -6,8 +6,6 @@ import time
import wave
import threading
from azure.iot.device import IoTHubDeviceClient, Message, MethodResponse
from grove.factory import Factory
button = Factory.getButton('GPIO-HIGH', 5)
@@ -45,13 +43,6 @@ def capture_audio():
speech_api_key = '<key>'
location = '<location>'
language = '<language>'
connection_string = '<connection_string>'
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
def get_access_token():
headers = {
@@ -83,6 +74,28 @@ def convert_speech_to_text(buffer):
else:
return ''
def get_timer_time(text):
url = '<URL>'
body = {
'text': text
}
response = requests.post(url, json=body)
if response.status_code != 200:
return 0
payload = response.json()
return payload['seconds']
def process_text(text):
print(text)
seconds = get_timer_time(text)
if seconds > 0:
create_timer(seconds)
def get_voice():
url = f'https://{location}.tts.speech.microsoft.com/cognitiveservices/voices/list'
@@ -167,18 +180,10 @@ def handle_method_request(request):
if seconds > 0:
create_timer(payload['seconds'])
method_response = MethodResponse.create_from_method_request(request, 200)
device_client.send_method_response(method_response)
device_client.on_method_request_received = handle_method_request
while True:
while not button.is_pressed():
time.sleep(.1)
buffer = capture_audio()
text = convert_speech_to_text(buffer)
if len(text) > 0:
print(text)
message = Message(json.dumps({ 'speech': text }))
device_client.send_message(message)
process_text(text)

@@ -1,19 +1,11 @@
import json
import requests
import threading
import time
from azure.cognitiveservices.speech import SpeechConfig, SpeechRecognizer, SpeechSynthesizer
from azure.iot.device import IoTHubDeviceClient, Message, MethodResponse
speech_api_key = '<key>'
location = '<location>'
language = '<language>'
connection_string = '<connection_string>'
device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
print('Connecting')
device_client.connect()
print('Connected')
recognizer_config = SpeechConfig(subscription=speech_api_key,
region=location,
@@ -21,24 +13,6 @@ recognizer_config = SpeechConfig(subscription=speech_api_key,
recognizer = SpeechRecognizer(speech_config=recognizer_config)
def recognized(args):
if len(args.result.text) > 0:
message = Message(json.dumps({ 'speech': args.result.text }))
device_client.send_message(message)
recognizer.recognized.connect(recognized)
recognizer.start_continuous_recognition()
speech_config = SpeechConfig(subscription=speech_api_key,
region=location)
speech_config.speech_synthesis_language = language
speech_synthesizer = SpeechSynthesizer(speech_config=speech_config)
voices = speech_synthesizer.get_voices_async().get().voices
first_voice = next(x for x in voices if x.locale.lower() == language.lower())
speech_config.speech_synthesis_voice_name = first_voice.short_name
def say(text):
ssml = f'<speak version=\'1.0\' xml:lang=\'{language}\'>'
ssml += f'<voice xml:lang=\'{language}\' name=\'{first_voice.short_name}\'>'
@@ -70,17 +44,43 @@ def create_timer(total_seconds):
announcement += 'timer started.'
say(announcement)
def handle_method_request(request):
if request.name == 'set-timer':
payload = json.loads(request.payload)
seconds = payload['seconds']
if seconds > 0:
create_timer(payload['seconds'])
def get_timer_time(text):
url = '<URL>'
body = {
'text': text
}
method_response = MethodResponse.create_from_method_request(request, 200)
device_client.send_method_response(method_response)
response = requests.post(url, json=body)
device_client.on_method_request_received = handle_method_request
if response.status_code != 200:
return 0
payload = response.json()
return payload['seconds']
def process_text(text):
print(text)
seconds = get_timer_time(text)
if seconds > 0:
create_timer(seconds)
def recognized(args):
process_text(args.result.text)
recognizer.recognized.connect(recognized)
recognizer.start_continuous_recognition()
speech_config = SpeechConfig(subscription=speech_api_key,
region=location)
speech_config.speech_synthesis_language = language
speech_synthesizer = SpeechSynthesizer(speech_config=speech_config)
voices = speech_synthesizer.get_voices_async().get().voices
first_voice = next(x for x in voices if x.locale.lower() == language.lower())
speech_config.speech_synthesis_voice_name = first_voice.short_name
while True:
time.sleep(1)

@@ -76,11 +76,11 @@ def convert_speech_to_text(buffer):
def get_timer_time(text):
url = '<URL>'
params = {
body = {
'text': text
}
response = requests.post(url, params=params)
response = requests.post(url, json=body)
if response.status_code != 200:
return 0

@@ -16,11 +16,11 @@ recognizer = SpeechRecognizer(speech_config=recognizer_config)
def get_timer_time(text):
url = '<URL>'
params = {
body = {
'text': text
}
response = requests.post(url, params=params)
response = requests.post(url, json=body)
if response.status_code != 200:
return 0

@@ -26,14 +26,14 @@ Timers can be set using the Python `threading.Timer` class. This class takes a d
Replace `<URL>` with the URL of your REST endpoint that you built in the last lesson, either on your computer or in the cloud.
1. Add the following code to set the text as a parameter on the URL and make the API call:
1. Add the following code to set the text as a property passed as JSON to the call:
```python
params = {
body = {
'text': text
}
response = requests.post(url, params=params)
response = requests.post(url, json=body)
```
1. Below this, retrieve the `seconds` from the response payload, returning 0 if the call failed:
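A minimal sketch of that step, matching the `get_timer_time` function shown in the device code above:
```python
# Return 0 if the call to the function app failed
if response.status_code != 200:
    return 0

# The number of seconds is in the 'seconds' value of the JSON payload
payload = response.json()
return payload['seconds']
```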

Binary file not shown.

Binary file not shown. (new image, 467 KiB)

Binary file not shown. (new image, 624 KiB)
