Merge branch 'main' into patch-2

Jim Bennett 4 years ago committed by GitHub
commit 01aaed5b6e

@ -4,6 +4,14 @@
> Sketchnote by [Nitya Narasimhan](https://github.com/nitya). Click the image for a larger version.
This lesson was taught as part of the [Hello IoT series](https://youtube.com/playlist?list=PLmsFUfdnGr3xRts0TIwyaHyQuHaNQcb6-) from the [Microsoft Reactor](https://developer.microsoft.com/reactor/?WT.mc_id=academic-17441-jabenn). The lesson was taught as two videos: a 1-hour lesson, and a 1-hour office hours session diving deeper into parts of the lesson and answering questions.
[![Lesson 1: Introduction to IoT](https://img.youtube.com/vi/bVFfcYh6UBw/0.jpg)](https://youtu.be/bVFfcYh6UBw)
[![Lesson 1: Introduction to IoT - Office hours](https://img.youtube.com/vi/YI772q5v3yI/0.jpg)](https://youtu.be/YI772q5v3yI)
> 🎥 Click the images above to watch the videos
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/1)

@ -8,6 +8,6 @@
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Explain the project's benefits and drawbacks | Explains the project's benefits and drawbacks clearly | Explains the project's benefits and drawbacks briefly | Does not explain the project's benefits or drawbacks |

@ -55,11 +55,18 @@ Configure a Python virtual environment and install the pip packages for CounterF
1. Activate the virtual environment:
* On Windows:
* If you are using the Command Prompt, or the Command Prompt through Windows Terminal, run:
```cmd
.venv\Scripts\activate.bat
```
* If you are using PowerShell, run:
```powershell
.\.venv\Scripts\Activate.ps1
```
* On macOS or Linux, run:
@ -67,6 +74,8 @@ Configure a Python virtual environment and install the pip packages for CounterF
source ./.venv/bin/activate
```
> 💁 These commands should be run from the same location you ran the command to create the virtual environment. You will never need to navigate into the `.venv` folder; always run the activate command, and any commands to install packages or run code, from the folder you were in when you created the virtual environment.
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to get the version:
```sh

@ -4,6 +4,14 @@
> Sketchnote by [Nitya Narasimhan](https://github.com/nitya). Click the image for a larger version.
This lesson was taught as part of the [Hello IoT series](https://youtube.com/playlist?list=PLmsFUfdnGr3xRts0TIwyaHyQuHaNQcb6-) from the [Microsoft Reactor](https://developer.microsoft.com/reactor/?WT.mc_id=academic-17441-jabenn). The lesson was taught as two videos: a 1-hour lesson, and a 1-hour office hours session diving deeper into parts of the lesson and answering questions.
[![Lesson 2: A deeper dive into IoT](https://img.youtube.com/vi/t0SySWw3z9M/0.jpg)](https://youtu.be/t0SySWw3z9M)
[![Lesson 2: A deeper dive into IoT - Office hours](https://img.youtube.com/vi/tTZYf9EST1E/0.jpg)](https://youtu.be/tTZYf9EST1E)
> 🎥 Click the images above to watch the videos
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/3)

@ -0,0 +1,12 @@
# Compare and contrast microcontrollers and single-board computers
## Instructions
This lesson covered microcontrollers and single-board computers. Create a table comparing and contrasting them, and note at least 2 reasons why you would use a microcontroller instead of a single-board computer, and at least 2 reasons why you would use a single-board computer instead of a microcontroller.
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Create a table comparing microcontrollers and single-board computers | Creates a list with multiple items correctly comparing and contrasting them | Creates a list with only a couple of items | Can only come up with one item, or no items, to compare and contrast |
| Reasons to choose one over the other | Provides at least two reasons for microcontrollers and at least two for single-board computers | Provides only one or two reasons for microcontrollers and one or two for single-board computers | Cannot provide one or more reasons for either microcontrollers or single-board computers |

@ -4,6 +4,14 @@
> Sketchnote by [Nitya Narasimhan](https://github.com/nitya). Click the image for a larger version.
This lesson was taught as part of the [Hello IoT series](https://youtube.com/playlist?list=PLmsFUfdnGr3xRts0TIwyaHyQuHaNQcb6-) from the [Microsoft Reactor](https://developer.microsoft.com/reactor/?WT.mc_id=academic-17441-jabenn). The lesson was taught as two videos: a 1-hour lesson, and a 1-hour office hours session diving deeper into parts of the lesson and answering questions.
[![Lesson 3: Interact with the Physical World with Sensors and Actuators](https://img.youtube.com/vi/Lqalu1v6aF4/0.jpg)](https://youtu.be/Lqalu1v6aF4)
[![Lesson 3: Interact with the Physical World with Sensors and Actuators - Office hours](https://img.youtube.com/vi/qR3ekcMlLWA/0.jpg)](https://youtu.be/qR3ekcMlLWA)
> 🎥 Click the images above to watch the videos
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/5)

@ -4,6 +4,14 @@
> Sketchnote by [Nitya Narasimhan](https://github.com/nitya). Click the image for a larger version.
This lesson was taught as part of the [Hello IoT series](https://youtube.com/playlist?list=PLmsFUfdnGr3xRts0TIwyaHyQuHaNQcb6-) from the [Microsoft Reactor](https://developer.microsoft.com/reactor/?WT.mc_id=academic-17441-jabenn). The lesson was taught as two videos: a 1-hour lesson, and a 1-hour office hours session diving deeper into parts of the lesson and answering questions.
[![Lesson 4: Connect your Device to the Internet](https://img.youtube.com/vi/O4dd172mZhs/0.jpg)](https://youtu.be/O4dd172mZhs)
[![Lesson 4: Connect your Device to the Internet - Office hours](https://img.youtube.com/vi/j-cVCzRDE2Q/0.jpg)](https://youtu.be/j-cVCzRDE2Q)
> 🎥 Click the images above to watch the videos
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/7)
@ -178,11 +186,18 @@ Configure a Python virtual environment and install the MQTT pip packages.
1. Activate the virtual environment:
* On Windows:
* If you are using the Command Prompt, or the Command Prompt through Windows Terminal, run:
```cmd
.venv\Scripts\activate.bat
```
* If you are using PowerShell, run:
```powershell
.\.venv\Scripts\Activate.ps1
```
* On macOS or Linux, run:
@ -190,6 +205,8 @@ Configure a Python virtual environment and install the MQTT pip packages.
source ./.venv/bin/activate
```
> 💁 These commands should be run from the same location you ran the command to create the virtual environment. You will never need to navigate into the `.venv` folder; always run the activate command, and any commands to install packages or run code, from the folder you were in when you created the virtual environment.
1. Once the virtual environment has been activated, the default `python` command will run the version of Python that was used to create the virtual environment. Run the following to get the version:
```sh

@ -16,7 +16,7 @@ lib_deps =
knolleary/PubSubClient @ 2.8
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -100,8 +100,7 @@ void loop()
doc["light"] = light;
string telemetry;
serializeJson(doc, telemetry);
Serial.print("Sending telemetry ");
Serial.println(telemetry.c_str());

@ -15,7 +15,7 @@ framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -15,8 +15,8 @@ framework = arduino
lib_deps =
knolleary/PubSubClient @ 2.8
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
bblanchon/ArduinoJson @ 6.17.3

@ -74,8 +74,7 @@ void loop()
doc["light"] = light;
string telemetry;
serializeJson(doc, telemetry);
Serial.print("Sending telemetry ");
Serial.println(telemetry.c_str());

@ -25,8 +25,8 @@ Install the Arduino libraries.
```ini
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
```

@ -53,8 +53,7 @@ Publish telemetry to the MQTT broker.
doc["light"] = light;
string telemetry;
serializeJson(doc, telemetry);
Serial.print("Sending telemetry ");
Serial.println(telemetry.c_str());

@ -17,7 +17,7 @@ lib_deps =
knolleary/PubSubClient @ 2.8
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -77,8 +77,7 @@ void loop()
doc["temperature"] = temp_hum_val[1];
string telemetry;
serializeJson(doc, telemetry);
Serial.print("Sending telemetry ");
Serial.println(telemetry.c_str());

@ -16,7 +16,7 @@ lib_deps =
knolleary/PubSubClient @ 2.8
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -100,8 +100,7 @@ void loop()
doc["soil_moisture"] = soil_moisture;
string telemetry;
serializeJson(doc, telemetry);
Serial.print("Sending telemetry ");
Serial.println(telemetry.c_str());

@ -15,8 +15,8 @@ framework = arduino
lib_deps =
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0

@ -120,8 +120,7 @@ void loop()
doc["soil_moisture"] = soil_moisture;
string telemetry;
serializeJson(doc, telemetry);
Serial.print("Sending telemetry ");
Serial.println(telemetry.c_str());

@ -131,11 +131,18 @@ The Azure Functions CLI can be used to create a new Functions app.
1. Activate the virtual environment:
* On Windows:
* If you are using the Command Prompt, or the Command Prompt through Windows Terminal, run:
```cmd
.venv\Scripts\activate.bat
```
* If you are using PowerShell, run:
```powershell
.\.venv\Scripts\Activate.ps1
```
* On macOS or Linux, run:
@ -143,6 +150,8 @@ The Azure Functions CLI can be used to create a new Functions app.
source ./.venv/bin/activate
```
> 💁 These commands should be run from the same location you ran the command to create the virtual environment. You will never need to navigate into the `.venv` folder; always run the activate command, and any commands to install packages or run code, from the folder you were in when you created the virtual environment.
1. Run the following command to create a Functions app in this folder:
```sh

@ -54,4 +54,3 @@
| --------- | ------------------ | ------------- | --------------------- |
| Create HTTP triggers | Created 2 triggers to turn the relay on and off, with appropriate naming | Created 1 correctly named trigger | Was unable to create any triggers |
| Control the relay from the HTTP triggers | Was able to connect both triggers to the IoT Hub and control the relay appropriately | Was able to connect only 1 trigger to the IoT Hub and control the relay appropriately | Was unable to connect any triggers to the IoT Hub |
{"mode":"full","isActive":false}

@ -1,9 +0,0 @@
# Dummy File
This file acts as a placeholder for the `translations` folder. <br>
**Please remove this file after adding the first translation**
For the instructions, follow the directives in the [translations guide](https://github.com/microsoft/IoT-For-Beginners/blob/main/TRANSLATIONS.md) .
## THANK YOU
We truly appreciate your efforts!

@ -0,0 +1,245 @@
# Keep your plant secure
![A sketchnote overview of this lesson](../../../../sketchnotes/lesson-10.jpg)
> Sketchnote by [Nitya Narasimhan](https://github.com/nitya). Click the image for a larger version.
## Pre-lecture quiz
[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/19)
## Introduction
In the last few lessons we built an IoT device to monitor soil and connected it to the cloud. But what happens if hackers are able to take control of our IoT devices? What if they change the soil moisture readings, sending high values so that our plants are never watered, or keep our irrigation system permanently on, drowning our plants in too much water?
In this lesson we will learn about securing IoT devices. As this is the last lesson for this project, we will also learn how to clean up our cloud services to reduce costs.
In this lesson we'll cover:
* [Why secure IoT devices?](#why-secure-iot-devices)
* [Cryptography](#cryptography)
* [IoT device security](#iot-device-security)
* [Generate and use an X.509 certificate](#generate-and-use-an-x509-certificate)
> 🗑 This is the last lesson in this project, so after finishing this lesson and its assignment, we must clean up our cloud services. First make sure the services needed to complete the assignment are finished with.
> The [project clean-up guide](../../../../../translations/clean-up.bn.md) has instructions on how to clean up the cloud services.
## Why secure IoT devices?
IoT security involves ensuring that only the expected devices can connect to our cloud IoT service and send it telemetry, and that only our cloud service can send commands to our devices. IoT data can also be personal, including medical or very intimate data, so the security of the whole system should be considered to stop this data from leaking.
If our IoT application is not secure, there are a number of risks:
* A fake device could send incorrect data to our application. For example, it could report high soil moisture values so that our irrigation system never turns on and our plants die from lack of water.
* Unauthorized users could read personal or business-critical data.
* Hackers could send commands to a device in a way that damages it or any attached hardware.
* By connecting to an IoT device, hackers could gain access to additional networks.
* Malicious users could access personal data and use it for blackmail.
These are real scenarios, and they happen all the time. Some examples were given in earlier lessons; here are some more:
* In 2018, hackers used an open WiFi access point on a fish tank thermostat to gain access to a casino's network to steal data. [Source: The Hacker News - Casino Gets Hacked Through Its Internet-Connected Fish Tank Thermometer](https://thehackernews.com/2018/04/iot-hacking-thermometer.html)
* In 2016, the Mirai botnet launched a denial of service (DoS) attack against Dyn, an internet service provider. The botnet used default usernames and passwords to connect to common IoT devices such as DVRs and cameras, and launched the attack from there. [Source: The Guardian - DDoS attack that disrupted internet was largest of its kind in history, experts say](https://www.theguardian.com/technology/2016/oct/26/ddos-attack-dyn-mirai-botnet)
* Spiral Toys had a database of user data from their CloudPets connected toys publicly available over the internet. [Source: Troy Hunt - Data from connected CloudPets teddy bears leaked and ransomed, exposing kids' voice messages](https://www.troyhunt.com/data-from-connected-cloudpets-teddy-bears-leaked-and-ransomed-exposing-kids-voice-messages/).
* Strava leaked a lot of personal data, including routes, when one athlete passed another. [Source: Kim Komando - Fitness app could lead a stranger right to your home - change this setting](https://www.komando.com/security-privacy/strava-fitness-app-privacy/755349/).
✅ Do some research: Search for more examples of IoT hacks and IoT data breaches, especially with personal items such as internet-connected toothbrushes or scales. Think about the impact these hacks could have on the victims or on customers.
> 💁 Security is a huge topic, and this lesson only covers some of the basics around connecting your device to the cloud. Topics not covered include monitoring data for changes in transit, hacking into devices directly, and changes to device configurations. [Azure Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/?WT.mc_id=academic-17441-jabenn) was built to tackle problems like IoT hacking. It is like the antivirus used on our computers, but designed for small, low-power IoT devices.
## Cryptography
When a device connects to an IoT service, it uses an ID to identify itself. The problem is that this ID can be cloned - a hacker could set up a malicious device that uses the same ID as the real device, but sends wrong data.
![Both valid and malicious devices could use the same ID to send telemetry](../../../../images/iot-device-and-hacked-device-connecting.png)
The solution to this problem is to scramble the data being exchanged, converting it using a value known only to the device and the cloud. This process is called *encryption*, and the value used to convert the data is called the *encryption key*.
![If encryption is used, then only encrypted messages will be accepted, others will be rejected](../../../../images/iot-device-and-hacked-device-connecting-encryption.png)
The cloud service can convert the data back to a readable format using a process called *decryption*, using either the same encryption key or a *decryption key*. If an encrypted message cannot be decrypted by the key, the device is assumed to have been hacked and the message is rejected.
The combined technique of encryption and decryption is called *cryptography*.
### Early cryptography
The earliest forms of cryptography were substitution ciphers, used as long as 3,500 years ago, in which one letter is replaced with another. For example, the [Caesar cipher](https://wikipedia.org/wiki/Caesar_cipher) shifts letters forwards or backwards through the alphabet by a fixed number of positions, known only to the sender and recipient.
The [Vigenère cipher](https://wikipedia.org/wiki/Vigenère_cipher) went further, shifting letters by varying amounts, making the ciphertext even harder to crack.
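To make the idea of a substitution cipher concrete, here is a minimal Caesar cipher in Python (an editor's sketch, not part of the original lesson):
```python
# Shift each letter through the alphabet by a fixed, secret amount.
def caesar(text: str, shift: int) -> str:
    return "".join(
        chr((ord(c) - ord("A") + shift) % 26 + ord("A")) if c.isalpha() else c
        for c in text.upper()
    )

print(caesar("HELLO", 3))   # KHOOR
print(caesar("KHOOR", -3))  # HELLO - shifting back decrypts
```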
Cryptography was used for many purposes, such as protecting a potter's glaze recipe in ancient Mesopotamia, writing secret love letters in India, or keeping ancient Egyptian magical spells secret.
### Modern cryptography
Modern cryptography is far more advanced, making it much harder to crack than the early methods. It uses complex mathematics to encrypt data, with so many possible keys that brute force attacks become impractical.
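As a rough sense of scale (editor's arithmetic, with an assumed guess rate):
```python
# Trying every 128-bit key at a trillion guesses per second.
keys = 2 ** 128
guesses_per_second = 10 ** 12  # an assumed, generously fast attacker
seconds_per_year = 60 * 60 * 24 * 365
print(f"{keys / guesses_per_second / seconds_per_year:.2e} years")  # ~1.08e19
```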
Modern cryptography is used in many ways for secure communication. If you are reading this lesson on GitHub, you may notice that the web site address starts with *https*, meaning that the communication between your browser and GitHub's web server is encrypted. If anyone could read the internet traffic flowing between your browser and GitHub, they would still not be able to read the encrypted data. Your computer can even encrypt all the data on your hard drive, so that if someone steals it, they cannot read your data without your password.
> 🎓 HTTPS stands for HyperText Transfer Protocol **Secure**
Unfortunately, not everything is secure. Some devices have no security at all, others use keys that are easy to crack, or sometimes the same key for all devices of the same type. There are accounts of very personal IoT devices that all have the same password to connect to them over WiFi or Bluetooth. If we can connect to our own device, we can also connect to someone else's (doing so without permission is a crime). Once connected, we could access some very private data, or take control of their device.
> 💁 Despite the complexity of modern cryptography, and claims that encryption would take billions of years to break, the rise of quantum computing has made breaking these encryption keys much easier.
### Symmetric and asymmetric keys
Encryption comes in 2 types - symmetric and asymmetric.
**Symmetric** encryption uses the same key to encrypt and decrypt data. Both the sender and the recipient need to know the same key. This is the least secure level, because the key has to be shared: for the sender to send an encrypted message to the recipient, the sender first has to send the recipient the key.
![Symmetric key encryption uses the same key to encrypt and decrypt a message](../../../../images/send-message-symmetric-key.png)
If the key gets stolen in transit, or the sender or recipient gets hacked and the key is found, the encryption can be cracked.
![Symmetric key encryption is only secure if a hacker doesn't get the key - if so they can intercept and decrypt the message](../../../../images/send-message-symmetric-key-hacker.png)
**Asymmetric** encryption uses 2 keys - an encryption key and a decryption key, referred to as a public/private key pair. The public key is used to encrypt the message, but cannot be used to decrypt it; the private key is used to decrypt the message, but cannot be used to encrypt it.
![Asymmetric encryption uses a different key to encrypt and decrypt. The encryption key is sent to any message senders so they can encrypt a message before sending it to the recipient who owns the keys](../../../../images/send-message-asymmetric.png)
The recipient shares their public key, and the sender uses it to encrypt the message. Once the message is sent, the recipient decrypts it with their private key. Asymmetric encryption is more secure because the recipient's private key is never shared. Anyone can have the public key, but it can only be used to encrypt.
Symmetric encryption is faster than asymmetric, though less secure. In some cases both are used: asymmetric encryption is used to share the symmetric key, and then all the data is encrypted with the symmetric key. This makes sharing the symmetric key between sender and recipient more secure, while keeping encryption and decryption fast.
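The difference between the two can be sketched in a few lines of Python using the third-party [`cryptography`](https://cryptography.io) package (an editor's illustration, not code used by this lesson):
```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Symmetric: one shared key both encrypts and decrypts
key = Fernet.generate_key()
message = Fernet(key).encrypt(b"soil_moisture=450")
assert Fernet(key).decrypt(message) == b"soil_moisture=450"

# Asymmetric: the public key encrypts, only the private key decrypts
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
message = private_key.public_key().encrypt(b"soil_moisture=450", oaep)
assert private_key.decrypt(message, oaep) == b"soil_moisture=450"
```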
## IoT device security
IoT devices can be secured using symmetric or asymmetric keys. Symmetric is easier, but less secure.
### Symmetric keys
We used a connection string when connecting our device to the IoT Hub. For example:
```output
HostName=soil-moisture-sensor.azure-devices.net;DeviceId=soil-moisture-sensor;SharedAccessKey=Bhry+ind7kKEIDxubK61RiEHHRTrPl7HUow8cEm/mU0=
```
This connection string is made up of three parts separated by semicolons, with each part being a key and a value.
| Key | Value | Description |
| --- | ----- | ----------- |
| HostName | `soil-moisture-sensor.azure-devices.net` | The URL of the IoT Hub |
| DeviceID | `soil-moisture-sensor` | The unique ID of the device |
| SharedAccessKey | `Bhry+ind7kKEIDxubK61RiEHHRTrPl7HUow8cEm/mU0=` | A symmetric key known to both the device and the IoT Hub |
The last part of the connection string, the `SharedAccessKey`, is a symmetric key known to both the device and the IoT Hub. It is never sent from the device to the cloud, or from the cloud to the device. Instead, it is used to encrypt the data that is sent.
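As a quick illustration (editor's sketch), the three parts can be pulled out of the connection string with a couple of lines of Python:
```python
conn_str = ("HostName=soil-moisture-sensor.azure-devices.net;"
            "DeviceId=soil-moisture-sensor;"
            "SharedAccessKey=Bhry+ind7kKEIDxubK61RiEHHRTrPl7HUow8cEm/mU0=")

# Split on ';' into parts, then on the first '=' only, as the key ends with '='
parts = dict(part.split("=", 1) for part in conn_str.split(";"))
print(parts["HostName"], parts["DeviceId"], parts["SharedAccessKey"], sep="\n")
```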
✅ Try an experiment. What do you think will happen if you change the `SharedAccessKey` part of the connection string when connecting your IoT device? Try it.
When the device first tries to connect to the IoT Hub, it sends a shared access signature (SAS) token containing the URL, a timestamp for when the signature expires (usually 1 day from the current time), and a signature. The signature consists of the URL from the connection string and the expiry time, encrypted with the shared access key.
The IoT Hub decrypts the signature with the shared access key, and if the decrypted value matches the URL and the expiry time, the device is allowed to connect. It also checks that the current time is before the expiry time, so that a malicious device cannot capture a real device's SAS token and reuse it.
This is a good way to verify that the sender is the correct device. By sending some known data in both decrypted and encrypted form, the server can verify the device by checking that the decrypted version of the encrypted data matches the decrypted version that was sent. If they match, then the sender and the recipient both have the same symmetric encryption key.
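The shape of an SAS token is easier to see in code. This Python sketch follows the documented Azure IoT SAS token algorithm (an editor's example, reusing the sample key from above):
```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(uri: str, key: str, expiry_in_seconds: int = 86400) -> str:
    expiry = int(time.time()) + expiry_in_seconds
    encoded_uri = urllib.parse.quote_plus(uri)
    # Sign the URL and the expiry time with the shared access key
    to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = hmac.new(base64.b64decode(key), to_sign, hashlib.sha256).digest()
    encoded_signature = urllib.parse.quote_plus(base64.b64encode(signature))
    return f"SharedAccessSignature sr={encoded_uri}&sig={encoded_signature}&se={expiry}"

print(generate_sas_token(
    "soil-moisture-sensor.azure-devices.net/devices/soil-moisture-sensor",
    "Bhry+ind7kKEIDxubK61RiEHHRTrPl7HUow8cEm/mU0="))
```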
> 💁 Because of the expiry time, our IoT device needs to know the current time, usually read from an [NTP](https://wikipedia.org/wiki/Network_Time_Protocol) server. If the time is not correct, the connection will fail.
After connecting, all data sent from the device to the IoT Hub, or from the IoT Hub to the device, is encrypted with the shared access key.
✅ What do you think would happen if multiple devices shared the same connection string?
> 💁 Storing this key in code is bad security practice. If a hacker gets hold of our source code, they can get our key. It also makes releasing code harder, as we would have to rebuild with an updated key for every device. It is better to load this key from a hardware security module - a chip on the IoT device that stores encrypted values that our code can access.
>
> When learning IoT it is often easier to put keys in code, as we did in an earlier lesson, but we must make sure this key is not checked into public source code control.
Devices have 2 keys, and 2 corresponding connection strings. This allows us to rotate the keys - that is, switch from one key to another if the first gets compromised, and re-generate the first key.
### X.509 certificates
When using asymmetric encryption with a public/private key pair, we need to provide our public key to anyone who wants to send us data. The problem is, how can the recipient be sure that it really is our public key, and not someone else pretending to be us? Instead of providing a key, we can provide our public key inside a certificate that has been verified by a trusted third party, called an X.509 certificate.
X.509 certificates are digital documents that contain the public part of a public/private key pair. They are usually issued by one of a number of trusted organizations called [certification authorities](https://wikipedia.org/wiki/Certificate_authority) (CAs), which sign the documents to indicate that the keys are valid and really come from the stated user. We trust these certificates the same way we trust a passport or a driving license, because they are issued by a recognized authority. Certificates cost money, so for testing purposes we can also 'self-sign', that is, create a certificate signed by ourselves.
> 💁 We should never use a self-signed certificate for a production release.
These certificates have a number of fields in them, including who the public key is from, the details of the CA who issued it, how long it is valid for, and the public key itself. Before using a certificate, it is good practice to verify it by checking that it was signed by the original CA.
✅ All the certificate fields are described in the [Microsoft Understanding X.509 Public Key Certificates tutorial](https://docs.microsoft.com/azure/iot-hub/tutorial-x509-certificates?WT.mc_id=academic-17441-jabenn#certificate-fields).
When using X.509 certificates, the sender and the recipient each have their own public and private keys, as well as an X.509 certificate containing the public key. They exchange their X.509 certificates in some way, then use each other's public key to encrypt the data they send, and their own private key to decrypt the data they receive.
![Instead of sharing a public key, we can share a certificate. The user of the certificate can verify that it comes from we by checking with the certificate authority who signed it.](../../../../images/send-message-certificate.png)
A big advantage of using X.509 certificates is that they can be shared between devices. We can create one certificate, upload it to the IoT Hub, and use it for all our devices. Each device then only needs to know the private key to decrypt the messages it receives from the IoT Hub.
The certificate used by our device to encrypt messages sent to the IoT Hub is published by Microsoft. It is the same certificate used by a lot of Azure services, and is sometimes built into the SDKs.
> 💁 Remember, a public key really is 'public'. The Azure public key can only be used to encrypt data sent to Azure, not to decrypt it, so it can be shared everywhere, including in source code. For example, you can see it in the [Azure IoT C SDK source code](https://github.com/Azure/azure-iot-sdk-c/blob/master/certs/certs.c).
✅ X.509 certificates come with a lot of jargon. If you run into unfamiliar terms, [The layman's guide to X.509 certificate jargon](https://techcommunity.microsoft.com/t5/internet-of-things/the-layman-s-guide-to-x-509-certificate-jargon/ba-p/2203540?WT.mc_id=academic-17441-jabenn) is a good reference.
## Generate and use an X.509 certificate
The steps to generate an X.509 certificate are:
1. Create a public/private key pair. One of the most widely used algorithms for this is [Rivest-Shamir-Adleman](https://wikipedia.org/wiki/RSA_(cryptosystem)) (RSA).
1. Submit the public key with associated data for signing, either by a CA or by self-signing.
The Azure CLI has commands to create a new device identity in the IoT Hub, automatically generating the public/private key pair and creating a self-signed certificate.
> 💁 If you want to see the detailed steps rather than using the CLI, follow the [Using OpenSSL to create self-signed certificates tutorial in the Microsoft IoT Hub documentation](https://docs.microsoft.com/azure/iot-hub/tutorial-x509-self-sign?WT.mc_id=academic-17441-jabenn).
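To see roughly what the CLI does under the hood, here is an editor's sketch that generates an RSA key pair and a self-signed certificate using the third-party `cryptography` package (the file names are arbitrary):
```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "soil-moisture-sensor-x509")])

cert = (x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # issuer == subject: a self-signed certificate
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256()))

with open("device-key.pem", "wb") as f:
    f.write(key.private_bytes(serialization.Encoding.PEM,
                              serialization.PrivateFormat.TraditionalOpenSSL,
                              serialization.NoEncryption()))
with open("device-cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```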
### Task - create a device identity using an X.509 certificate
1. Run the following command to register a new device identity, automatically generating the keys and certificates:
```sh
az iot hub device-identity create --device-id soil-moisture-sensor-x509 \
--am x509_thumbprint \
--output-dir . \
--hub-name <hub_name>
```
Replace `<hub_name>` with the name you used for your IoT Hub.
This creates a device with an ID of `soil-moisture-sensor-x509`, to distinguish it from the device you created in the previous lesson. It also creates 2 files:
* `soil-moisture-sensor-x509-key.pem` - this file contains the device's private key.
* `soil-moisture-sensor-x509-cert.pem` - this file contains the X.509 certificate.
Keep these files safe! They should never be publicly accessible.
### Task - use the X.509 certificate in device code
Follow the relevant guide to use the X.509 certificate in your device code:
* [Arduino - Wio Terminal](wio-terminal-x509.md)
* [Single-board computer - Raspberry Pi/Virtual IoT device](single-board-computer-x509.md)
---
## 🚀 Challenge
There are multiple ways to create, manage, and delete Azure services. One of them is the [Azure Portal](https://portal.azure.com?WT.mc_id=academic-17441-jabenn) - a web-based interface from which we can easily work with Azure services.
Visit [portal.azure.com](https://portal.azure.com?WT.mc_id=academic-17441-jabenn) and take a look around. Create an IoT Hub using the portal, then delete it.
**Hint** - when creating services through the portal, we don't need to create a resource group up front; one can be created when the service is created. Make sure to delete it when done!
Documentation, tutorials, and guides for the Azure Portal can be found in the [Azure portal documentation](https://docs.microsoft.com/azure/azure-portal/?WT.mc_id=academic-17441-jabenn).
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/20)
## Review & Self Study
* Read about the history of cryptography on the [History of Cryptography page on Wikipedia](https://wikipedia.org/wiki/History_of_cryptography).
* Read more about X.509 certificates on the [X.509 page on Wikipedia](https://wikipedia.org/wiki/X.509).
## Assignment
[Build a new IoT device](assignment.bn.md)

@ -0,0 +1,15 @@
# Build a new IoT device
## Instructions
Over the past 6 lessons we have learned how to use IoT devices in digital agriculture, gathering data to predict plant growth and automating irrigation based on soil moisture readings.
Build a new IoT device using sensors and actuators of your choice. Send telemetry to an IoT Hub and use it to control an actuator via serverless code. You can use sensors and actuators you have already worked with, or try something completely new.
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| --------- | ------------------ | -------------- | -------------------- |
| Code an IoT device to use sensors and actuators | Built a working IoT device with a sensor and an actuator | Built a working IoT device with either a sensor or an actuator | Was unable to build a working IoT device with a sensor and an actuator |
| Connect the IoT device to an IoT Hub | Was able to deploy an IoT Hub, send telemetry to it, and receive commands from it | Was able to deploy an IoT Hub and either send telemetry or receive commands | Was unable to deploy an IoT Hub and connect to it |
| Control an actuator with serverless code | Was able to deploy an Azure Function triggered by telemetry events to control the actuator | Was able to create an Azure Function triggered by telemetry events but could not use the actuator | Was unable to deploy an Azure Function |

@ -15,8 +15,8 @@ framework = arduino
lib_deps =
bblanchon/ArduinoJson @ 6.17.3
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0

@ -14,8 +14,8 @@ board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0

@ -6,6 +6,8 @@ The latest iterations are now part of our smart devices. In kitchens in homes al
In these 4 lessons you'll learn how to build a smart timer, using AI to recognize your voice, understand what you are asking for, and reply with information about your timer. You'll also add support for multiple languages.
> ⚠️ Working with speech and microphone data uses a lot of memory, meaning it is easy to hit limits on microcontrollers. The project here works around these issues, but be aware that the Wio Terminal labs are complex and may take more time than the other labs in this curriculum.
> 💁 These lessons will use some cloud resources. If you don't complete all the lessons in this project, make sure you [clean up your project](../clean-up.md).
## Topics

@ -13,7 +13,7 @@ platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
build_flags =
-DSFUD_USING_QSPI

@ -13,8 +13,8 @@ platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1

@ -67,6 +67,11 @@ public:
return text;
}
String AccessToken()
{
return _access_token;
}
private:
String getAccessToken()
{

@ -24,8 +24,8 @@ Once each buffer has been captured, it can be written to the flash memory. Flash
```ini
lib_deps =
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
```
1. Open the `main.cpp` file and add the following include directive for the flash memory library to the top of the file:

@ -18,6 +18,8 @@ To connect the ReSpeaker 2-Mics Pi Hat you will need 40 pin-to-pin (also referre
> 💁 If you are comfortable soldering, then you can use the [40 Pin Raspberry Pi Hat Adapter Board For Wio Terminal](https://www.seeedstudio.com/40-Pin-Raspberry-Pi-Hat-Adapter-Board-For-Wio-Terminal-p-4730.html) to connect the ReSpeaker.
You will also need an SD card to use to download and playback audio. The Wio Terminal only supports SD Cards up to 16GB in size, and these need to be formatted as FAT32 or exFAT.
### Task - connect the ReSpeaker Pi Hat
1. With the Wio Terminal powered off, connect the ReSpeaker 2-Mics Pi Hat to the Wio Terminal using the jumper leads and the GPIO sockets on the back of the Wio Terminal:
@ -59,3 +61,15 @@ To connect the ReSpeaker 2-Mics Pi Hat you will need 40 pin-to-pin (also referre
* If you are using a speaker with a 3.5mm jack, or headphones, insert them into the 3.5mm jack socket.
![A speaker connected to the ReSpeaker via the 3.5mm jack socket](../../../images/respeaker-35mm-speaker.png)
### Task - set up the SD card
1. Connect the SD card to your computer, using an external reader if you don't have an SD card slot.
1. Format the SD card using the appropriate tool on your computer, making sure to use a FAT32 or exFAT file system.
1. Insert the SD card into the SD Card slot on the left-hand side of the Wio Terminal, just below the power button. Make sure the card is all the way in and clicks in - you may need a thin tool or another SD Card to help push it all the way in.
![Inserting the SD card into the SD card slot below the power switch](../../../images/wio-sd-card.png)
> 💁 To eject the SD Card, you need to push it in slightly and it will eject. You will need a thin tool to do this such as a flat-head screwdriver or another SD Card.

@ -180,6 +180,15 @@ The audio can be sent to the speech service using the REST API. To use the speec
This code builds the URL for the token issuer API using the location of the speech resource. It then creates an `HTTPClient` to make the web request, setting it up to use the WiFi client configured with the token endpoint's certificate. It sets the API key as a header for the call, then makes a POST request to get the access token, retrying if it gets any errors. Finally, the access token is returned.
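The same call is easy to prototype in Python before wiring it into device code (an editor's sketch; fill in the key and location of your own speech resource):
```python
import requests

SPEECH_LOCATION = "<location>"  # placeholder: your speech resource location
SPEECH_API_KEY = "<api key>"    # placeholder: your speech resource API key

token_url = f"https://{SPEECH_LOCATION}.api.cognitive.microsoft.com/sts/v1.0/issuetoken"
response = requests.post(token_url,
                         headers={"Ocp-Apim-Subscription-Key": SPEECH_API_KEY})
response.raise_for_status()
access_token = response.text  # a bearer token that is valid for a limited time
```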
1. To the `public` section, add a method to get the access token. This will be needed in later lessons to convert text to speech.
```cpp
String AccessToken()
{
return _access_token;
}
```
1. To the `public` section, add an `init` method that sets up the token client:
```cpp
@ -497,7 +506,7 @@ The audio can be sent to the speech service using the REST API. To use the speec
Serial.println(text);
```
1. Build this code, upload it to your Wio Terminal and test it out through the serial monitor. Once you see `Ready` in the serial monitor, press the C button (the one on the left-hand side, closest to the power switch), and speak. 4 seconds of audio will be captured, then converted to text.
```output
--- Available filters and text transformations: colorize, debug, default, direct, hexlify, log2file, nocontrol, printable, send_on_enter, time

@ -274,7 +274,7 @@ Rather than calling LUIS from the IoT device, you can use serverless code with a
func new --name text-to-timer --template "HTTP trigger"
```
This will create an HTTP trigger called `text-to-timer`.
1. Test the HTTP trigger by running the functions app. When it runs you will see the endpoint listed in the output:
@ -493,9 +493,35 @@ Rather than calling LUIS from the IoT device, you can use serverless code with a
### Task - make your function available to your IoT device
1. For your IoT device to call your REST endpoint, it will need to know the URL. When you accessed it earlier, you used `localhost`, which is a shortcut to access REST endpoints on your local machine. To allow your IoT device to get access, you need to either publish to the cloud, or get your IP address to access it locally.
> ⚠️ If you are using a Wio Terminal, it is easier to run the function app locally, as there is a dependency on libraries that means you cannot deploy the function app in the same way as you have done before. Run the function app locally and access it via your computer's IP address. If you do want to deploy to the cloud, information will be provided in a later lesson on how to do this.
* Publish the Functions app - follow the instructions in earlier lessons to publish your functions app to the cloud. Once published, the URL will be `https://<APP_NAME>.azurewebsites.net/api/text-to-timer`, where `<APP_NAME>` will be the name of your functions app. Make sure to also publish your local settings.
HTTP triggers are secured by default with a function app key. To get this key, run the following command:
```sh
az functionapp keys list --resource-group smart-timer \
--name <APP_NAME>
```
Copy the value of the `default` entry from the `functionKeys` section.
```output
{
"functionKeys": {
"default": "sQO1LQaeK9N1qYD6SXeb/TctCmwQEkToLJU6Dw8TthNeUH8VA45hlA=="
},
"masterKey": "RSKOAIlyvvQEQt9dfpabJT018scaLpQu9p1poHIMCxx5LYrIQZyQ/g==",
"systemKeys": {}
}
```
This key will need to be added as a query parameter to the URL, so the final URL will be `https://<APP_NAME>.azurewebsites.net/api/text-to-timer?code=<FUNCTION_KEY>`, where `<APP_NAME>` will be the name of your functions app, and `<FUNCTION_KEY>` will be your default function key.
> 💁 You can change the type of authorization of the HTTP trigger using the `authlevel` setting in the `function.json` file. You can read more about this in the [configuration section of the Azure Functions HTTP trigger documentation on Microsoft docs](https://docs.microsoft.com/azure/azure-functions/functions-bindings-http-webhook-trigger?tabs=python&WT.mc_id=academic-17441-jabenn#configuration).
* Run the functions app locally, and access using the IP address - you can get the IP address of your computer on your local network, and use that to build the URL.
Find your IP address:
@ -504,11 +530,13 @@ Rather than calling LUIS from the IoT device, you can use serverless code with a
* On macOS, follow the [how to find your IP address on a Mac guide](https://www.hellotech.com/guide/for/how-to-find-ip-address-on-mac)
* On Linux, follow the section on finding your private IP address in the [how to find your IP address in Linux guide](https://opensource.com/article/18/5/how-find-ip-address-linux)
Once you have your IP address, you will be able to access the function at `http://<IP_ADDRESS>:7071/api/text-to-timer`, where `<IP_ADDRESS>` will be your IP address, for example `http://192.168.1.10:7071/api/text-to-timer`.
> 💁 Note that this uses port 7071, so after the IP address you will need to have `:7071`.
> 💁 This will only work if your IoT device is on the same network as your computer.
1. Test the endpoint by accessing it using curl.
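If you prefer to script the check, a small Python sketch (editor's example; substitute your own URL) does the same job as curl:
```python
import requests

# Use http://<IP_ADDRESS>:7071/api/text-to-timer when running locally, or the
# published URL with ?code=<FUNCTION_KEY> appended when running in the cloud.
url = "http://192.168.1.10:7071/api/text-to-timer"

response = requests.post(url, json={"text": "set a timer for 3 minutes and 59 seconds"})
print(response.status_code)
print(response.json())  # expected: {"seconds": 239}
```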
---

@ -0,0 +1,26 @@
import json
import os
import requests
import azure.functions as func
def main(req: func.HttpRequest) -> func.HttpResponse:
location = os.environ['SPEECH_LOCATION']
speech_key = os.environ['SPEECH_KEY']
req_body = req.get_json()
language = req_body['language']
url = f'https://{location}.tts.speech.microsoft.com/cognitiveservices/voices/list'
headers = {
'Ocp-Apim-Subscription-Key': speech_key
}
response = requests.get(url, headers=headers)
voices_json = json.loads(response.text)
voices = filter(lambda x: x['Locale'].lower() == language.lower(), voices_json)
voices = map(lambda x: x['ShortName'], voices)
return func.HttpResponse(json.dumps(list(voices)), status_code=200)
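As a quick smoke test (editor's sketch, assuming the functions app is running locally on the default `func start` port of 7071):
```python
import requests

response = requests.post("http://localhost:7071/api/get-voices",
                         json={"language": "en-GB"})
print(response.json())  # a list of voice short names for the requested locale
```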

@ -0,0 +1,20 @@
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}

@ -0,0 +1,15 @@
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[2.*, 3.0.0)"
}
}

@ -0,0 +1,12 @@
{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "python",
"AzureWebJobsStorage": "",
"LUIS_KEY": "<primary key>",
"LUIS_ENDPOINT_URL": "<endpoint url>",
"LUIS_APP_ID": "<app id>",
"SPEECH_KEY": "<key>",
"SPEECH_LOCATION": "<location>"
}
}

@ -0,0 +1,5 @@
# Do not include azure-functions-worker as it may conflict with the Azure Functions platform
azure-functions
azure-cognitiveservices-language-luis
librosa

@ -0,0 +1,52 @@
import io
import os
import requests
import librosa
import soundfile as sf
import azure.functions as func
location = os.environ['SPEECH_LOCATION']
speech_key = os.environ['SPEECH_KEY']
def get_access_token():
headers = {
'Ocp-Apim-Subscription-Key': speech_key
}
token_endpoint = f'https://{location}.api.cognitive.microsoft.com/sts/v1.0/issuetoken'
response = requests.post(token_endpoint, headers=headers)
return str(response.text)
playback_format = 'riff-48khz-16bit-mono-pcm'
def main(req: func.HttpRequest) -> func.HttpResponse:
req_body = req.get_json()
language = req_body['language']
voice = req_body['voice']
text = req_body['text']
url = f'https://{location}.tts.speech.microsoft.com/cognitiveservices/v1'
headers = {
'Authorization': 'Bearer ' + get_access_token(),
'Content-Type': 'application/ssml+xml',
'X-Microsoft-OutputFormat': playback_format
}
ssml = f'<speak version=\'1.0\' xml:lang=\'{language}\'>'
ssml += f'<voice xml:lang=\'{language}\' name=\'{voice}\'>'
ssml += text
ssml += '</voice>'
ssml += '</speak>'
response = requests.post(url, headers=headers, data=ssml.encode('utf-8'))
raw_audio, sample_rate = librosa.load(io.BytesIO(response.content), sr=48000)
resampled = librosa.resample(raw_audio, sample_rate, 44100)
output_buffer = io.BytesIO()
sf.write(output_buffer, resampled, 44100, 'PCM_16', format='wav')
output_buffer.seek(0)
return func.HttpResponse(output_buffer.read(), status_code=200)
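Called locally, this trigger returns WAV audio that can be written straight to disk (editor's sketch; the voice name is an example, use one returned by `get-voices`):
```python
import requests

response = requests.post("http://localhost:7071/api/text-to-speech",
                         json={"language": "en-GB",
                               "voice": "en-GB-LibbyNeural",
                               "text": "3 minutes 59 seconds"})

with open("speech.wav", "wb") as audio_file:
    audio_file.write(response.content)  # 44.1kHz, 16-bit, mono PCM WAV
```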

@ -0,0 +1,20 @@
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}

@ -0,0 +1,46 @@
import logging
import azure.functions as func
import json
import os
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials
def main(req: func.HttpRequest) -> func.HttpResponse:
luis_key = os.environ['LUIS_KEY']
endpoint_url = os.environ['LUIS_ENDPOINT_URL']
app_id = os.environ['LUIS_APP_ID']
credentials = CognitiveServicesCredentials(luis_key)
client = LUISRuntimeClient(endpoint=endpoint_url, credentials=credentials)
req_body = req.get_json()
text = req_body['text']
logging.info(f'Request - {text}')
prediction_request = { 'query' : text }
prediction_response = client.prediction.get_slot_prediction(app_id, 'Staging', prediction_request)
if prediction_response.prediction.top_intent == 'set timer':
numbers = prediction_response.prediction.entities['number']
time_units = prediction_response.prediction.entities['time unit']
total_seconds = 0
for i in range(0, len(numbers)):
number = numbers[i]
time_unit = time_units[i][0]
if time_unit == 'minute':
total_seconds += number * 60
else:
total_seconds += number
logging.info(f'Timer required for {total_seconds} seconds')
payload = {
'seconds': total_seconds
}
return func.HttpResponse(json.dumps(payload), status_code=200)
return func.HttpResponse(status_code=404)
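A local smoke test for both code paths (editor's sketch; the second call assumes your LUIS model does not match the phrase to the 'set timer' intent):
```python
import requests

url = "http://localhost:7071/api/text-to-timer"

# A phrase matching the 'set timer' intent returns the parsed seconds
print(requests.post(url, json={"text": "set a timer for 1 minute"}).json())
# {"seconds": 60}

# Anything else falls through to the 404 response
print(requests.post(url, json={"text": "what is the weather"}).status_code)
```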

@ -0,0 +1,20 @@
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}

@ -0,0 +1,39 @@
This directory is intended for project header files.
A header file is a file containing C declarations and macro definitions
to be shared between several project source files. You request the use of a
header file in your project source file (C, C++, etc) located in `src` folder
by including it, with the C preprocessing directive `#include'.
```src/main.c
#include "header.h"
int main (void)
{
...
}
```
Including a header file produces the same results as copying the header file
into each source file that needs it. Such copying would be time-consuming
and error-prone. With a header file, the related declarations appear
in only one place. If they need to be changed, they can be changed in one
place, and programs that include the header file will automatically use the
new version when next recompiled. The header file eliminates the labor of
finding and changing all the copies as well as the risk that a failure to
find one copy will result in inconsistencies within a program.
In C, the usual convention is to give header files names that end with `.h'.
It is most portable to use only letters, digits, dashes, and underscores in
header file names, and at most one dot.
Read more about using header files in official GCC documentation:
* Include Syntax
* Include Operation
* Once-Only Headers
* Computed Includes
https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html

@ -0,0 +1,46 @@
This directory is intended for project specific (private) libraries.
PlatformIO will compile them to static libraries and link into executable file.
The source code of each library should be placed in its own separate directory
("lib/your_library_name/[here are source files]").
For example, see a structure of the following two libraries `Foo` and `Bar`:
|--lib
| |
| |--Bar
| | |--docs
| | |--examples
| | |--src
| | |- Bar.c
| | |- Bar.h
| | |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
| |
| |--Foo
| | |- Foo.c
| | |- Foo.h
| |
| |- README --> THIS FILE
|
|- platformio.ini
|--src
|- main.c
and a contents of `src/main.c`:
```
#include <Foo.h>
#include <Bar.h>
int main (void)
{
...
}
```
PlatformIO Library Dependency Finder will find automatically dependent
libraries scanning project source files.
More information about PlatformIO Library Dependency Finder
- https://docs.platformio.org/page/librarymanager/ldf.html

@ -0,0 +1,23 @@
; PlatformIO Project Configuration File
;
; Build options: build flags, source filter
; Upload options: custom upload port, speed and extra flags
; Library options: dependencies, extra library storages
; Advanced options: extra scripting
;
; Please visit documentation for the other options and examples
; https://docs.platformio.org/page/projectconf.html
[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0
bblanchon/ArduinoJson @ 6.17.3
contrem/arduino-timer @ 2.3.0

@ -0,0 +1,93 @@
#pragma once
#define RATE 16000
#define SAMPLE_LENGTH_SECONDS 4
#define SAMPLES (RATE * SAMPLE_LENGTH_SECONDS)
#define BUFFER_SIZE ((SAMPLES * 2) + 44)
#define ADC_BUF_LEN 1600
const char *SSID = "<SSID>";
const char *PASSWORD = "<PASSWORD>";
const char *SPEECH_API_KEY = "<API_KEY>";
const char *SPEECH_LOCATION = "<LOCATION>";
const char *LANGUAGE = "<LANGUAGE>";
const char *TOKEN_URL = "https://%s.api.cognitive.microsoft.com/sts/v1.0/issuetoken";
const char *SPEECH_URL = "https://%s.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=%s";
const char *TEXT_TO_TIMER_FUNCTION_URL = "http://<IP_ADDRESS>:7071/api/text-to-timer";
const char *GET_VOICES_FUNCTION_URL = "http://<IP_ADDRESS>:7071/api/get-voices";
const char *TEXT_TO_SPEECH_FUNCTION_URL = "http://<IP_ADDRESS>:7071/api/text-to-speech";
const char *TOKEN_CERTIFICATE =
"-----BEGIN CERTIFICATE-----\r\n"
"MIIF8zCCBNugAwIBAgIQAueRcfuAIek/4tmDg0xQwDANBgkqhkiG9w0BAQwFADBh\r\n"
"MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\r\n"
"d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH\r\n"
"MjAeFw0yMDA3MjkxMjMwMDBaFw0yNDA2MjcyMzU5NTlaMFkxCzAJBgNVBAYTAlVT\r\n"
"MR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKjAoBgNVBAMTIU1pY3Jv\r\n"
"c29mdCBBenVyZSBUTFMgSXNzdWluZyBDQSAwNjCCAiIwDQYJKoZIhvcNAQEBBQAD\r\n"
"ggIPADCCAgoCggIBALVGARl56bx3KBUSGuPc4H5uoNFkFH4e7pvTCxRi4j/+z+Xb\r\n"
"wjEz+5CipDOqjx9/jWjskL5dk7PaQkzItidsAAnDCW1leZBOIi68Lff1bjTeZgMY\r\n"
"iwdRd3Y39b/lcGpiuP2d23W95YHkMMT8IlWosYIX0f4kYb62rphyfnAjYb/4Od99\r\n"
"ThnhlAxGtfvSbXcBVIKCYfZgqRvV+5lReUnd1aNjRYVzPOoifgSx2fRyy1+pO1Uz\r\n"
"aMMNnIOE71bVYW0A1hr19w7kOb0KkJXoALTDDj1ukUEDqQuBfBxReL5mXiu1O7WG\r\n"
"0vltg0VZ/SZzctBsdBlx1BkmWYBW261KZgBivrql5ELTKKd8qgtHcLQA5fl6JB0Q\r\n"
"gs5XDaWehN86Gps5JW8ArjGtjcWAIP+X8CQaWfaCnuRm6Bk/03PQWhgdi84qwA0s\r\n"
"sRfFJwHUPTNSnE8EiGVk2frt0u8PG1pwSQsFuNJfcYIHEv1vOzP7uEOuDydsmCjh\r\n"
"lxuoK2n5/2aVR3BMTu+p4+gl8alXoBycyLmj3J/PUgqD8SL5fTCUegGsdia/Sa60\r\n"
"N2oV7vQ17wjMN+LXa2rjj/b4ZlZgXVojDmAjDwIRdDUujQu0RVsJqFLMzSIHpp2C\r\n"
"Zp7mIoLrySay2YYBu7SiNwL95X6He2kS8eefBBHjzwW/9FxGqry57i71c2cDAgMB\r\n"
"AAGjggGtMIIBqTAdBgNVHQ4EFgQU1cFnOsKjnfR3UltZEjgp5lVou6UwHwYDVR0j\r\n"
"BBgwFoAUTiJUIBiV5uNu5g/6+rkS7QYXjzkwDgYDVR0PAQH/BAQDAgGGMB0GA1Ud\r\n"
"JQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMHYG\r\n"
"CCsGAQUFBwEBBGowaDAkBggrBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQu\r\n"
"Y29tMEAGCCsGAQUFBzAChjRodHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGln\r\n"
"aUNlcnRHbG9iYWxSb290RzIuY3J0MHsGA1UdHwR0MHIwN6A1oDOGMWh0dHA6Ly9j\r\n"
"cmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5jcmwwN6A1oDOG\r\n"
"MWh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5j\r\n"
"cmwwHQYDVR0gBBYwFDAIBgZngQwBAgEwCAYGZ4EMAQICMBAGCSsGAQQBgjcVAQQD\r\n"
"AgEAMA0GCSqGSIb3DQEBDAUAA4IBAQB2oWc93fB8esci/8esixj++N22meiGDjgF\r\n"
"+rA2LUK5IOQOgcUSTGKSqF9lYfAxPjrqPjDCUPHCURv+26ad5P/BYtXtbmtxJWu+\r\n"
"cS5BhMDPPeG3oPZwXRHBJFAkY4O4AF7RIAAUW6EzDflUoDHKv83zOiPfYGcpHc9s\r\n"
"kxAInCedk7QSgXvMARjjOqdakor21DTmNIUotxo8kHv5hwRlGhBJwps6fEVi1Bt0\r\n"
"trpM/3wYxlr473WSPUFZPgP1j519kLpWOJ8z09wxay+Br29irPcBYv0GMXlHqThy\r\n"
"8y4m/HyTQeI2IMvMrQnwqPpY+rLIXyviI2vLoI+4xKE4Rn38ZZ8m\r\n"
"-----END CERTIFICATE-----\r\n";
const char *SPEECH_CERTIFICATE =
"-----BEGIN CERTIFICATE-----\r\n"
"MIIF8zCCBNugAwIBAgIQCq+mxcpjxFFB6jvh98dTFzANBgkqhkiG9w0BAQwFADBh\r\n"
"MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\r\n"
"d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH\r\n"
"MjAeFw0yMDA3MjkxMjMwMDBaFw0yNDA2MjcyMzU5NTlaMFkxCzAJBgNVBAYTAlVT\r\n"
"MR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKjAoBgNVBAMTIU1pY3Jv\r\n"
"c29mdCBBenVyZSBUTFMgSXNzdWluZyBDQSAwMTCCAiIwDQYJKoZIhvcNAQEBBQAD\r\n"
"ggIPADCCAgoCggIBAMedcDrkXufP7pxVm1FHLDNA9IjwHaMoaY8arqqZ4Gff4xyr\r\n"
"RygnavXL7g12MPAx8Q6Dd9hfBzrfWxkF0Br2wIvlvkzW01naNVSkHp+OS3hL3W6n\r\n"
"l/jYvZnVeJXjtsKYcXIf/6WtspcF5awlQ9LZJcjwaH7KoZuK+THpXCMtzD8XNVdm\r\n"
"GW/JI0C/7U/E7evXn9XDio8SYkGSM63aLO5BtLCv092+1d4GGBSQYolRq+7Pd1kR\r\n"
"EkWBPm0ywZ2Vb8GIS5DLrjelEkBnKCyy3B0yQud9dpVsiUeE7F5sY8Me96WVxQcb\r\n"
"OyYdEY/j/9UpDlOG+vA+YgOvBhkKEjiqygVpP8EZoMMijephzg43b5Qi9r5UrvYo\r\n"
"o19oR/8pf4HJNDPF0/FJwFVMW8PmCBLGstin3NE1+NeWTkGt0TzpHjgKyfaDP2tO\r\n"
"4bCk1G7pP2kDFT7SYfc8xbgCkFQ2UCEXsaH/f5YmpLn4YPiNFCeeIida7xnfTvc4\r\n"
"7IxyVccHHq1FzGygOqemrxEETKh8hvDR6eBdrBwmCHVgZrnAqnn93JtGyPLi6+cj\r\n"
"WGVGtMZHwzVvX1HvSFG771sskcEjJxiQNQDQRWHEh3NxvNb7kFlAXnVdRkkvhjpR\r\n"
"GchFhTAzqmwltdWhWDEyCMKC2x/mSZvZtlZGY+g37Y72qHzidwtyW7rBetZJAgMB\r\n"
"AAGjggGtMIIBqTAdBgNVHQ4EFgQUDyBd16FXlduSzyvQx8J3BM5ygHYwHwYDVR0j\r\n"
"BBgwFoAUTiJUIBiV5uNu5g/6+rkS7QYXjzkwDgYDVR0PAQH/BAQDAgGGMB0GA1Ud\r\n"
"JQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMHYG\r\n"
"CCsGAQUFBwEBBGowaDAkBggrBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQu\r\n"
"Y29tMEAGCCsGAQUFBzAChjRodHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGln\r\n"
"aUNlcnRHbG9iYWxSb290RzIuY3J0MHsGA1UdHwR0MHIwN6A1oDOGMWh0dHA6Ly9j\r\n"
"cmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5jcmwwN6A1oDOG\r\n"
"MWh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5j\r\n"
"cmwwHQYDVR0gBBYwFDAIBgZngQwBAgEwCAYGZ4EMAQICMBAGCSsGAQQBgjcVAQQD\r\n"
"AgEAMA0GCSqGSIb3DQEBDAUAA4IBAQAlFvNh7QgXVLAZSsNR2XRmIn9iS8OHFCBA\r\n"
"WxKJoi8YYQafpMTkMqeuzoL3HWb1pYEipsDkhiMnrpfeYZEA7Lz7yqEEtfgHcEBs\r\n"
"K9KcStQGGZRfmWU07hPXHnFz+5gTXqzCE2PBMlRgVUYJiA25mJPXfB00gDvGhtYa\r\n"
"+mENwM9Bq1B9YYLyLjRtUz8cyGsdyTIG/bBM/Q9jcV8JGqMU/UjAdh1pFyTnnHEl\r\n"
"Y59Npi7F87ZqYYJEHJM2LGD+le8VsHjgeWX2CJQko7klXvcizuZvUEDTjHaQcs2J\r\n"
"+kPgfyMIOY1DMJ21NxOJ2xPRC/wAh/hzSBRVtoAnyuxtkZ4VjIOh\r\n"
"-----END CERTIFICATE-----\r\n";

@ -0,0 +1,69 @@
#pragma once
#include <Arduino.h>
#include <HTTPClient.h>
#include <sfud.h>
#include "config.h"
class FlashStream : public Stream
{
public:
FlashStream()
{
_pos = 0;
_flash_address = 0;
_flash = sfud_get_device_table() + 0;
populateBuffer();
}
virtual size_t write(uint8_t val)
{
return 0;
}
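// Report how many bytes are left to stream, capped at one buffer's worth.
// Returning -1 signals to the HTTP client that the end of the stream was reached.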
virtual int available()
{
int remaining = BUFFER_SIZE - ((_flash_address - HTTP_TCP_BUFFER_SIZE) + _pos);
int bytes_available = min(HTTP_TCP_BUFFER_SIZE, remaining);
if (bytes_available == 0)
{
bytes_available = -1;
}
return bytes_available;
}
virtual int read()
{
int retVal = _buffer[_pos++];
if (_pos == HTTP_TCP_BUFFER_SIZE)
{
populateBuffer();
}
return retVal;
}
virtual int peek()
{
return _buffer[_pos];
}
private:
void populateBuffer()
{
sfud_read(_flash, _flash_address, HTTP_TCP_BUFFER_SIZE, _buffer);
_flash_address += HTTP_TCP_BUFFER_SIZE;
_pos = 0;
}
size_t _pos;
size_t _flash_address;
const sfud_flash *_flash;
byte _buffer[HTTP_TCP_BUFFER_SIZE];
};

@ -0,0 +1,60 @@
#pragma once
#include <Arduino.h>
#include <sfud.h>
class FlashWriter
{
public:
void init()
{
_flash = sfud_get_device_table() + 0;
_sfudBufferSize = _flash->chip.erase_gran;
_sfudBuffer = new byte[_sfudBufferSize];
_sfudBufferPos = 0;
_sfudBufferWritePos = 0;
}
void reset()
{
_sfudBufferPos = 0;
_sfudBufferWritePos = 0;
}
void writeSfudBuffer(byte b)
{
_sfudBuffer[_sfudBufferPos++] = b;
if (_sfudBufferPos == _sfudBufferSize)
{
sfud_erase_write(_flash, _sfudBufferWritePos, _sfudBufferSize, _sfudBuffer);
_sfudBufferWritePos += _sfudBufferSize;
_sfudBufferPos = 0;
}
}
void flushSfudBuffer()
{
if (_sfudBufferPos > 0)
{
sfud_erase_write(_flash, _sfudBufferWritePos, _sfudBufferSize, _sfudBuffer);
_sfudBufferWritePos += _sfudBufferSize;
_sfudBufferPos = 0;
}
}
void writeSfudBuffer(byte *b, size_t len)
{
for (size_t i = 0; i < len; ++i)
{
writeSfudBuffer(b[i]);
}
}
private:
byte *_sfudBuffer;
size_t _sfudBufferSize;
size_t _sfudBufferPos;
size_t _sfudBufferWritePos;
const sfud_flash *_flash;
};

@ -0,0 +1,53 @@
#pragma once
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <WiFiClient.h>
#include "config.h"
class LanguageUnderstanding
{
public:
int GetTimerDuration(String text)
{
DynamicJsonDocument doc(1024);
doc["text"] = text;
String body;
serializeJson(doc, body);
HTTPClient httpClient;
httpClient.begin(_client, TEXT_TO_TIMER_FUNCTION_URL);
int httpResponseCode = httpClient.POST(body);
int seconds = 0;
if (httpResponseCode == 200)
{
String result = httpClient.getString();
Serial.println(result);
DynamicJsonDocument doc(1024);
deserializeJson(doc, result.c_str());
JsonObject obj = doc.as<JsonObject>();
seconds = obj["seconds"].as<int>();
}
else
{
Serial.print("Failed to understand text - error ");
Serial.println(httpResponseCode);
}
httpClient.end();
return seconds;
}
private:
WiFiClient _client;
};
LanguageUnderstanding languageUnderstanding;

@ -0,0 +1,130 @@
#include <Arduino.h>
#include <arduino-timer.h>
#include <rpcWiFi.h>
#include <sfud.h>
#include <SPI.h>
#include "config.h"
#include "language_understanding.h"
#include "mic.h"
#include "speech_to_text.h"
#include "text_to_speech.h"
void connectWiFi()
{
while (WiFi.status() != WL_CONNECTED)
{
Serial.println("Connecting to WiFi..");
WiFi.begin(SSID, PASSWORD);
delay(500);
}
Serial.println("Connected!");
}
void setup()
{
Serial.begin(9600);
while (!Serial)
; // Wait for Serial to be ready
delay(1000);
connectWiFi();
while (!(sfud_init() == SFUD_SUCCESS))
;
sfud_qspi_fast_read_enable(sfud_get_device(SFUD_W25Q32_DEVICE_INDEX), 2);
pinMode(WIO_KEY_C, INPUT_PULLUP);
mic.init();
speechToText.init();
textToSpeech.init();
Serial.println("Ready.");
}
auto timer = timer_create_default();
void say(String text)
{
Serial.println(text);
textToSpeech.convertTextToSpeech(text);
}
bool timerExpired(void *announcement)
{
say((char *)announcement);
return false;
}
void processAudio()
{
String text = speechToText.convertSpeechToText();
Serial.println(text);
int total_seconds = languageUnderstanding.GetTimerDuration(text);
if (total_seconds == 0)
{
return;
}
int minutes = total_seconds / 60;
int seconds = total_seconds % 60;
String begin_message;
if (minutes > 0)
{
begin_message += minutes;
begin_message += " minute ";
}
if (seconds > 0)
{
begin_message += seconds;
begin_message += " second ";
}
begin_message += "timer started.";
String end_message("Times up on your ");
if (minutes > 0)
{
end_message += minutes;
end_message += " minute ";
}
if (seconds > 0)
{
end_message += seconds;
end_message += " second ";
}
end_message += "timer.";
say(begin_message);
timer.in(total_seconds * 1000, timerExpired, (void *)(end_message.c_str()));
}
void loop()
{
if (digitalRead(WIO_KEY_C) == LOW && !mic.isRecording())
{
Serial.println("Starting recording...");
mic.startRecording();
}
if (!mic.isRecording() && mic.isRecordingReady())
{
Serial.println("Finished recording");
processAudio();
mic.reset();
}
timer.tick();
}

@ -0,0 +1,242 @@
#pragma once
#include <Arduino.h>
#include "config.h"
#include "flash_writer.h"
class Mic
{
public:
Mic()
{
_isRecording = false;
_isRecordingReady = false;
}
void startRecording()
{
_isRecording = true;
_isRecordingReady = false;
}
bool isRecording()
{
return _isRecording;
}
bool isRecordingReady()
{
return _isRecordingReady;
}
void init()
{
analogReference(AR_INTERNAL2V23);
_writer.init();
initBufferHeader();
configureDmaAdc();
}
void reset()
{
_isRecordingReady = false;
_isRecording = false;
_writer.reset();
initBufferHeader();
}
void dmaHandler()
{
static uint8_t count = 0;
if (DMAC->Channel[1].CHINTFLAG.bit.SUSP)
{
DMAC->Channel[1].CHCTRLB.reg = DMAC_CHCTRLB_CMD_RESUME;
DMAC->Channel[1].CHINTFLAG.bit.SUSP = 1;
if (count)
{
audioCallback(_adc_buf_0, ADC_BUF_LEN);
}
else
{
audioCallback(_adc_buf_1, ADC_BUF_LEN);
}
count = (count + 1) % 2;
}
}
private:
volatile bool _isRecording;
volatile bool _isRecordingReady;
FlashWriter _writer;
typedef struct
{
uint16_t btctrl;
uint16_t btcnt;
uint32_t srcaddr;
uint32_t dstaddr;
uint32_t descaddr;
} dmacdescriptor;
// Globals - DMA and ADC
volatile dmacdescriptor _wrb[DMAC_CH_NUM] __attribute__((aligned(16)));
dmacdescriptor _descriptor_section[DMAC_CH_NUM] __attribute__((aligned(16)));
dmacdescriptor _descriptor __attribute__((aligned(16)));
void configureDmaAdc()
{
// Configure DMA to sample from ADC at a regular interval (triggered by timer/counter)
DMAC->BASEADDR.reg = (uint32_t)_descriptor_section; // Specify the location of the descriptors
DMAC->WRBADDR.reg = (uint32_t)_wrb; // Specify the location of the write back descriptors
DMAC->CTRL.reg = DMAC_CTRL_DMAENABLE | DMAC_CTRL_LVLEN(0xf); // Enable the DMAC peripheral
DMAC->Channel[1].CHCTRLA.reg = DMAC_CHCTRLA_TRIGSRC(TC5_DMAC_ID_OVF) | // Set DMAC to trigger on TC5 timer overflow
DMAC_CHCTRLA_TRIGACT_BURST; // DMAC burst transfer
_descriptor.descaddr = (uint32_t)&_descriptor_section[1]; // Set up a circular descriptor
_descriptor.srcaddr = (uint32_t)&ADC1->RESULT.reg; // Take the result from the ADC0 RESULT register
_descriptor.dstaddr = (uint32_t)_adc_buf_0 + sizeof(uint16_t) * ADC_BUF_LEN; // Place it in the adc_buf_0 array
_descriptor.btcnt = ADC_BUF_LEN; // Beat count
_descriptor.btctrl = DMAC_BTCTRL_BEATSIZE_HWORD | // Beat size is HWORD (16-bits)
DMAC_BTCTRL_DSTINC | // Increment the destination address
DMAC_BTCTRL_VALID | // Descriptor is valid
DMAC_BTCTRL_BLOCKACT_SUSPEND; // Suspend DMAC channel 0 after block transfer
memcpy(&_descriptor_section[0], &_descriptor, sizeof(_descriptor)); // Copy the descriptor to the descriptor section
_descriptor.descaddr = (uint32_t)&_descriptor_section[0]; // Set up a circular descriptor
_descriptor.srcaddr = (uint32_t)&ADC1->RESULT.reg; // Take the result from the ADC0 RESULT register
_descriptor.dstaddr = (uint32_t)_adc_buf_1 + sizeof(uint16_t) * ADC_BUF_LEN; // Place it in the adc_buf_1 array
_descriptor.btcnt = ADC_BUF_LEN; // Beat count
_descriptor.btctrl = DMAC_BTCTRL_BEATSIZE_HWORD | // Beat size is HWORD (16-bits)
DMAC_BTCTRL_DSTINC | // Increment the destination address
DMAC_BTCTRL_VALID | // Descriptor is valid
DMAC_BTCTRL_BLOCKACT_SUSPEND; // Suspend DMAC channel 0 after block transfer
memcpy(&_descriptor_section[1], &_descriptor, sizeof(_descriptor)); // Copy the descriptor to the descriptor section
// Configure NVIC
NVIC_SetPriority(DMAC_1_IRQn, 0); // Set the Nested Vector Interrupt Controller (NVIC) priority for DMAC1 to 0 (highest)
NVIC_EnableIRQ(DMAC_1_IRQn); // Connect DMAC1 to Nested Vector Interrupt Controller (NVIC)
// Activate the suspend (SUSP) interrupt on DMAC channel 1
DMAC->Channel[1].CHINTENSET.reg = DMAC_CHINTENSET_SUSP;
// Configure ADC
ADC1->INPUTCTRL.bit.MUXPOS = ADC_INPUTCTRL_MUXPOS_AIN12_Val; // Set the analog input to ADC0/AIN2 (PB08 - A4 on Metro M4)
while (ADC1->SYNCBUSY.bit.INPUTCTRL)
; // Wait for synchronization
ADC1->SAMPCTRL.bit.SAMPLEN = 0x00; // Set max Sampling Time Length to half divided ADC clock pulse (2.66us)
while (ADC1->SYNCBUSY.bit.SAMPCTRL)
; // Wait for synchronization
ADC1->CTRLA.reg = ADC_CTRLA_PRESCALER_DIV128; // Divide Clock ADC GCLK by 128 (48MHz/128 = 375kHz)
ADC1->CTRLB.reg = ADC_CTRLB_RESSEL_12BIT | // Set ADC resolution to 12 bits
ADC_CTRLB_FREERUN; // Set ADC to free run mode
while (ADC1->SYNCBUSY.bit.CTRLB)
; // Wait for synchronization
ADC1->CTRLA.bit.ENABLE = 1; // Enable the ADC
while (ADC1->SYNCBUSY.bit.ENABLE)
; // Wait for synchronization
ADC1->SWTRIG.bit.START = 1; // Initiate a software trigger to start an ADC conversion
while (ADC1->SYNCBUSY.bit.SWTRIG)
; // Wait for synchronization
// Enable DMA channel 1
DMAC->Channel[1].CHCTRLA.bit.ENABLE = 1;
// Configure Timer/Counter 5
GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable peripheral channel for TC5
GCLK_PCHCTRL_GEN_GCLK1; // Connect generic clock 1 at 48MHz
TC5->COUNT16.WAVE.reg = TC_WAVE_WAVEGEN_MFRQ; // Set TC5 to Match Frequency (MFRQ) mode
TC5->COUNT16.CC[0].reg = 3000 - 1; // Set the trigger to 16 kHz: (48MHz / 16000) - 1
while (TC5->COUNT16.SYNCBUSY.bit.CC0)
; // Wait for synchronization
// Start Timer/Counter 5
TC5->COUNT16.CTRLA.bit.ENABLE = 1; // Enable the TC5 timer
while (TC5->COUNT16.SYNCBUSY.bit.ENABLE)
; // Wait for synchronization
}
uint16_t _adc_buf_0[ADC_BUF_LEN];
uint16_t _adc_buf_1[ADC_BUF_LEN];
// WAV files have a header. This struct defines that header
struct wavFileHeader
{
char riff[4]; /* "RIFF" */
long flength; /* file length in bytes */
char wave[4]; /* "WAVE" */
char fmt[4]; /* "fmt " */
long chunk_size; /* size of FMT chunk in bytes (usually 16) */
short format_tag; /* 1=PCM, 257=Mu-Law, 258=A-Law, 259=ADPCM */
short num_chans; /* 1=mono, 2=stereo */
long srate; /* Sampling rate in samples per second */
long bytes_per_sec; /* bytes per second = srate*bytes_per_samp */
short bytes_per_samp; /* 2=16-bit mono, 4=16-bit stereo */
short bits_per_samp; /* Number of bits per sample */
char data[4]; /* "data" */
long dlength; /* data length in bytes (filelength - 44) */
};
void initBufferHeader()
{
wavFileHeader wavh;
strncpy(wavh.riff, "RIFF", 4);
strncpy(wavh.wave, "WAVE", 4);
strncpy(wavh.fmt, "fmt ", 4);
strncpy(wavh.data, "data", 4);
wavh.chunk_size = 16;
wavh.format_tag = 1; // PCM
wavh.num_chans = 1; // mono
wavh.srate = RATE;
wavh.bytes_per_sec = (RATE * 1 * 16 * 1) / 8;
wavh.bytes_per_samp = 2;
wavh.bits_per_samp = 16;
wavh.dlength = RATE * 2 * 1 * 16 / 2;
wavh.flength = wavh.dlength + 44;
_writer.writeSfudBuffer((byte *)&wavh, 44);
}
void audioCallback(uint16_t *buf, uint32_t buf_len)
{
static uint32_t idx = 44;
if (_isRecording)
{
for (uint32_t i = 0; i < buf_len; i++)
{
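// Convert the unsigned 12-bit ADC reading (centered on 2048) to a signed 16-bit PCM sample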
int16_t audio_value = ((int16_t)buf[i] - 2048) * 16;
_writer.writeSfudBuffer(audio_value & 0xFF);
_writer.writeSfudBuffer((audio_value >> 8) & 0xFF);
}
idx += buf_len;
if (idx >= BUFFER_SIZE)
{
_writer.flushSfudBuffer();
idx = 44;
_isRecording = false;
_isRecordingReady = true;
}
}
}
};
Mic mic;
void DMAC_1_Handler()
{
mic.dmaHandler();
}

@ -0,0 +1,102 @@
#pragma once
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <WiFiClientSecure.h>
#include "config.h"
#include "flash_stream.h"
class SpeechToText
{
public:
void init()
{
_token_client.setCACert(TOKEN_CERTIFICATE);
_speech_client.setCACert(SPEECH_CERTIFICATE);
_access_token = getAccessToken();
}
String convertSpeechToText()
{
char url[128];
sprintf(url, SPEECH_URL, SPEECH_LOCATION, LANGUAGE);
HTTPClient httpClient;
httpClient.begin(_speech_client, url);
httpClient.addHeader("Authorization", String("Bearer ") + _access_token);
httpClient.addHeader("Content-Type", String("audio/wav; codecs=audio/pcm; samplerate=") + String(RATE));
httpClient.addHeader("Accept", "application/json;text/xml");
Serial.println("Sending speech...");
FlashStream stream;
int httpResponseCode = httpClient.sendRequest("POST", &stream, BUFFER_SIZE);
Serial.println("Speech sent!");
String text = "";
if (httpResponseCode == 200)
{
String result = httpClient.getString();
Serial.println(result);
DynamicJsonDocument doc(1024);
deserializeJson(doc, result.c_str());
JsonObject obj = doc.as<JsonObject>();
text = obj["DisplayText"].as<String>();
}
else if (httpResponseCode == 401)
{
Serial.println("Access token expired, trying again with a new token");
_access_token = getAccessToken();
return convertSpeechToText();
}
else
{
Serial.print("Failed to convert text to speech - error ");
Serial.println(httpResponseCode);
}
httpClient.end();
return text;
}
private:
String getAccessToken()
{
char url[128];
sprintf(url, TOKEN_URL, SPEECH_LOCATION);
HTTPClient httpClient;
httpClient.begin(_token_client, url);
httpClient.addHeader("Ocp-Apim-Subscription-Key", SPEECH_API_KEY);
int httpResultCode = httpClient.POST("{}");
if (httpResultCode != 200)
{
Serial.println("Error getting access token, trying again...");
delay(10000);
return getAccessToken();
}
Serial.println("Got access token.");
String result = httpClient.getString();
httpClient.end();
return result;
}
WiFiClientSecure _token_client;
WiFiClientSecure _speech_client;
String _access_token;
};
SpeechToText speechToText;

@ -0,0 +1,86 @@
#pragma once
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <Seeed_FS.h>
#include <SD/Seeed_SD.h>
#include <WiFiClient.h>
#include <WiFiClientSecure.h>
#include "config.h"
class TextToSpeech
{
public:
void init()
{
DynamicJsonDocument doc(1024);
doc["language"] = LANGUAGE;
String body;
serializeJson(doc, body);
HTTPClient httpClient;
httpClient.begin(_client, GET_VOICES_FUNCTION_URL);
int httpResponseCode = httpClient.POST(body);
if (httpResponseCode == 200)
{
String result = httpClient.getString();
Serial.println(result);
DynamicJsonDocument doc(1024);
deserializeJson(doc, result.c_str());
JsonArray obj = doc.as<JsonArray>();
_voice = obj[0].as<String>();
Serial.print("Using voice ");
Serial.println(_voice);
}
else
{
Serial.print("Failed to get voices - error ");
Serial.println(httpResponseCode);
}
httpClient.end();
}
void convertTextToSpeech(String text)
{
DynamicJsonDocument doc(1024);
doc["language"] = LANGUAGE;
doc["voice"] = _voice;
doc["text"] = text;
String body;
serializeJson(doc, body);
HTTPClient httpClient;
httpClient.begin(_client, TEXT_TO_SPEECH_FUNCTION_URL);
int httpResponseCode = httpClient.POST(body);
if (httpResponseCode == 200)
{
File wav_file = SD.open("SPEECH.WAV", FILE_WRITE);
httpClient.writeToStream(&wav_file);
wav_file.close();
}
else
{
Serial.print("Failed to get speech - error ");
Serial.println(httpResponseCode);
}
httpClient.end();
}
private:
WiFiClient _client;
String _voice;
};
TextToSpeech textToSpeech;

@ -0,0 +1,11 @@
This directory is intended for PlatformIO Unit Testing and project tests.
Unit Testing is a software testing method by which individual units of
source code, sets of one or more MCU program modules together with associated
control data, usage procedures, and operating procedures, are tested to
determine whether they are fit for use. Unit testing finds problems early
in the development cycle.
More information about PlatformIO Unit Testing:
- https://docs.platformio.org/page/plus/unit-testing.html

@ -0,0 +1,39 @@
This directory is intended for project header files.
A header file is a file containing C declarations and macro definitions
to be shared between several project source files. You request the use of a
header file in your project source file (C, C++, etc) located in `src` folder
by including it, with the C preprocessing directive `#include`.
```src/main.c
#include "header.h"
int main (void)
{
...
}
```
Including a header file produces the same results as copying the header file
into each source file that needs it. Such copying would be time-consuming
and error-prone. With a header file, the related declarations appear
in only one place. If they need to be changed, they can be changed in one
place, and programs that include the header file will automatically use the
new version when next recompiled. The header file eliminates the labor of
finding and changing all the copies as well as the risk that a failure to
find one copy will result in inconsistencies within a program.
In C, the usual convention is to give header files names that end with `.h`.
It is most portable to use only letters, digits, dashes, and underscores in
header file names, and at most one dot.
Read more about using header files in official GCC documentation:
* Include Syntax
* Include Operation
* Once-Only Headers
* Computed Includes
https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html

@ -0,0 +1,46 @@
This directory is intended for project specific (private) libraries.
PlatformIO will compile them to static libraries and link into executable file.
The source code of each library should be placed in its own separate directory
("lib/your_library_name/[here are source files]").
For example, see a structure of the following two libraries `Foo` and `Bar`:
|--lib
| |
| |--Bar
| | |--docs
| | |--examples
| | |--src
| | |- Bar.c
| | |- Bar.h
| | |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
| |
| |--Foo
| | |- Foo.c
| | |- Foo.h
| |
| |- README --> THIS FILE
|
|- platformio.ini
|--src
|- main.c
and a contents of `src/main.c`:
```
#include <Foo.h>
#include <Bar.h>
int main (void)
{
...
}
```
PlatformIO Library Dependency Finder will automatically find dependent
libraries by scanning project source files.
More information about PlatformIO Library Dependency Finder
- https://docs.platformio.org/page/librarymanager/ldf.html

@ -0,0 +1,23 @@
; PlatformIO Project Configuration File
;
; Build options: build flags, source filter
; Upload options: custom upload port, speed and extra flags
; Library options: dependencies, extra library storages
; Advanced options: extra scripting
;
; Please visit documentation for the other options and examples
; https://docs.platformio.org/page/projectconf.html
[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0
bblanchon/ArduinoJson @ 6.17.3
contrem/arduino-timer @ 2.3.0

@ -0,0 +1,91 @@
#pragma once
#define RATE 16000
#define SAMPLE_LENGTH_SECONDS 4
#define SAMPLES (RATE * SAMPLE_LENGTH_SECONDS)
#define BUFFER_SIZE ((SAMPLES * 2) + 44)
#define ADC_BUF_LEN 1600
const char *SSID = "<SSID>";
const char *PASSWORD = "<PASSWORD>";
const char *SPEECH_API_KEY = "<API_KEY>";
const char *SPEECH_LOCATION = "<LOCATION>";
const char *LANGUAGE = "<LANGUAGE>";
const char *TOKEN_URL = "https://%s.api.cognitive.microsoft.com/sts/v1.0/issuetoken";
const char *SPEECH_URL = "https://%s.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=%s";
const char *TEXT_TO_TIMER_FUNCTION_URL = "http://<IP_ADDRESS>:7071/api/text-to-timer";
const char *TOKEN_CERTIFICATE =
"-----BEGIN CERTIFICATE-----\r\n"
"MIIF8zCCBNugAwIBAgIQAueRcfuAIek/4tmDg0xQwDANBgkqhkiG9w0BAQwFADBh\r\n"
"MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\r\n"
"d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH\r\n"
"MjAeFw0yMDA3MjkxMjMwMDBaFw0yNDA2MjcyMzU5NTlaMFkxCzAJBgNVBAYTAlVT\r\n"
"MR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKjAoBgNVBAMTIU1pY3Jv\r\n"
"c29mdCBBenVyZSBUTFMgSXNzdWluZyBDQSAwNjCCAiIwDQYJKoZIhvcNAQEBBQAD\r\n"
"ggIPADCCAgoCggIBALVGARl56bx3KBUSGuPc4H5uoNFkFH4e7pvTCxRi4j/+z+Xb\r\n"
"wjEz+5CipDOqjx9/jWjskL5dk7PaQkzItidsAAnDCW1leZBOIi68Lff1bjTeZgMY\r\n"
"iwdRd3Y39b/lcGpiuP2d23W95YHkMMT8IlWosYIX0f4kYb62rphyfnAjYb/4Od99\r\n"
"ThnhlAxGtfvSbXcBVIKCYfZgqRvV+5lReUnd1aNjRYVzPOoifgSx2fRyy1+pO1Uz\r\n"
"aMMNnIOE71bVYW0A1hr19w7kOb0KkJXoALTDDj1ukUEDqQuBfBxReL5mXiu1O7WG\r\n"
"0vltg0VZ/SZzctBsdBlx1BkmWYBW261KZgBivrql5ELTKKd8qgtHcLQA5fl6JB0Q\r\n"
"gs5XDaWehN86Gps5JW8ArjGtjcWAIP+X8CQaWfaCnuRm6Bk/03PQWhgdi84qwA0s\r\n"
"sRfFJwHUPTNSnE8EiGVk2frt0u8PG1pwSQsFuNJfcYIHEv1vOzP7uEOuDydsmCjh\r\n"
"lxuoK2n5/2aVR3BMTu+p4+gl8alXoBycyLmj3J/PUgqD8SL5fTCUegGsdia/Sa60\r\n"
"N2oV7vQ17wjMN+LXa2rjj/b4ZlZgXVojDmAjDwIRdDUujQu0RVsJqFLMzSIHpp2C\r\n"
"Zp7mIoLrySay2YYBu7SiNwL95X6He2kS8eefBBHjzwW/9FxGqry57i71c2cDAgMB\r\n"
"AAGjggGtMIIBqTAdBgNVHQ4EFgQU1cFnOsKjnfR3UltZEjgp5lVou6UwHwYDVR0j\r\n"
"BBgwFoAUTiJUIBiV5uNu5g/6+rkS7QYXjzkwDgYDVR0PAQH/BAQDAgGGMB0GA1Ud\r\n"
"JQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMHYG\r\n"
"CCsGAQUFBwEBBGowaDAkBggrBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQu\r\n"
"Y29tMEAGCCsGAQUFBzAChjRodHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGln\r\n"
"aUNlcnRHbG9iYWxSb290RzIuY3J0MHsGA1UdHwR0MHIwN6A1oDOGMWh0dHA6Ly9j\r\n"
"cmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5jcmwwN6A1oDOG\r\n"
"MWh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5j\r\n"
"cmwwHQYDVR0gBBYwFDAIBgZngQwBAgEwCAYGZ4EMAQICMBAGCSsGAQQBgjcVAQQD\r\n"
"AgEAMA0GCSqGSIb3DQEBDAUAA4IBAQB2oWc93fB8esci/8esixj++N22meiGDjgF\r\n"
"+rA2LUK5IOQOgcUSTGKSqF9lYfAxPjrqPjDCUPHCURv+26ad5P/BYtXtbmtxJWu+\r\n"
"cS5BhMDPPeG3oPZwXRHBJFAkY4O4AF7RIAAUW6EzDflUoDHKv83zOiPfYGcpHc9s\r\n"
"kxAInCedk7QSgXvMARjjOqdakor21DTmNIUotxo8kHv5hwRlGhBJwps6fEVi1Bt0\r\n"
"trpM/3wYxlr473WSPUFZPgP1j519kLpWOJ8z09wxay+Br29irPcBYv0GMXlHqThy\r\n"
"8y4m/HyTQeI2IMvMrQnwqPpY+rLIXyviI2vLoI+4xKE4Rn38ZZ8m\r\n"
"-----END CERTIFICATE-----\r\n";
const char *SPEECH_CERTIFICATE =
"-----BEGIN CERTIFICATE-----\r\n"
"MIIF8zCCBNugAwIBAgIQCq+mxcpjxFFB6jvh98dTFzANBgkqhkiG9w0BAQwFADBh\r\n"
"MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\r\n"
"d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH\r\n"
"MjAeFw0yMDA3MjkxMjMwMDBaFw0yNDA2MjcyMzU5NTlaMFkxCzAJBgNVBAYTAlVT\r\n"
"MR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKjAoBgNVBAMTIU1pY3Jv\r\n"
"c29mdCBBenVyZSBUTFMgSXNzdWluZyBDQSAwMTCCAiIwDQYJKoZIhvcNAQEBBQAD\r\n"
"ggIPADCCAgoCggIBAMedcDrkXufP7pxVm1FHLDNA9IjwHaMoaY8arqqZ4Gff4xyr\r\n"
"RygnavXL7g12MPAx8Q6Dd9hfBzrfWxkF0Br2wIvlvkzW01naNVSkHp+OS3hL3W6n\r\n"
"l/jYvZnVeJXjtsKYcXIf/6WtspcF5awlQ9LZJcjwaH7KoZuK+THpXCMtzD8XNVdm\r\n"
"GW/JI0C/7U/E7evXn9XDio8SYkGSM63aLO5BtLCv092+1d4GGBSQYolRq+7Pd1kR\r\n"
"EkWBPm0ywZ2Vb8GIS5DLrjelEkBnKCyy3B0yQud9dpVsiUeE7F5sY8Me96WVxQcb\r\n"
"OyYdEY/j/9UpDlOG+vA+YgOvBhkKEjiqygVpP8EZoMMijephzg43b5Qi9r5UrvYo\r\n"
"o19oR/8pf4HJNDPF0/FJwFVMW8PmCBLGstin3NE1+NeWTkGt0TzpHjgKyfaDP2tO\r\n"
"4bCk1G7pP2kDFT7SYfc8xbgCkFQ2UCEXsaH/f5YmpLn4YPiNFCeeIida7xnfTvc4\r\n"
"7IxyVccHHq1FzGygOqemrxEETKh8hvDR6eBdrBwmCHVgZrnAqnn93JtGyPLi6+cj\r\n"
"WGVGtMZHwzVvX1HvSFG771sskcEjJxiQNQDQRWHEh3NxvNb7kFlAXnVdRkkvhjpR\r\n"
"GchFhTAzqmwltdWhWDEyCMKC2x/mSZvZtlZGY+g37Y72qHzidwtyW7rBetZJAgMB\r\n"
"AAGjggGtMIIBqTAdBgNVHQ4EFgQUDyBd16FXlduSzyvQx8J3BM5ygHYwHwYDVR0j\r\n"
"BBgwFoAUTiJUIBiV5uNu5g/6+rkS7QYXjzkwDgYDVR0PAQH/BAQDAgGGMB0GA1Ud\r\n"
"JQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMHYG\r\n"
"CCsGAQUFBwEBBGowaDAkBggrBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQu\r\n"
"Y29tMEAGCCsGAQUFBzAChjRodHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGln\r\n"
"aUNlcnRHbG9iYWxSb290RzIuY3J0MHsGA1UdHwR0MHIwN6A1oDOGMWh0dHA6Ly9j\r\n"
"cmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5jcmwwN6A1oDOG\r\n"
"MWh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5j\r\n"
"cmwwHQYDVR0gBBYwFDAIBgZngQwBAgEwCAYGZ4EMAQICMBAGCSsGAQQBgjcVAQQD\r\n"
"AgEAMA0GCSqGSIb3DQEBDAUAA4IBAQAlFvNh7QgXVLAZSsNR2XRmIn9iS8OHFCBA\r\n"
"WxKJoi8YYQafpMTkMqeuzoL3HWb1pYEipsDkhiMnrpfeYZEA7Lz7yqEEtfgHcEBs\r\n"
"K9KcStQGGZRfmWU07hPXHnFz+5gTXqzCE2PBMlRgVUYJiA25mJPXfB00gDvGhtYa\r\n"
"+mENwM9Bq1B9YYLyLjRtUz8cyGsdyTIG/bBM/Q9jcV8JGqMU/UjAdh1pFyTnnHEl\r\n"
"Y59Npi7F87ZqYYJEHJM2LGD+le8VsHjgeWX2CJQko7klXvcizuZvUEDTjHaQcs2J\r\n"
"+kPgfyMIOY1DMJ21NxOJ2xPRC/wAh/hzSBRVtoAnyuxtkZ4VjIOh\r\n"
"-----END CERTIFICATE-----\r\n";

@ -0,0 +1,69 @@
#pragma once
#include <Arduino.h>
#include <HTTPClient.h>
#include <sfud.h>
#include "config.h"
class FlashStream : public Stream
{
public:
FlashStream()
{
_pos = 0;
_flash_address = 0;
_flash = sfud_get_device_table() + 0;
populateBuffer();
}
virtual size_t write(uint8_t val)
{
return 0;
}
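// Report how many bytes are left to stream, capped at one buffer's worth.
// Returning -1 signals to the HTTP client that the end of the stream was reached.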
virtual int available()
{
int remaining = BUFFER_SIZE - ((_flash_address - HTTP_TCP_BUFFER_SIZE) + _pos);
int bytes_available = min(HTTP_TCP_BUFFER_SIZE, remaining);
if (bytes_available == 0)
{
bytes_available = -1;
}
return bytes_available;
}
virtual int read()
{
int retVal = _buffer[_pos++];
if (_pos == HTTP_TCP_BUFFER_SIZE)
{
populateBuffer();
}
return retVal;
}
virtual int peek()
{
return _buffer[_pos];
}
private:
void populateBuffer()
{
sfud_read(_flash, _flash_address, HTTP_TCP_BUFFER_SIZE, _buffer);
_flash_address += HTTP_TCP_BUFFER_SIZE;
_pos = 0;
}
size_t _pos;
size_t _flash_address;
const sfud_flash *_flash;
byte _buffer[HTTP_TCP_BUFFER_SIZE];
};

@ -0,0 +1,60 @@
#pragma once
#include <Arduino.h>
#include <sfud.h>
class FlashWriter
{
public:
void init()
{
_flash = sfud_get_device_table() + 0;
_sfudBufferSize = _flash->chip.erase_gran;
_sfudBuffer = new byte[_sfudBufferSize];
_sfudBufferPos = 0;
_sfudBufferWritePos = 0;
}
void reset()
{
_sfudBufferPos = 0;
_sfudBufferWritePos = 0;
}
void writeSfudBuffer(byte b)
{
_sfudBuffer[_sfudBufferPos++] = b;
if (_sfudBufferPos == _sfudBufferSize)
{
sfud_erase_write(_flash, _sfudBufferWritePos, _sfudBufferSize, _sfudBuffer);
_sfudBufferWritePos += _sfudBufferSize;
_sfudBufferPos = 0;
}
}
void flushSfudBuffer()
{
if (_sfudBufferPos > 0)
{
sfud_erase_write(_flash, _sfudBufferWritePos, _sfudBufferSize, _sfudBuffer);
_sfudBufferWritePos += _sfudBufferSize;
_sfudBufferPos = 0;
}
}
void writeSfudBuffer(byte *b, size_t len)
{
for (size_t i = 0; i < len; ++i)
{
writeSfudBuffer(b[i]);
}
}
private:
byte *_sfudBuffer;
size_t _sfudBufferSize;
size_t _sfudBufferPos;
size_t _sfudBufferWritePos;
const sfud_flash *_flash;
};

@ -0,0 +1,53 @@
#pragma once
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <WiFiClient.h>
#include "config.h"
class LanguageUnderstanding
{
public:
int GetTimerDuration(String text)
{
DynamicJsonDocument doc(1024);
doc["text"] = text;
String body;
serializeJson(doc, body);
HTTPClient httpClient;
httpClient.begin(_client, TEXT_TO_TIMER_FUNCTION_URL);
int httpResponseCode = httpClient.POST(body);
int seconds = 0;
if (httpResponseCode == 200)
{
String result = httpClient.getString();
Serial.println(result);
DynamicJsonDocument doc(1024);
deserializeJson(doc, result.c_str());
JsonObject obj = doc.as<JsonObject>();
seconds = obj["seconds"].as<int>();
}
else
{
Serial.print("Failed to understand text - error ");
Serial.println(httpResponseCode);
}
httpClient.end();
return seconds;
}
private:
WiFiClient _client;
};
LanguageUnderstanding languageUnderstanding;

@ -0,0 +1,127 @@
#include <Arduino.h>
#include <arduino-timer.h>
#include <rpcWiFi.h>
#include <sfud.h>
#include <SPI.h>
#include "config.h"
#include "language_understanding.h"
#include "mic.h"
#include "speech_to_text.h"
void connectWiFi()
{
while (WiFi.status() != WL_CONNECTED)
{
Serial.println("Connecting to WiFi..");
WiFi.begin(SSID, PASSWORD);
delay(500);
}
Serial.println("Connected!");
}
void setup()
{
Serial.begin(9600);
while (!Serial)
; // Wait for Serial to be ready
delay(1000);
connectWiFi();
while (!(sfud_init() == SFUD_SUCCESS))
;
sfud_qspi_fast_read_enable(sfud_get_device(SFUD_W25Q32_DEVICE_INDEX), 2);
pinMode(WIO_KEY_C, INPUT_PULLUP);
mic.init();
speechToText.init();
Serial.println("Ready.");
}
auto timer = timer_create_default();
void say(String text)
{
Serial.println(text);
}
bool timerExpired(void *announcement)
{
say((char *)announcement);
return false;
}
void processAudio()
{
String text = speechToText.convertSpeechToText();
Serial.println(text);
int total_seconds = languageUnderstanding.GetTimerDuration(text);
if (total_seconds == 0)
{
return;
}
int minutes = total_seconds / 60;
int seconds = total_seconds % 60;
String begin_message;
if (minutes > 0)
{
begin_message += minutes;
begin_message += " minute ";
}
if (seconds > 0)
{
begin_message += seconds;
begin_message += " second ";
}
begin_message += "timer started.";
String end_message("Times up on your ");
if (minutes > 0)
{
end_message += minutes;
end_message += " minute ";
}
if (seconds > 0)
{
end_message += seconds;
end_message += " second ";
}
end_message += "timer.";
say(begin_message);
timer.in(total_seconds * 1000, timerExpired, (void *)(end_message.c_str()));
}
void loop()
{
if (digitalRead(WIO_KEY_C) == LOW && !mic.isRecording())
{
Serial.println("Starting recording...");
mic.startRecording();
}
if (!mic.isRecording() && mic.isRecordingReady())
{
Serial.println("Finished recording");
processAudio();
mic.reset();
}
timer.tick();
}

@ -0,0 +1,242 @@
#pragma once
#include <Arduino.h>
#include "config.h"
#include "flash_writer.h"
class Mic
{
public:
Mic()
{
_isRecording = false;
_isRecordingReady = false;
}
void startRecording()
{
_isRecording = true;
_isRecordingReady = false;
}
bool isRecording()
{
return _isRecording;
}
bool isRecordingReady()
{
return _isRecordingReady;
}
void init()
{
analogReference(AR_INTERNAL2V23);
_writer.init();
initBufferHeader();
configureDmaAdc();
}
void reset()
{
_isRecordingReady = false;
_isRecording = false;
_writer.reset();
initBufferHeader();
}
void dmaHandler()
{
static uint8_t count = 0;
if (DMAC->Channel[1].CHINTFLAG.bit.SUSP)
{
DMAC->Channel[1].CHCTRLB.reg = DMAC_CHCTRLB_CMD_RESUME;
DMAC->Channel[1].CHINTFLAG.bit.SUSP = 1;
if (count)
{
audioCallback(_adc_buf_0, ADC_BUF_LEN);
}
else
{
audioCallback(_adc_buf_1, ADC_BUF_LEN);
}
count = (count + 1) % 2;
}
}
private:
volatile bool _isRecording;
volatile bool _isRecordingReady;
FlashWriter _writer;
typedef struct
{
uint16_t btctrl;
uint16_t btcnt;
uint32_t srcaddr;
uint32_t dstaddr;
uint32_t descaddr;
} dmacdescriptor;
// Globals - DMA and ADC
volatile dmacdescriptor _wrb[DMAC_CH_NUM] __attribute__((aligned(16)));
dmacdescriptor _descriptor_section[DMAC_CH_NUM] __attribute__((aligned(16)));
dmacdescriptor _descriptor __attribute__((aligned(16)));
void configureDmaAdc()
{
// Configure DMA to sample from ADC at a regular interval (triggered by timer/counter)
DMAC->BASEADDR.reg = (uint32_t)_descriptor_section; // Specify the location of the descriptors
DMAC->WRBADDR.reg = (uint32_t)_wrb; // Specify the location of the write back descriptors
DMAC->CTRL.reg = DMAC_CTRL_DMAENABLE | DMAC_CTRL_LVLEN(0xf); // Enable the DMAC peripheral
DMAC->Channel[1].CHCTRLA.reg = DMAC_CHCTRLA_TRIGSRC(TC5_DMAC_ID_OVF) | // Set DMAC to trigger on TC5 timer overflow
DMAC_CHCTRLA_TRIGACT_BURST; // DMAC burst transfer
_descriptor.descaddr = (uint32_t)&_descriptor_section[1]; // Set up a circular descriptor
_descriptor.srcaddr = (uint32_t)&ADC1->RESULT.reg; // Take the result from the ADC0 RESULT register
_descriptor.dstaddr = (uint32_t)_adc_buf_0 + sizeof(uint16_t) * ADC_BUF_LEN; // Place it in the adc_buf_0 array
_descriptor.btcnt = ADC_BUF_LEN; // Beat count
_descriptor.btctrl = DMAC_BTCTRL_BEATSIZE_HWORD | // Beat size is HWORD (16-bits)
DMAC_BTCTRL_DSTINC | // Increment the destination address
DMAC_BTCTRL_VALID | // Descriptor is valid
DMAC_BTCTRL_BLOCKACT_SUSPEND; // Suspend DMAC channel 0 after block transfer
memcpy(&_descriptor_section[0], &_descriptor, sizeof(_descriptor)); // Copy the descriptor to the descriptor section
_descriptor.descaddr = (uint32_t)&_descriptor_section[0]; // Set up a circular descriptor
_descriptor.srcaddr = (uint32_t)&ADC1->RESULT.reg; // Take the result from the ADC0 RESULT register
_descriptor.dstaddr = (uint32_t)_adc_buf_1 + sizeof(uint16_t) * ADC_BUF_LEN; // Place it in the adc_buf_1 array
_descriptor.btcnt = ADC_BUF_LEN; // Beat count
_descriptor.btctrl = DMAC_BTCTRL_BEATSIZE_HWORD | // Beat size is HWORD (16-bits)
DMAC_BTCTRL_DSTINC | // Increment the destination address
DMAC_BTCTRL_VALID | // Descriptor is valid
DMAC_BTCTRL_BLOCKACT_SUSPEND; // Suspend DMAC channel 0 after block transfer
memcpy(&_descriptor_section[1], &_descriptor, sizeof(_descriptor)); // Copy the descriptor to the descriptor section
// Configure NVIC
NVIC_SetPriority(DMAC_1_IRQn, 0); // Set the Nested Vector Interrupt Controller (NVIC) priority for DMAC1 to 0 (highest)
NVIC_EnableIRQ(DMAC_1_IRQn); // Connect DMAC1 to Nested Vector Interrupt Controller (NVIC)
// Activate the suspend (SUSP) interrupt on DMAC channel 1
DMAC->Channel[1].CHINTENSET.reg = DMAC_CHINTENSET_SUSP;
// Configure ADC
ADC1->INPUTCTRL.bit.MUXPOS = ADC_INPUTCTRL_MUXPOS_AIN12_Val; // Set the analog input to ADC0/AIN2 (PB08 - A4 on Metro M4)
while (ADC1->SYNCBUSY.bit.INPUTCTRL)
; // Wait for synchronization
ADC1->SAMPCTRL.bit.SAMPLEN = 0x00; // Set max Sampling Time Length to half divided ADC clock pulse (2.66us)
while (ADC1->SYNCBUSY.bit.SAMPCTRL)
; // Wait for synchronization
ADC1->CTRLA.reg = ADC_CTRLA_PRESCALER_DIV128; // Divide Clock ADC GCLK by 128 (48MHz/128 = 375kHz)
ADC1->CTRLB.reg = ADC_CTRLB_RESSEL_12BIT | // Set ADC resolution to 12 bits
ADC_CTRLB_FREERUN; // Set ADC to free run mode
while (ADC1->SYNCBUSY.bit.CTRLB)
; // Wait for synchronization
ADC1->CTRLA.bit.ENABLE = 1; // Enable the ADC
while (ADC1->SYNCBUSY.bit.ENABLE)
; // Wait for synchronization
ADC1->SWTRIG.bit.START = 1; // Initiate a software trigger to start an ADC conversion
while (ADC1->SYNCBUSY.bit.SWTRIG)
; // Wait for synchronization
// Enable DMA channel 1
DMAC->Channel[1].CHCTRLA.bit.ENABLE = 1;
// Configure Timer/Counter 5
GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable peripheral channel for TC5
GCLK_PCHCTRL_GEN_GCLK1; // Connect generic clock 1 at 48MHz
TC5->COUNT16.WAVE.reg = TC_WAVE_WAVEGEN_MFRQ; // Set TC5 to Match Frequency (MFRQ) mode
TC5->COUNT16.CC[0].reg = 3000 - 1; // Set the trigger to 16 kHz: (48MHz / 16000) - 1
while (TC5->COUNT16.SYNCBUSY.bit.CC0)
; // Wait for synchronization
// Start Timer/Counter 5
TC5->COUNT16.CTRLA.bit.ENABLE = 1; // Enable the TC5 timer
while (TC5->COUNT16.SYNCBUSY.bit.ENABLE)
; // Wait for synchronization
}
uint16_t _adc_buf_0[ADC_BUF_LEN];
uint16_t _adc_buf_1[ADC_BUF_LEN];
// WAV files have a header. This struct defines that header
struct wavFileHeader
{
char riff[4]; /* "RIFF" */
long flength; /* file length in bytes */
char wave[4]; /* "WAVE" */
char fmt[4]; /* "fmt " */
long chunk_size; /* size of FMT chunk in bytes (usually 16) */
short format_tag; /* 1=PCM, 257=Mu-Law, 258=A-Law, 259=ADPCM */
short num_chans; /* 1=mono, 2=stereo */
long srate; /* Sampling rate in samples per second */
long bytes_per_sec; /* bytes per second = srate*bytes_per_samp */
short bytes_per_samp; /* 2=16-bit mono, 4=16-bit stereo */
short bits_per_samp; /* Number of bits per sample */
char data[4]; /* "data" */
long dlength; /* data length in bytes (filelength - 44) */
};
void initBufferHeader()
{
wavFileHeader wavh;
strncpy(wavh.riff, "RIFF", 4);
strncpy(wavh.wave, "WAVE", 4);
strncpy(wavh.fmt, "fmt ", 4);
strncpy(wavh.data, "data", 4);
wavh.chunk_size = 16;
wavh.format_tag = 1; // PCM
wavh.num_chans = 1; // mono
wavh.srate = RATE;
wavh.bytes_per_sec = (RATE * 1 * 16 * 1) / 8;
wavh.bytes_per_samp = 2;
wavh.bits_per_samp = 16;
wavh.dlength = RATE * 2 * 1 * 16 / 2;
wavh.flength = wavh.dlength + 44;
_writer.writeSfudBuffer((byte *)&wavh, 44);
}
void audioCallback(uint16_t *buf, uint32_t buf_len)
{
static uint32_t idx = 44;
if (_isRecording)
{
for (uint32_t i = 0; i < buf_len; i++)
{
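// Convert the unsigned 12-bit ADC reading (centered on 2048) to a signed 16-bit PCM sample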
int16_t audio_value = ((int16_t)buf[i] - 2048) * 16;
_writer.writeSfudBuffer(audio_value & 0xFF);
_writer.writeSfudBuffer((audio_value >> 8) & 0xFF);
}
idx += buf_len;
if (idx >= BUFFER_SIZE)
{
_writer.flushSfudBuffer();
idx = 44;
_isRecording = false;
_isRecordingReady = true;
}
}
}
};
Mic mic;
void DMAC_1_Handler()
{
mic.dmaHandler();
}

@ -0,0 +1,102 @@
#pragma once
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <WiFiClientSecure.h>
#include "config.h"
#include "flash_stream.h"
class SpeechToText
{
public:
void init()
{
_token_client.setCACert(TOKEN_CERTIFICATE);
_speech_client.setCACert(SPEECH_CERTIFICATE);
_access_token = getAccessToken();
}
String convertSpeechToText()
{
char url[128];
sprintf(url, SPEECH_URL, SPEECH_LOCATION, LANGUAGE);
HTTPClient httpClient;
httpClient.begin(_speech_client, url);
httpClient.addHeader("Authorization", String("Bearer ") + _access_token);
httpClient.addHeader("Content-Type", String("audio/wav; codecs=audio/pcm; samplerate=") + String(RATE));
httpClient.addHeader("Accept", "application/json;text/xml");
Serial.println("Sending speech...");
FlashStream stream;
int httpResponseCode = httpClient.sendRequest("POST", &stream, BUFFER_SIZE);
Serial.println("Speech sent!");
String text = "";
if (httpResponseCode == 200)
{
String result = httpClient.getString();
Serial.println(result);
DynamicJsonDocument doc(1024);
deserializeJson(doc, result.c_str());
JsonObject obj = doc.as<JsonObject>();
text = obj["DisplayText"].as<String>();
}
else if (httpResponseCode == 401)
{
Serial.println("Access token expired, trying again with a new token");
_access_token = getAccessToken();
return convertSpeechToText();
}
else
{
Serial.print("Failed to convert text to speech - error ");
Serial.println(httpResponseCode);
}
httpClient.end();
return text;
}
private:
String getAccessToken()
{
char url[128];
sprintf(url, TOKEN_URL, SPEECH_LOCATION);
HTTPClient httpClient;
httpClient.begin(_token_client, url);
httpClient.addHeader("Ocp-Apim-Subscription-Key", SPEECH_API_KEY);
int httpResultCode = httpClient.POST("{}");
if (httpResultCode != 200)
{
Serial.println("Error getting access token, trying again...");
delay(10000);
return getAccessToken();
}
Serial.println("Got access token.");
String result = httpClient.getString();
httpClient.end();
return result;
}
WiFiClientSecure _token_client;
WiFiClientSecure _speech_client;
String _access_token;
};
SpeechToText speechToText;

@ -0,0 +1,11 @@
This directory is intended for PlatformIO Unit Testing and project tests.
Unit Testing is a software testing method by which individual units of
source code, sets of one or more MCU program modules together with associated
control data, usage procedures, and operating procedures, are tested to
determine whether they are fit for use. Unit testing finds problems early
in the development cycle.
More information about PlatformIO Unit Testing:
- https://docs.platformio.org/page/plus/unit-testing.html

@ -1,6 +1,6 @@
# Set a timer - Virtual IoT Hardware and Raspberry Pi
In this part of the lesson, you will set a timer on your virtual IoT device or Raspberry Pi based off a command from the IoT Hub.
In this part of the lesson, you will call your serverless code to understand the speech, and set a timer on your virtual IoT device or Raspberry Pi based off the results.
## Set a timer

@ -1,3 +1,287 @@
# Set a timer - Wio Terminal
Coming soon
In this part of the lesson, you will call your serverless code to understand the speech, and set a timer on your Wio Terminal based off the results.
## Set a timer
The text that comes back from the speech to text call needs to be sent to your serverless code to be processed by LUIS, getting back the number of seconds for the timer. This number of seconds can be used to set a timer.
Arduino on microcontrollers doesn't natively support multiple threads, so there are no standard timer classes like you might find when coding in Python or other higher-level languages. Instead you can use timer libraries that work by measuring elapsed time in the `loop` function, and calling functions when the time is up.
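Under the hood, these libraries store a target time, then compare it to the value of `millis()` (the number of milliseconds since the board started) on every pass through `loop`. The following is a minimal sketch of that pattern - it is not the code of any particular library, and real libraries also handle details like multiple timers and `millis()` wrapping around:

```cpp
// A minimal sketch of the polling timer pattern - not production code
unsigned long timer_target = 0;

void startTimer(int seconds)
{
    // Store the time at which the timer should fire
    timer_target = millis() + ((unsigned long)seconds * 1000);
}

void loop()
{
    // Poll on every loop iteration to see if the target time has passed
    if (timer_target != 0 && millis() >= timer_target)
    {
        timer_target = 0;
        Serial.println("Timer fired!");
    }
}
```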
### Task - send the text to the serverless function
1. Open the `smart-timer` project in VS Code if it is not already open.
1. Open the `config.h` header file and add the URL for your function app:
```cpp
const char *TEXT_TO_TIMER_FUNCTION_URL = "<URL>";
```
Replace `<URL>` with the URL for your function app that you obtained in the last step of the last lesson, pointing to the IP address of your local machine that is running the function app.
1. Create a new file in the `src` folder called `language_understanding.h`. This will be used to define a class to send the recognized speech to your function app to be converted to seconds using LUIS.
1. Add the following to the top of this file:
```cpp
#pragma once
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <WiFiClient.h>
#include "config.h"
```
This includes some needed header files.
1. Define a class called `LanguageUnderstanding`, and declare an instance of this class:
```cpp
class LanguageUnderstanding
{
public:
private:
};
LanguageUnderstanding languageUnderstanding;
```
1. To call your functions app, you need to declare a WiFi client. Add the following to the `private` section of the class:
```cpp
WiFiClient _client;
```
1. In the `public` section, declare a method called `GetTimerDuration` to call the functions app:
```cpp
int GetTimerDuration(String text)
{
}
```
1. In the `GetTimerDuration` method, add the following code to build the JSON to be sent to the functions app:
```cpp
DynamicJsonDocument doc(1024);
doc["text"] = text;
String body;
serializeJson(doc, body);
```
This converts the text passed to the `GetTimerDuration` method into the following JSON:
```json
{
"text" : "<text>"
}
```
where `<text>` is the text passed to the function.
1. Below this, add the following code to make the functions app call:
```cpp
HTTPClient httpClient;
httpClient.begin(_client, TEXT_TO_TIMER_FUNCTION_URL);
int httpResponseCode = httpClient.POST(body);
```
This makes a POST request to the functions app, passing the JSON body and getting the response code.
1. Add the following code below this:
```cpp
int seconds = 0;
if (httpResponseCode == 200)
{
String result = httpClient.getString();
Serial.println(result);
DynamicJsonDocument doc(1024);
deserializeJson(doc, result.c_str());
JsonObject obj = doc.as<JsonObject>();
seconds = obj["seconds"].as<int>();
}
else
{
Serial.print("Failed to understand text - error ");
Serial.println(httpResponseCode);
}
```
This code checks the response code. If it is 200 (success), then the number of seconds for the timer is retrieved from the response body. Otherwise an error is sent to the serial monitor and the number of seconds remains 0.
1. Add the following code to the end of this method to close the HTTP connection and return the number of seconds:
```cpp
httpClient.end();
return seconds;
```
1. In the `main.cpp` file, include this new header:
```cpp
#include "speech_to_text.h"
```
1. At the end of the `processAudio` function, call the `GetTimerDuration` method to get the timer duration:
```cpp
int total_seconds = languageUnderstanding.GetTimerDuration(text);
```
This passes the text returned by the `SpeechToText` class to your serverless code, getting back the number of seconds for the timer.
### Task - set a timer
The number of seconds can be used to set a timer.
1. Add the following library dependency to the `platformio.ini` file to add a library to set a timer:
```ini
contrem/arduino-timer @ 2.3.0
```
1. Add an include directive for this library to the `main.cpp` file:
```cpp
#include <arduino-timer.h>
```
1. Above the `processAudio` function, add the following code:
```cpp
auto timer = timer_create_default();
```
This code declares a timer called `timer`.
1. Below this, add the following code:
```cpp
void say(String text)
{
Serial.print("Saying ");
Serial.println(text);
}
```
This `say` function will eventually convert text to speech, but for now it just writes the passed-in text to the serial monitor.
1. Below the `say` function, add the following code:
```cpp
bool timerExpired(void *announcement)
{
say((char *)announcement);
return false;
}
```
This is a callback function that will be called when a timer expires, and it is passed the message to say at that point. Timers can repeat, controlled by the return value of this callback - here it returns `false` to tell the timer not to run again.
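As an aside, the same `arduino-timer` library also provides an `every` method for repeating timers - the callback keeps being called at the given interval for as long as it returns `true`. A short sketch:

```cpp
// A repeating timer - prints a message every second until the
// callback returns false
bool sayTick(void *)
{
    Serial.println("tick");
    return true; // keep the timer repeating
}

timer.every(1000, sayTick);
```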
1. Add the following code to the end of the `processAudio` function:
```cpp
if (total_seconds == 0)
{
return;
}
int minutes = total_seconds / 60;
int seconds = total_seconds % 60;
```
This code checks the total number of seconds, and if it is 0, returns from the function call so no timers are set. It then converts the total number of seconds into minutes and seconds.
1. Below this code, add the following to create a message to say when the timer is started:
```cpp
String begin_message;
if (minutes > 0)
{
begin_message += minutes;
begin_message += " minute ";
}
if (seconds > 0)
{
begin_message += seconds;
begin_message += " second ";
}
begin_message += "timer started.";
```
1. Below this, add similar code to create a message to say when the timer has expired:
```cpp
String end_message("Times up on your ");
if (minutes > 0)
{
end_message += minutes;
end_message += " minute ";
}
if (seconds > 0)
{
end_message += seconds;
end_message += " second ";
}
end_message += "timer.";
```
1. After this, say the timer started message:
```cpp
say(begin_message);
```
1. At the end of this function, start the timer:
```cpp
timer.in(total_seconds * 1000, timerExpired, (void *)(end_message.c_str()));
```
This triggers the timer. The timer is set using milliseconds, so the total number of seconds is multiplied by 1,000 to convert to milliseconds. The `timerExpired` function is passed as the callback, and the `end_message` is passed as the argument for the callback. The callback only takes a `void *` argument, so the string is cast appropriately.
1. Finally, the timer needs to *tick*, and this is done in the `loop` function. Add the following code at the end of the `loop` function:
```cpp
timer.tick();
```
1. Build this code, upload it to your Wio Terminal and test it out through the serial monitor. Once you see `Ready` in the serial monitor, press the C button (the one on the left-hand side, closest to the power switch), and speak. 4 seconds of audio will be captured, converted to text, then sent to your function app, and a timer will be set. Make sure your functions app is running locally.
You will see when the timer starts, and when it ends.
```output
--- Available filters and text transformations: colorize, debug, default, direct, hexlify, log2file, nocontrol, printable, send_on_enter, time
--- More details at http://bit.ly/pio-monitor-filters
--- Miniterm on /dev/cu.usbmodem1101 9600,8,N,1 ---
--- Quit: Ctrl+C | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
Connecting to WiFi..
Connected!
Got access token.
Ready.
Starting recording...
Finished recording
Sending speech...
Speech sent!
{"RecognitionStatus":"Success","DisplayText":"Set a 2 minute and 27 second timer.","Offset":4700000,"Duration":35300000}
Set a 2 minute and 27 second timer.
{"seconds": 147}
2 minute 27 second timer started.
Time's up on your 2 minute 27 second timer.
```
> 💁 You can find this code in the [code-timer/wio-terminal](code-timer/wio-terminal) folder.
😀 Your timer program was a success!

@ -1,3 +1,522 @@
# Text to speech - Wio Terminal
Coming soon
In this part of the lesson, you will convert text to speech to provide spoken feedback.
## Text to speech
The speech services SDK that you used in the last lesson to convert speech to text can be used to convert text back to speech.
## Get a list of voices
When requesting speech, you need to provide the voice to use, as speech can be generated using a variety of different voices. Each language supports a range of different voices, and you can get the list of supported voices for each language from the speech services SDK. The limitations of microcontrollers come into play here - the response from the call to get the list of voices supported by the text to speech services is a JSON document of over 77KB in size, far too large to be processed by the Wio Terminal. At the time of writing, the full list contains 215 voices, each defined by a JSON document like the following:
```json
{
"Name": "Microsoft Server Speech Text to Speech Voice (en-US, AriaNeural)",
"DisplayName": "Aria",
"LocalName": "Aria",
"ShortName": "en-US-AriaNeural",
"Gender": "Female",
"Locale": "en-US",
"StyleList": [
"chat",
"customerservice",
"narration-professional",
"newscast-casual",
"newscast-formal",
"cheerful",
"empathetic"
],
"SampleRateHertz": "24000",
"VoiceType": "Neural",
"Status": "GA"
}
```
This JSON is for the **Aria** voice, which has multiple voice styles. All that is needed when converting text to speech is the shortname, `en-US-AriaNeural`.
Instead of downloading and decoding this entire list on your microcontroller, you will need to write some more serverless code to retrieve the list of voices for the language you are using, and call this from your Wio Terminal. Your code can then pick an appropriate voice from the list, such as the first one it finds.
### Task - create a serverless function to get a list of voices
1. Open your `smart-timer-trigger` project in VS Code, and open the terminal ensuring the virtual environment is activated. If not, kill and re-create the terminal.
1. Open the `local.settings.json` file and add settings for the speech API key and location:
```json
"SPEECH_KEY": "<key>",
"SPEECH_LOCATION": "<location>"
```
Replace `<key>` with the API key for your speech service resource. Replace `<location>` with the location you used when you created the speech service resource.
1. Add a new HTTP trigger to this app called `get-voices` using the following command from inside the VS Code terminal in the root folder of the functions app project:
```sh
func new --name get-voices --template "HTTP trigger"
```
This will create an HTTP trigger called `get-voices`.
1. Replace the contents of the `__init__.py` file in the `get-voices` folder with the following:
```python
import json
import os
import requests
import azure.functions as func
def main(req: func.HttpRequest) -> func.HttpResponse:
location = os.environ['SPEECH_LOCATION']
speech_key = os.environ['SPEECH_KEY']
req_body = req.get_json()
language = req_body['language']
url = f'https://{location}.tts.speech.microsoft.com/cognitiveservices/voices/list'
headers = {
'Ocp-Apim-Subscription-Key': speech_key
}
response = requests.get(url, headers=headers)
voices_json = json.loads(response.text)
voices = filter(lambda x: x['Locale'].lower() == language.lower(), voices_json)
voices = map(lambda x: x['ShortName'], voices)
return func.HttpResponse(json.dumps(list(voices)), status_code=200)
```
This code makes an HTTP request to the endpoint to get the voices. This voices list is a large block of JSON with voices for all languages, so it is filtered down to just the voices for the language passed in the request body, then the short names are extracted and returned as a JSON list. The short name is the value needed to convert text to speech, so only this value is returned.
> 💁 You can change the filter as necessary to select just the voices you want.
This reduces the size of the data from over 77KB (at the time of writing) to a much smaller JSON document. For example, for US voices the response is 408 bytes.
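As a sketch of what a stricter filter might look like, the following keeps only neural voices for the requested language using the `VoiceType` field from the voice JSON shown earlier. The sample entries and their `VoiceType` values are illustrative, included so the sketch runs standalone:

```python
# Sample entries with the same shape as the voices list from the speech service
voices_json = [
    {'ShortName': 'en-US-AriaNeural', 'Locale': 'en-US', 'VoiceType': 'Neural'},
    {'ShortName': 'en-US-BenjaminRUS', 'Locale': 'en-US', 'VoiceType': 'Standard'},
]
language = 'en-US'

# Keep only neural voices for the requested language
voices = filter(lambda x: x['Locale'].lower() == language.lower() and x['VoiceType'] == 'Neural',
                voices_json)
print(list(map(lambda x: x['ShortName'], voices)))  # ['en-US-AriaNeural']
```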
1. Run your function app locally. You can then call this using a tool like curl in the same way that you tested your `text-to-timer` HTTP trigger. Make sure to pass your language as a JSON body:
```json
{
"language":"<language>"
}
```
Replace `<language>` with your language, such as `en-GB`, or `zh-CN`.
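If you prefer to test from Python rather than curl, the following minimal sketch posts the same body using the `requests` package - it assumes the function app is running locally on the default Functions port of 7071:

```python
import requests

# Assumes the function app is running locally on the default port of 7071
response = requests.post('http://localhost:7071/api/get-voices',
                         json={'language': 'en-US'})
print(response.json())
```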
> 💁 You can find this code in the [code-spoken-response/functions](code-spoken-response/functions) folder.
### Task - retrieve the voice from your Wio Terminal
1. Open the `smart-timer` project in VS Code if it is not already open.
1. Open the `config.h` header file and add the URL for your function app:
```cpp
const char *GET_VOICES_FUNCTION_URL = "<URL>";
```
Replace `<URL>` with the URL for the `get-voices` HTTP trigger on your function app. This will be the same as the value for `TEXT_TO_TIMER_FUNCTION_URL`, except with a function name of `get-voices` instead of `text-to-timer`.
1. Create a new file in the `src` folder called `text_to_speech.h`. This will be used to define a class to convert from text to speech.
1. Add the following include directives to the top of the new `text_to_speech.h` file:
```cpp
#pragma once
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <Seeed_FS.h>
#include <SD/Seeed_SD.h>
#include <WiFiClient.h>
#include <WiFiClientSecure.h>
#include "config.h"
#include "speech_to_text.h"
```
1. Add the following code below this to declare the `TextToSpeech` class, along with an instance that can be used in the rest of the application:
```cpp
class TextToSpeech
{
public:
private:
};
TextToSpeech textToSpeech;
```
1. To call your functions app, you need to declare a WiFi client. Add the following to the `private` section of the class:
```cpp
WiFiClient _client;
```
1. In the `private` section, add a field for the selected voice:
```cpp
String _voice;
```
1. To the `public` section, add an `init` function that will get the first voice:
```cpp
void init()
{
}
```
1. To get the voices, a JSON document needs to be sent to the function app with the language. Add the following code to the `init` function to create this JSON document:
```cpp
DynamicJsonDocument doc(1024);
doc["language"] = LANGUAGE;
String body;
serializeJson(doc, body);
```
1. Next create an `HTTPClient`, then use it to call the functions app to get the voices, posting the JSON document:
```cpp
HTTPClient httpClient;
httpClient.begin(_client, GET_VOICES_FUNCTION_URL);
int httpResponseCode = httpClient.POST(body);
```
1. Below this add code to check the response code, and if it is 200 (success), then extract the list of voices, retrieving the first one from the list:
```cpp
if (httpResponseCode == 200)
{
String result = httpClient.getString();
Serial.println(result);
DynamicJsonDocument doc(1024);
deserializeJson(doc, result.c_str());
JsonArray obj = doc.as<JsonArray>();
_voice = obj[0].as<String>();
Serial.print("Using voice ");
Serial.println(_voice);
}
else
{
Serial.print("Failed to get voices - error ");
Serial.println(httpResponseCode);
}
```
1. After this, end the HTTP client connection:
```cpp
httpClient.end();
```
1. Open the `main.cpp` file, and add the following include directive at the top to include this new header file:
```cpp
#include "text_to_speech.h"
```
1. In the `setup` function, underneath the call to `speechToText.init();`, add the following to initialize the `TextToSpeech` class:
```cpp
textToSpeech.init();
```
1. Build this code, upload it to your Wio Terminal and test it out through the serial monitor. Make sure your function app is running.
You will see the list of available voices returned from the function app, along with the selected voice.
```output
--- Available filters and text transformations: colorize, debug, default, direct, hexlify, log2file, nocontrol, printable, send_on_enter, time
--- More details at http://bit.ly/pio-monitor-filters
--- Miniterm on /dev/cu.usbmodem1101 9600,8,N,1 ---
--- Quit: Ctrl+C | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
Connecting to WiFi..
Connected!
Got access token.
["en-US-JennyNeural", "en-US-JennyMultilingualNeural", "en-US-GuyNeural", "en-US-AriaNeural", "en-US-AmberNeural", "en-US-AnaNeural", "en-US-AshleyNeural", "en-US-BrandonNeural", "en-US-ChristopherNeural", "en-US-CoraNeural", "en-US-ElizabethNeural", "en-US-EricNeural", "en-US-JacobNeural", "en-US-MichelleNeural", "en-US-MonicaNeural", "en-US-AriaRUS", "en-US-BenjaminRUS", "en-US-GuyRUS", "en-US-ZiraRUS"]
Using voice en-US-JennyNeural
Ready.
```
## Convert text to speech
Once you have a voice to use, it can be used to convert text to speech. The same memory limitations that applied to the list of voices also apply when converting text to speech, so you will need to write the speech to an SD card ready to be played over the ReSpeaker.
> 💁 In earlier lessons in this project you used flash memory to store speech captured from the microphone. This lesson uses an SD card as it is easier to play audio from it using the Seeed audio libraries.
There is another limitation to consider - the audio formats available from the speech service, and the formats that the Wio Terminal supports. Unlike full computers, audio libraries for microcontrollers can be very limited in the audio formats they support. For example, the Seeed Arduino Audio library that can play sound over the ReSpeaker only supports audio at a 44.1KHz sample rate. The Azure speech services can provide audio in a number of formats, but none of them use this sample rate - they only provide 8KHz, 16KHz, 24KHz and 48KHz. This means the audio needs to be re-sampled to 44.1KHz, something that would need more resources than the Wio Terminal has, especially memory - one second of 16-bit mono audio at 44.1KHz alone takes over 86KB, a large chunk of the Wio Terminal's 192KB of RAM.
When you need to manipulate data like this, it is often better to use serverless code, especially if the data is sourced via a web call. The Wio Terminal can call a serverless function, passing in the text to convert, and the serverless function can both call the speech service to convert the text to speech, and re-sample the audio to the required sample rate. It can then return the audio in a form the Wio Terminal can store on the SD card and play over the ReSpeaker.
### Task - create a serverless function to convert text to speech
1. Open your `smart-timer-trigger` project in VS Code, and open the terminal ensuring the virtual environment is activated. If not, kill and re-create the terminal.
1. Add a new HTTP trigger to this app called `text-to-speech` using the following command from inside the VS Code terminal in the root folder of the functions app project:
```sh
func new --name text-to-speech --template "HTTP trigger"
```
This will create an HTTP trigger called `text-to-speech`.
1. The [librosa](https://librosa.org) pip package has functions to re-sample audio, so add this to the `requirements.txt` file:
```sh
librosa
```
Once this has been added, install the pip packages using the following command from the VS Code terminal:
```sh
pip install -r requirements.txt
```
> ⚠️ If you are using Linux, including Raspberry Pi OS, you may need to install `libsndfile` with the following command:
>
> ```sh
> sudo apt update
> sudo apt install libsndfile1-dev
> ```
1. To convert text to speech, you cannot use the speech API key directly. Instead you need to request an access token, using the API key to authenticate the access token request. Open the `__init__.py` file from the `text-to-speech` folder and replace all the code in it with the following:
```python
import io
import os
import requests
import librosa
import soundfile as sf
import azure.functions as func
location = os.environ['SPEECH_LOCATION']
speech_key = os.environ['SPEECH_KEY']
def get_access_token():
headers = {
'Ocp-Apim-Subscription-Key': speech_key
}
token_endpoint = f'https://{location}.api.cognitive.microsoft.com/sts/v1.0/issuetoken'
response = requests.post(token_endpoint, headers=headers)
return str(response.text)
```
This defines constants for the location and speech key that will be read from the settings. It then defines the `get_access_token` function that will retrieve an access token for the speech service.
1. Below this code, add the following:
```python
playback_format = 'riff-48khz-16bit-mono-pcm'
def main(req: func.HttpRequest) -> func.HttpResponse:
req_body = req.get_json()
language = req_body['language']
voice = req_body['voice']
text = req_body['text']
url = f'https://{location}.tts.speech.microsoft.com/cognitiveservices/v1'
headers = {
'Authorization': 'Bearer ' + get_access_token(),
'Content-Type': 'application/ssml+xml',
'X-Microsoft-OutputFormat': playback_format
}
ssml = f'<speak version=\'1.0\' xml:lang=\'{language}\'>'
ssml += f'<voice xml:lang=\'{language}\' name=\'{voice}\'>'
ssml += text
ssml += '</voice>'
ssml += '</speak>'
response = requests.post(url, headers=headers, data=ssml.encode('utf-8'))
raw_audio, sample_rate = librosa.load(io.BytesIO(response.content), sr=48000)
resampled = librosa.resample(raw_audio, sample_rate, 44100)
output_buffer = io.BytesIO()
sf.write(output_buffer, resampled, 44100, 'PCM_16', format='wav')
output_buffer.seek(0)
return func.HttpResponse(output_buffer.read(), status_code=200)
```
This defines the HTTP trigger that converts the text to speech. It extracts the text to convert, the language and the voice from the JSON body sent with the request, builds some SSML to request the speech, then calls the relevant REST API, authenticating using the access token. This REST API call returns the audio encoded as a 16-bit, 48KHz mono WAV file, as defined by the value of `playback_format`, which is passed to the REST API call in the `X-Microsoft-OutputFormat` header.
This audio is then re-sampled by `librosa` from 48KHz to 44.1KHz, and the result is written to an in-memory binary buffer that is returned as the HTTP response.
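> 💁 The positional arguments to `librosa.resample` match the version of librosa available at the time of writing. If you have a newer release installed (0.10 or later), the sample rates are keyword-only arguments, so the resample line would need to change as sketched below, with everything else unchanged:
>
> ```python
> # librosa 0.10+ requires the sample rates as keyword arguments
> resampled = librosa.resample(raw_audio, orig_sr=sample_rate, target_sr=44100)
> ```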
1. Run your function app locally, or deploy it to the cloud. You can then call this using a tool like curl in the same way that you tested your `text-to-timer` HTTP trigger. Make sure to pass the language, voice and text as the JSON body:
```json
{
"language": "<language>",
"voice": "<voice>",
"text": "<text>"
}
```
Replace `<language>` with your language, such as `en-GB`, or `zh-CN`. Replace `<voice>` with the voice you want to use. Replace `<text>` with the text you want to convert to speech. You can save the output to a file and play it with any audio player that can play WAV files.
For example, to convert "Hello" to speech using US English with the Jenny Neural voice, with the function app running locally, you can use the following curl command:
```sh
curl -X GET 'http://localhost:7071/api/text-to-speech' \
-H 'Content-Type: application/json' \
-o hello.wav \
-d '{
"language":"en-US",
"voice": "en-US-JennyNeural",
"text": "Hello"
}'
```
This will save the audio to `hello.wav` in the current directory.
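Behind the scenes, the serverless function wraps the text in an SSML document before posting it to the speech service. For the request above, the SSML built by the code would be the following (shown here with line breaks added for readability - the code builds it as a single string):

```xml
<speak version='1.0' xml:lang='en-US'>
    <voice xml:lang='en-US' name='en-US-JennyNeural'>Hello</voice>
</speak>
```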
> 💁 You can find this code in the [code-spoken-response/functions](code-spoken-response/functions) folder.
### Task - retrieve the speech from your Wio Terminal
1. Open the `smart-timer` project in VS Code if it is not already open.
1. Open the `config.h` header file and add the URL for your function app:
```cpp
const char *TEXT_TO_SPEECH_FUNCTION_URL = "<URL>";
```
Replace `<URL>` with the URL for the `text-to-speech` HTTP trigger on your function app. This will be the same as the value for `TEXT_TO_TIMER_FUNCTION_URL`, except with a function name of `text-to-speech` instead of `text-to-timer`.
1. Open the `text_to_speech.h` header file, and add the following method to the `public` section of the `TextToSpeech` class:
```cpp
void convertTextToSpeech(String text)
{
}
```
1. To the `convertTextToSpeech` method, add the following code to create the JSON to send to the function app:
```cpp
DynamicJsonDocument doc(1024);
doc["language"] = LANGUAGE;
doc["voice"] = _voice;
doc["text"] = text;
String body;
serializeJson(doc, body);
```
This writes the language, voice and text to the JSON document, then serializes it to a string.
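As an illustration, for a US English setup using the Jenny Neural voice, the serialized body sent to the function app would look something like this - the text value here is a typical timer announcement built later in the lesson:

```json
{
    "language": "en-US",
    "voice": "en-US-JennyNeural",
    "text": "3 minute timer started."
}
```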
1. Below this, add the following code to call the function app:
```cpp
HTTPClient httpClient;
httpClient.begin(_client, TEXT_TO_SPEECH_FUNCTION_URL);
int httpResponseCode = httpClient.POST(body);
```
This creates an HTTPClient, then makes a POST request using the JSON document to the text to speech HTTP trigger.
1. If the call succeeds, the raw binary data returned from the function app can be streamed to a file on the SD card. Add the following code to do this:
```cpp
if (httpResponseCode == 200)
{
File wav_file = SD.open("SPEECH.WAV", FILE_WRITE);
httpClient.writeToStream(&wav_file);
wav_file.close();
}
else
{
Serial.print("Failed to get speech - error ");
Serial.println(httpResponseCode);
}
```
This code checks the response, and if it is 200 (success), the binary data is streamed to a file in the root of the SD Card called `SPEECH.WAV`.
1. At the end of this method, close the HTTP connection:
```cpp
httpClient.end();
```
1. The text to be spoken can now be converted to audio. In the `main.cpp` file, add the following line to the end of the `say` function to convert the text to say into audio:
```cpp
textToSpeech.convertTextToSpeech(text);
```
### Task - play audio from your Wio Terminal
**Coming soon**
## Deploying your functions app to the cloud
The reason for running the functions app locally is that the `librosa` pip package on Linux depends on a library that is not installed by default, and this library will need to be installed before the function app can run. Function apps are serverless - there are no servers you can manage yourself, so there is no way to install this library up front.
Instead, the way round this is to deploy your functions app using a Docker container. This container is deployed by the cloud whenever it needs to spin up a new instance of your function app (such as when demand exceeds the available resources, or if the function app hasn't been used for a while and has been shut down).
You can find the instructions to set up a function app and deploy via Docker in the [create a function on Linux using a custom container documentation on Microsoft Docs](https://docs.microsoft.com/azure/azure-functions/functions-create-function-linux-custom-image?tabs=bash%2Cazurecli&pivots=programming-language-python&WT.mc_id=academic-17441-jabenn).
Once this has been deployed, you can update your Wio Terminal code to access this function:
1. Add the Azure Functions certificate to `config.h`:
```cpp
const char *FUNCTIONS_CERTIFICATE =
"-----BEGIN CERTIFICATE-----\r\n"
"MIIFWjCCBEKgAwIBAgIQDxSWXyAgaZlP1ceseIlB4jANBgkqhkiG9w0BAQsFADBa\r\n"
"MQswCQYDVQQGEwJJRTESMBAGA1UEChMJQmFsdGltb3JlMRMwEQYDVQQLEwpDeWJl\r\n"
"clRydXN0MSIwIAYDVQQDExlCYWx0aW1vcmUgQ3liZXJUcnVzdCBSb290MB4XDTIw\r\n"
"MDcyMTIzMDAwMFoXDTI0MTAwODA3MDAwMFowTzELMAkGA1UEBhMCVVMxHjAcBgNV\r\n"
"BAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEgMB4GA1UEAxMXTWljcm9zb2Z0IFJT\r\n"
"QSBUTFMgQ0EgMDEwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCqYnfP\r\n"
"mmOyBoTzkDb0mfMUUavqlQo7Rgb9EUEf/lsGWMk4bgj8T0RIzTqk970eouKVuL5R\r\n"
"IMW/snBjXXgMQ8ApzWRJCZbar879BV8rKpHoAW4uGJssnNABf2n17j9TiFy6BWy+\r\n"
"IhVnFILyLNK+W2M3zK9gheiWa2uACKhuvgCca5Vw/OQYErEdG7LBEzFnMzTmJcli\r\n"
"W1iCdXby/vI/OxbfqkKD4zJtm45DJvC9Dh+hpzqvLMiK5uo/+aXSJY+SqhoIEpz+\r\n"
"rErHw+uAlKuHFtEjSeeku8eR3+Z5ND9BSqc6JtLqb0bjOHPm5dSRrgt4nnil75bj\r\n"
"c9j3lWXpBb9PXP9Sp/nPCK+nTQmZwHGjUnqlO9ebAVQD47ZisFonnDAmjrZNVqEX\r\n"
"F3p7laEHrFMxttYuD81BdOzxAbL9Rb/8MeFGQjE2Qx65qgVfhH+RsYuuD9dUw/3w\r\n"
"ZAhq05yO6nk07AM9c+AbNtRoEcdZcLCHfMDcbkXKNs5DJncCqXAN6LhXVERCw/us\r\n"
"G2MmCMLSIx9/kwt8bwhUmitOXc6fpT7SmFvRAtvxg84wUkg4Y/Gx++0j0z6StSeN\r\n"
"0EJz150jaHG6WV4HUqaWTb98Tm90IgXAU4AW2GBOlzFPiU5IY9jt+eXC2Q6yC/Zp\r\n"
"TL1LAcnL3Qa/OgLrHN0wiw1KFGD51WRPQ0Sh7QIDAQABo4IBJTCCASEwHQYDVR0O\r\n"
"BBYEFLV2DDARzseSQk1Mx1wsyKkM6AtkMB8GA1UdIwQYMBaAFOWdWTCCR1jMrPoI\r\n"
"VDaGezq1BE3wMA4GA1UdDwEB/wQEAwIBhjAdBgNVHSUEFjAUBggrBgEFBQcDAQYI\r\n"
"KwYBBQUHAwIwEgYDVR0TAQH/BAgwBgEB/wIBADA0BggrBgEFBQcBAQQoMCYwJAYI\r\n"
"KwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNvbTA6BgNVHR8EMzAxMC+g\r\n"
"LaArhilodHRwOi8vY3JsMy5kaWdpY2VydC5jb20vT21uaXJvb3QyMDI1LmNybDAq\r\n"
"BgNVHSAEIzAhMAgGBmeBDAECATAIBgZngQwBAgIwCwYJKwYBBAGCNyoBMA0GCSqG\r\n"
"SIb3DQEBCwUAA4IBAQCfK76SZ1vae4qt6P+dTQUO7bYNFUHR5hXcA2D59CJWnEj5\r\n"
"na7aKzyowKvQupW4yMH9fGNxtsh6iJswRqOOfZYC4/giBO/gNsBvwr8uDW7t1nYo\r\n"
"DYGHPpvnpxCM2mYfQFHq576/TmeYu1RZY29C4w8xYBlkAA8mDJfRhMCmehk7cN5F\r\n"
"JtyWRj2cZj/hOoI45TYDBChXpOlLZKIYiG1giY16vhCRi6zmPzEwv+tk156N6cGS\r\n"
"Vm44jTQ/rs1sa0JSYjzUaYngoFdZC4OfxnIkQvUIA4TOFmPzNPEFdjcZsgbeEz4T\r\n"
"cGHTBPK4R28F44qIMCtHRV55VMX53ev6P3hRddJb\r\n"
"-----END CERTIFICATE-----\r\n";
```
1. Change all includes of `<WiFiClient.h>` to `<WiFiClientSecure.h>`.
1. Change all `WiFiClient` fields to `WiFiClientSecure`.
1. In every class that has a `WiFiClientSecure` field, add a constructor and set the certificate in that constructor:
```cpp
_client.setCACert(FUNCTIONS_CERTIFICATE);
```

@ -0,0 +1,26 @@
import json
import os
import requests
import azure.functions as func
def main(req: func.HttpRequest) -> func.HttpResponse:
location = os.environ['SPEECH_LOCATION']
speech_key = os.environ['SPEECH_KEY']
req_body = req.get_json()
language = req_body['language']
url = f'https://{location}.tts.speech.microsoft.com/cognitiveservices/voices/list'
headers = {
'Ocp-Apim-Subscription-Key': speech_key
}
response = requests.get(url, headers=headers)
voices_json = json.loads(response.text)
voices = filter(lambda x: x['Locale'].lower() == language.lower(), voices_json)
voices = map(lambda x: x['ShortName'], voices)
return func.HttpResponse(json.dumps(list(voices)), status_code=200)

@ -0,0 +1,20 @@
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}

@ -0,0 +1,15 @@
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[2.*, 3.0.0)"
}
}

@ -0,0 +1,14 @@
{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "python",
"AzureWebJobsStorage": "",
"LUIS_KEY": "<primary key>",
"LUIS_ENDPOINT_URL": "<endpoint url>",
"LUIS_APP_ID": "<app id>",
"SPEECH_KEY": "<key>",
"SPEECH_LOCATION": "<location>",
"TRANSLATOR_KEY": "<key>",
"TRANSLATOR_LOCATION": "<location>"
}
}

@ -0,0 +1,5 @@
# Do not include azure-functions-worker as it may conflict with the Azure Functions platform
azure-functions
azure-cognitiveservices-language-luis
librosa

@ -0,0 +1,52 @@
import io
import os
import requests
import librosa
import soundfile as sf
import azure.functions as func
location = os.environ['SPEECH_LOCATION']
speech_key = os.environ['SPEECH_KEY']
def get_access_token():
headers = {
'Ocp-Apim-Subscription-Key': speech_key
}
token_endpoint = f'https://{location}.api.cognitive.microsoft.com/sts/v1.0/issuetoken'
response = requests.post(token_endpoint, headers=headers)
return str(response.text)
playback_format = 'riff-48khz-16bit-mono-pcm'
def main(req: func.HttpRequest) -> func.HttpResponse:
req_body = req.get_json()
language = req_body['language']
voice = req_body['voice']
text = req_body['text']
url = f'https://{location}.tts.speech.microsoft.com/cognitiveservices/v1'
headers = {
'Authorization': 'Bearer ' + get_access_token(),
'Content-Type': 'application/ssml+xml',
'X-Microsoft-OutputFormat': playback_format
}
ssml = f'<speak version=\'1.0\' xml:lang=\'{language}\'>'
ssml += f'<voice xml:lang=\'{language}\' name=\'{voice}\'>'
ssml += text
ssml += '</voice>'
ssml += '</speak>'
response = requests.post(url, headers=headers, data=ssml.encode('utf-8'))
raw_audio, sample_rate = librosa.load(io.BytesIO(response.content), sr=48000)
resampled = librosa.resample(raw_audio, sample_rate, 44100)
output_buffer = io.BytesIO()
sf.write(output_buffer, resampled, 44100, 'PCM_16', format='wav')
output_buffer.seek(0)
return func.HttpResponse(output_buffer.read(), status_code=200)

@ -0,0 +1,20 @@
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}

@ -0,0 +1,46 @@
import logging
import azure.functions as func
import json
import os
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials
def main(req: func.HttpRequest) -> func.HttpResponse:
luis_key = os.environ['LUIS_KEY']
endpoint_url = os.environ['LUIS_ENDPOINT_URL']
app_id = os.environ['LUIS_APP_ID']
credentials = CognitiveServicesCredentials(luis_key)
client = LUISRuntimeClient(endpoint=endpoint_url, credentials=credentials)
req_body = req.get_json()
text = req_body['text']
logging.info(f'Request - {text}')
prediction_request = { 'query' : text }
prediction_response = client.prediction.get_slot_prediction(app_id, 'Staging', prediction_request)
if prediction_response.prediction.top_intent == 'set timer':
numbers = prediction_response.prediction.entities['number']
time_units = prediction_response.prediction.entities['time unit']
total_seconds = 0
for i in range(0, len(numbers)):
number = numbers[i]
time_unit = time_units[i][0]
if time_unit == 'minute':
total_seconds += number * 60
else:
total_seconds += number
logging.info(f'Timer required for {total_seconds} seconds')
payload = {
'seconds': total_seconds
}
return func.HttpResponse(json.dumps(payload), status_code=200)
return func.HttpResponse(status_code=404)

@ -0,0 +1,20 @@
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}

@ -0,0 +1,36 @@
import logging
import os
import requests
import azure.functions as func
location = os.environ['TRANSLATOR_LOCATION']
translator_key = os.environ['TRANSLATOR_KEY']
def main(req: func.HttpRequest) -> func.HttpResponse:
req_body = req.get_json()
from_language = req_body['from_language']
to_language = req_body['to_language']
text = req_body['text']
logging.info(f'Translating {text} from {from_language} to {to_language}')
url = f'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0'
headers = {
'Ocp-Apim-Subscription-Key': translator_key,
'Ocp-Apim-Subscription-Region': location,
'Content-type': 'application/json'
}
params = {
'from': from_language,
'to': to_language
}
body = [{
'text' : text
}]
response = requests.post(url, headers=headers, params=params, json=body)
return func.HttpResponse(response.json()[0]['translations'][0]['text'])

@ -0,0 +1,20 @@
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}

@ -0,0 +1,39 @@
This directory is intended for project header files.
A header file is a file containing C declarations and macro definitions
to be shared between several project source files. You request the use of a
header file in your project source file (C, C++, etc) located in `src` folder
by including it, with the C preprocessing directive `#include'.
```src/main.c
#include "header.h"
int main (void)
{
...
}
```
Including a header file produces the same results as copying the header file
into each source file that needs it. Such copying would be time-consuming
and error-prone. With a header file, the related declarations appear
in only one place. If they need to be changed, they can be changed in one
place, and programs that include the header file will automatically use the
new version when next recompiled. The header file eliminates the labor of
finding and changing all the copies as well as the risk that a failure to
find one copy will result in inconsistencies within a program.
In C, the usual convention is to give header files names that end with `.h'.
It is most portable to use only letters, digits, dashes, and underscores in
header file names, and at most one dot.
Read more about using header files in official GCC documentation:
* Include Syntax
* Include Operation
* Once-Only Headers
* Computed Includes
https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html

@ -0,0 +1,46 @@
This directory is intended for project specific (private) libraries.
PlatformIO will compile them to static libraries and link into executable file.
The source code of each library should be placed in a an own separate directory
("lib/your_library_name/[here are source files]").
For example, see a structure of the following two libraries `Foo` and `Bar`:
|--lib
| |
| |--Bar
| | |--docs
| | |--examples
| | |--src
| | |- Bar.c
| | |- Bar.h
| | |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
| |
| |--Foo
| | |- Foo.c
| | |- Foo.h
| |
| |- README --> THIS FILE
|
|- platformio.ini
|--src
|- main.c
and a contents of `src/main.c`:
```
#include <Foo.h>
#include <Bar.h>
int main (void)
{
...
}
```
PlatformIO Library Dependency Finder will find automatically dependent
libraries scanning project source files.
More information about PlatformIO Library Dependency Finder
- https://docs.platformio.org/page/librarymanager/ldf.html

@ -0,0 +1,23 @@
; PlatformIO Project Configuration File
;
; Build options: build flags, source filter
; Upload options: custom upload port, speed and extra flags
; Library options: dependencies, extra library storages
; Advanced options: extra scripting
;
; Please visit documentation for the other options and examples
; https://docs.platformio.org/page/projectconf.html
[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino FS @ 2.1.1
seeed-studio/Seeed Arduino SFUD @ 2.0.2
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.5
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0
bblanchon/ArduinoJson @ 6.17.3
contrem/arduino-timer @ 2.3.0

@ -0,0 +1,95 @@
#pragma once
#define RATE 16000
#define SAMPLE_LENGTH_SECONDS 4
#define SAMPLES RATE * SAMPLE_LENGTH_SECONDS
#define BUFFER_SIZE (SAMPLES * 2) + 44
#define ADC_BUF_LEN 1600
const char *SSID = "<SSID>";
const char *PASSWORD = "<PASSWORD>";
const char *SPEECH_API_KEY = "<API_KEY>";
const char *SPEECH_LOCATION = "<LOCATION>";
const char *LANGUAGE = "<LANGUAGE>";
const char *SERVER_LANGUAGE = "<LANGUAGE>";
const char *TOKEN_URL = "https://%s.api.cognitive.microsoft.com/sts/v1.0/issuetoken";
const char *SPEECH_URL = "https://%s.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=%s";
const char *TEXT_TO_TIMER_FUNCTION_URL = "http://<IP_ADDRESS>:7071/api/text-to-timer";
const char *GET_VOICES_FUNCTION_URL = "http://<IP_ADDRESS>:7071/api/get-voices";
const char *TEXT_TO_SPEECH_FUNCTION_URL = "http://<IP_ADDRESS>:7071/api/text-to-speech";
const char *TRANSLATE_FUNCTION_URL = "http://<IP_ADDRESS>:7071/api/translate-text";
const char *TOKEN_CERTIFICATE =
"-----BEGIN CERTIFICATE-----\r\n"
"MIIF8zCCBNugAwIBAgIQAueRcfuAIek/4tmDg0xQwDANBgkqhkiG9w0BAQwFADBh\r\n"
"MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\r\n"
"d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH\r\n"
"MjAeFw0yMDA3MjkxMjMwMDBaFw0yNDA2MjcyMzU5NTlaMFkxCzAJBgNVBAYTAlVT\r\n"
"MR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKjAoBgNVBAMTIU1pY3Jv\r\n"
"c29mdCBBenVyZSBUTFMgSXNzdWluZyBDQSAwNjCCAiIwDQYJKoZIhvcNAQEBBQAD\r\n"
"ggIPADCCAgoCggIBALVGARl56bx3KBUSGuPc4H5uoNFkFH4e7pvTCxRi4j/+z+Xb\r\n"
"wjEz+5CipDOqjx9/jWjskL5dk7PaQkzItidsAAnDCW1leZBOIi68Lff1bjTeZgMY\r\n"
"iwdRd3Y39b/lcGpiuP2d23W95YHkMMT8IlWosYIX0f4kYb62rphyfnAjYb/4Od99\r\n"
"ThnhlAxGtfvSbXcBVIKCYfZgqRvV+5lReUnd1aNjRYVzPOoifgSx2fRyy1+pO1Uz\r\n"
"aMMNnIOE71bVYW0A1hr19w7kOb0KkJXoALTDDj1ukUEDqQuBfBxReL5mXiu1O7WG\r\n"
"0vltg0VZ/SZzctBsdBlx1BkmWYBW261KZgBivrql5ELTKKd8qgtHcLQA5fl6JB0Q\r\n"
"gs5XDaWehN86Gps5JW8ArjGtjcWAIP+X8CQaWfaCnuRm6Bk/03PQWhgdi84qwA0s\r\n"
"sRfFJwHUPTNSnE8EiGVk2frt0u8PG1pwSQsFuNJfcYIHEv1vOzP7uEOuDydsmCjh\r\n"
"lxuoK2n5/2aVR3BMTu+p4+gl8alXoBycyLmj3J/PUgqD8SL5fTCUegGsdia/Sa60\r\n"
"N2oV7vQ17wjMN+LXa2rjj/b4ZlZgXVojDmAjDwIRdDUujQu0RVsJqFLMzSIHpp2C\r\n"
"Zp7mIoLrySay2YYBu7SiNwL95X6He2kS8eefBBHjzwW/9FxGqry57i71c2cDAgMB\r\n"
"AAGjggGtMIIBqTAdBgNVHQ4EFgQU1cFnOsKjnfR3UltZEjgp5lVou6UwHwYDVR0j\r\n"
"BBgwFoAUTiJUIBiV5uNu5g/6+rkS7QYXjzkwDgYDVR0PAQH/BAQDAgGGMB0GA1Ud\r\n"
"JQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMHYG\r\n"
"CCsGAQUFBwEBBGowaDAkBggrBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQu\r\n"
"Y29tMEAGCCsGAQUFBzAChjRodHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGln\r\n"
"aUNlcnRHbG9iYWxSb290RzIuY3J0MHsGA1UdHwR0MHIwN6A1oDOGMWh0dHA6Ly9j\r\n"
"cmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5jcmwwN6A1oDOG\r\n"
"MWh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5j\r\n"
"cmwwHQYDVR0gBBYwFDAIBgZngQwBAgEwCAYGZ4EMAQICMBAGCSsGAQQBgjcVAQQD\r\n"
"AgEAMA0GCSqGSIb3DQEBDAUAA4IBAQB2oWc93fB8esci/8esixj++N22meiGDjgF\r\n"
"+rA2LUK5IOQOgcUSTGKSqF9lYfAxPjrqPjDCUPHCURv+26ad5P/BYtXtbmtxJWu+\r\n"
"cS5BhMDPPeG3oPZwXRHBJFAkY4O4AF7RIAAUW6EzDflUoDHKv83zOiPfYGcpHc9s\r\n"
"kxAInCedk7QSgXvMARjjOqdakor21DTmNIUotxo8kHv5hwRlGhBJwps6fEVi1Bt0\r\n"
"trpM/3wYxlr473WSPUFZPgP1j519kLpWOJ8z09wxay+Br29irPcBYv0GMXlHqThy\r\n"
"8y4m/HyTQeI2IMvMrQnwqPpY+rLIXyviI2vLoI+4xKE4Rn38ZZ8m\r\n"
"-----END CERTIFICATE-----\r\n";
const char *SPEECH_CERTIFICATE =
"-----BEGIN CERTIFICATE-----\r\n"
"MIIF8zCCBNugAwIBAgIQCq+mxcpjxFFB6jvh98dTFzANBgkqhkiG9w0BAQwFADBh\r\n"
"MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3\r\n"
"d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH\r\n"
"MjAeFw0yMDA3MjkxMjMwMDBaFw0yNDA2MjcyMzU5NTlaMFkxCzAJBgNVBAYTAlVT\r\n"
"MR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xKjAoBgNVBAMTIU1pY3Jv\r\n"
"c29mdCBBenVyZSBUTFMgSXNzdWluZyBDQSAwMTCCAiIwDQYJKoZIhvcNAQEBBQAD\r\n"
"ggIPADCCAgoCggIBAMedcDrkXufP7pxVm1FHLDNA9IjwHaMoaY8arqqZ4Gff4xyr\r\n"
"RygnavXL7g12MPAx8Q6Dd9hfBzrfWxkF0Br2wIvlvkzW01naNVSkHp+OS3hL3W6n\r\n"
"l/jYvZnVeJXjtsKYcXIf/6WtspcF5awlQ9LZJcjwaH7KoZuK+THpXCMtzD8XNVdm\r\n"
"GW/JI0C/7U/E7evXn9XDio8SYkGSM63aLO5BtLCv092+1d4GGBSQYolRq+7Pd1kR\r\n"
"EkWBPm0ywZ2Vb8GIS5DLrjelEkBnKCyy3B0yQud9dpVsiUeE7F5sY8Me96WVxQcb\r\n"
"OyYdEY/j/9UpDlOG+vA+YgOvBhkKEjiqygVpP8EZoMMijephzg43b5Qi9r5UrvYo\r\n"
"o19oR/8pf4HJNDPF0/FJwFVMW8PmCBLGstin3NE1+NeWTkGt0TzpHjgKyfaDP2tO\r\n"
"4bCk1G7pP2kDFT7SYfc8xbgCkFQ2UCEXsaH/f5YmpLn4YPiNFCeeIida7xnfTvc4\r\n"
"7IxyVccHHq1FzGygOqemrxEETKh8hvDR6eBdrBwmCHVgZrnAqnn93JtGyPLi6+cj\r\n"
"WGVGtMZHwzVvX1HvSFG771sskcEjJxiQNQDQRWHEh3NxvNb7kFlAXnVdRkkvhjpR\r\n"
"GchFhTAzqmwltdWhWDEyCMKC2x/mSZvZtlZGY+g37Y72qHzidwtyW7rBetZJAgMB\r\n"
"AAGjggGtMIIBqTAdBgNVHQ4EFgQUDyBd16FXlduSzyvQx8J3BM5ygHYwHwYDVR0j\r\n"
"BBgwFoAUTiJUIBiV5uNu5g/6+rkS7QYXjzkwDgYDVR0PAQH/BAQDAgGGMB0GA1Ud\r\n"
"JQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMHYG\r\n"
"CCsGAQUFBwEBBGowaDAkBggrBgEFBQcwAYYYaHR0cDovL29jc3AuZGlnaWNlcnQu\r\n"
"Y29tMEAGCCsGAQUFBzAChjRodHRwOi8vY2FjZXJ0cy5kaWdpY2VydC5jb20vRGln\r\n"
"aUNlcnRHbG9iYWxSb290RzIuY3J0MHsGA1UdHwR0MHIwN6A1oDOGMWh0dHA6Ly9j\r\n"
"cmwzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5jcmwwN6A1oDOG\r\n"
"MWh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEdsb2JhbFJvb3RHMi5j\r\n"
"cmwwHQYDVR0gBBYwFDAIBgZngQwBAgEwCAYGZ4EMAQICMBAGCSsGAQQBgjcVAQQD\r\n"
"AgEAMA0GCSqGSIb3DQEBDAUAA4IBAQAlFvNh7QgXVLAZSsNR2XRmIn9iS8OHFCBA\r\n"
"WxKJoi8YYQafpMTkMqeuzoL3HWb1pYEipsDkhiMnrpfeYZEA7Lz7yqEEtfgHcEBs\r\n"
"K9KcStQGGZRfmWU07hPXHnFz+5gTXqzCE2PBMlRgVUYJiA25mJPXfB00gDvGhtYa\r\n"
"+mENwM9Bq1B9YYLyLjRtUz8cyGsdyTIG/bBM/Q9jcV8JGqMU/UjAdh1pFyTnnHEl\r\n"
"Y59Npi7F87ZqYYJEHJM2LGD+le8VsHjgeWX2CJQko7klXvcizuZvUEDTjHaQcs2J\r\n"
"+kPgfyMIOY1DMJ21NxOJ2xPRC/wAh/hzSBRVtoAnyuxtkZ4VjIOh\r\n"
"-----END CERTIFICATE-----\r\n";

@ -0,0 +1,69 @@
#pragma once
#include <Arduino.h>
#include <HTTPClient.h>
#include <sfud.h>
#include "config.h"
class FlashStream : public Stream
{
public:
FlashStream()
{
_pos = 0;
_flash_address = 0;
_flash = sfud_get_device_table() + 0;
populateBuffer();
}
virtual size_t write(uint8_t val)
{
return 0;
}
virtual int available()
{
int remaining = BUFFER_SIZE - ((_flash_address - HTTP_TCP_BUFFER_SIZE) + _pos);
int bytes_available = min(HTTP_TCP_BUFFER_SIZE, remaining);
if (bytes_available == 0)
{
bytes_available = -1;
}
return bytes_available;
}
virtual int read()
{
int retVal = _buffer[_pos++];
if (_pos == HTTP_TCP_BUFFER_SIZE)
{
populateBuffer();
}
return retVal;
}
virtual int peek()
{
return _buffer[_pos];
}
private:
void populateBuffer()
{
sfud_read(_flash, _flash_address, HTTP_TCP_BUFFER_SIZE, _buffer);
_flash_address += HTTP_TCP_BUFFER_SIZE;
_pos = 0;
}
size_t _pos;
size_t _flash_address;
const sfud_flash *_flash;
byte _buffer[HTTP_TCP_BUFFER_SIZE];
};

@ -0,0 +1,60 @@
#pragma once
#include <Arduino.h>
#include <sfud.h>
class FlashWriter
{
public:
void init()
{
_flash = sfud_get_device_table() + 0;
_sfudBufferSize = _flash->chip.erase_gran;
_sfudBuffer = new byte[_sfudBufferSize];
_sfudBufferPos = 0;
_sfudBufferWritePos = 0;
}
void reset()
{
_sfudBufferPos = 0;
_sfudBufferWritePos = 0;
}
void writeSfudBuffer(byte b)
{
_sfudBuffer[_sfudBufferPos++] = b;
if (_sfudBufferPos == _sfudBufferSize)
{
sfud_erase_write(_flash, _sfudBufferWritePos, _sfudBufferSize, _sfudBuffer);
_sfudBufferWritePos += _sfudBufferSize;
_sfudBufferPos = 0;
}
}
void flushSfudBuffer()
{
if (_sfudBufferPos > 0)
{
sfud_erase_write(_flash, _sfudBufferWritePos, _sfudBufferSize, _sfudBuffer);
_sfudBufferWritePos += _sfudBufferSize;
_sfudBufferPos = 0;
}
}
void writeSfudBuffer(byte *b, size_t len)
{
for (size_t i = 0; i < len; ++i)
{
writeSfudBuffer(b[i]);
}
}
private:
byte *_sfudBuffer;
size_t _sfudBufferSize;
size_t _sfudBufferPos;
size_t _sfudBufferWritePos;
const sfud_flash *_flash;
};

@ -0,0 +1,53 @@
#pragma once
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <WiFiClient.h>
#include "config.h"
class LanguageUnderstanding
{
public:
int GetTimerDuration(String text)
{
DynamicJsonDocument doc(1024);
doc["text"] = text;
String body;
serializeJson(doc, body);
HTTPClient httpClient;
httpClient.begin(_client, TEXT_TO_TIMER_FUNCTION_URL);
int httpResponseCode = httpClient.POST(body);
int seconds = 0;
if (httpResponseCode == 200)
{
String result = httpClient.getString();
Serial.println(result);
DynamicJsonDocument doc(1024);
deserializeJson(doc, result.c_str());
JsonObject obj = doc.as<JsonObject>();
seconds = obj["seconds"].as<int>();
}
else
{
Serial.print("Failed to understand text - error ");
Serial.println(httpResponseCode);
}
httpClient.end();
return seconds;
}
private:
WiFiClient _client;
};
LanguageUnderstanding languageUnderstanding;

@ -0,0 +1,133 @@
#include <Arduino.h>
#include <arduino-timer.h>
#include <rpcWiFi.h>
#include <sfud.h>
#include <SPI.h>
#include "config.h"
#include "language_understanding.h"
#include "mic.h"
#include "speech_to_text.h"
#include "text_to_speech.h"
#include "text_translator.h"
void connectWiFi()
{
while (WiFi.status() != WL_CONNECTED)
{
Serial.println("Connecting to WiFi..");
WiFi.begin(SSID, PASSWORD);
delay(500);
}
Serial.println("Connected!");
}
void setup()
{
Serial.begin(9600);
while (!Serial)
; // Wait for Serial to be ready
delay(1000);
connectWiFi();
while (!(sfud_init() == SFUD_SUCCESS))
;
sfud_qspi_fast_read_enable(sfud_get_device(SFUD_W25Q32_DEVICE_INDEX), 2);
pinMode(WIO_KEY_C, INPUT_PULLUP);
mic.init();
speechToText.init();
textToSpeech.init();
Serial.println("Ready.");
}
auto timer = timer_create_default();
void say(String text)
{
text = textTranslator.translateText(text, SERVER_LANGUAGE, LANGUAGE);
Serial.println(text);
textToSpeech.convertTextToSpeech(text);
}
bool timerExpired(void *announcement)
{
say((char *)announcement);
return false;
}
void processAudio()
{
String text = speechToText.convertSpeechToText();
text = textTranslator.translateText(text, LANGUAGE, SERVER_LANGUAGE);
Serial.println(text);
int total_seconds = languageUnderstanding.GetTimerDuration(text);
if (total_seconds == 0)
{
return;
}
int minutes = total_seconds / 60;
int seconds = total_seconds % 60;
String begin_message;
if (minutes > 0)
{
begin_message += minutes;
begin_message += " minute ";
}
if (seconds > 0)
{
begin_message += seconds;
begin_message += " second ";
}
begin_message += "timer started.";
String end_message("Times up on your ");
if (minutes > 0)
{
end_message += minutes;
end_message += " minute ";
}
if (seconds > 0)
{
end_message += seconds;
end_message += " second ";
}
end_message += "timer.";
say(begin_message);
timer.in(total_seconds * 1000, timerExpired, (void *)(end_message.c_str()));
}
void loop()
{
if (digitalRead(WIO_KEY_C) == LOW && !mic.isRecording())
{
Serial.println("Starting recording...");
mic.startRecording();
}
if (!mic.isRecording() && mic.isRecordingReady())
{
Serial.println("Finished recording");
processAudio();
mic.reset();
}
timer.tick();
}

@ -0,0 +1,242 @@
#pragma once
#include <Arduino.h>
#include "config.h"
#include "flash_writer.h"
class Mic
{
public:
Mic()
{
_isRecording = false;
_isRecordingReady = false;
}
void startRecording()
{
_isRecording = true;
_isRecordingReady = false;
}
bool isRecording()
{
return _isRecording;
}
bool isRecordingReady()
{
return _isRecordingReady;
}
void init()
{
analogReference(AR_INTERNAL2V23);
_writer.init();
initBufferHeader();
configureDmaAdc();
}
void reset()
{
_isRecordingReady = false;
_isRecording = false;
_writer.reset();
initBufferHeader();
}
void dmaHandler()
{
static uint8_t count = 0;
if (DMAC->Channel[1].CHINTFLAG.bit.SUSP)
{
DMAC->Channel[1].CHCTRLB.reg = DMAC_CHCTRLB_CMD_RESUME;
DMAC->Channel[1].CHINTFLAG.bit.SUSP = 1;
if (count)
{
audioCallback(_adc_buf_0, ADC_BUF_LEN);
}
else
{
audioCallback(_adc_buf_1, ADC_BUF_LEN);
}
count = (count + 1) % 2;
}
}
private:
volatile bool _isRecording;
volatile bool _isRecordingReady;
FlashWriter _writer;
typedef struct
{
uint16_t btctrl;
uint16_t btcnt;
uint32_t srcaddr;
uint32_t dstaddr;
uint32_t descaddr;
} dmacdescriptor;
// Globals - DMA and ADC
volatile dmacdescriptor _wrb[DMAC_CH_NUM] __attribute__((aligned(16)));
dmacdescriptor _descriptor_section[DMAC_CH_NUM] __attribute__((aligned(16)));
dmacdescriptor _descriptor __attribute__((aligned(16)));
void configureDmaAdc()
{
// Configure DMA to sample from ADC at a regular interval (triggered by timer/counter)
DMAC->BASEADDR.reg = (uint32_t)_descriptor_section; // Specify the location of the descriptors
DMAC->WRBADDR.reg = (uint32_t)_wrb; // Specify the location of the write back descriptors
DMAC->CTRL.reg = DMAC_CTRL_DMAENABLE | DMAC_CTRL_LVLEN(0xf); // Enable the DMAC peripheral
DMAC->Channel[1].CHCTRLA.reg = DMAC_CHCTRLA_TRIGSRC(TC5_DMAC_ID_OVF) | // Set DMAC to trigger on TC5 timer overflow
DMAC_CHCTRLA_TRIGACT_BURST; // DMAC burst transfer
_descriptor.descaddr = (uint32_t)&_descriptor_section[1]; // Set up a circular descriptor
_descriptor.srcaddr = (uint32_t)&ADC1->RESULT.reg; // Take the result from the ADC0 RESULT register
_descriptor.dstaddr = (uint32_t)_adc_buf_0 + sizeof(uint16_t) * ADC_BUF_LEN; // Place it in the adc_buf_0 array
_descriptor.btcnt = ADC_BUF_LEN; // Beat count
_descriptor.btctrl = DMAC_BTCTRL_BEATSIZE_HWORD | // Beat size is HWORD (16-bits)
DMAC_BTCTRL_DSTINC | // Increment the destination address
DMAC_BTCTRL_VALID | // Descriptor is valid
DMAC_BTCTRL_BLOCKACT_SUSPEND; // Suspend DMAC channel 0 after block transfer
memcpy(&_descriptor_section[0], &_descriptor, sizeof(_descriptor)); // Copy the descriptor to the descriptor section
_descriptor.descaddr = (uint32_t)&_descriptor_section[0]; // Set up a circular descriptor
_descriptor.srcaddr = (uint32_t)&ADC1->RESULT.reg; // Take the result from the ADC0 RESULT register
_descriptor.dstaddr = (uint32_t)_adc_buf_1 + sizeof(uint16_t) * ADC_BUF_LEN; // Place it in the adc_buf_1 array
_descriptor.btcnt = ADC_BUF_LEN; // Beat count
_descriptor.btctrl = DMAC_BTCTRL_BEATSIZE_HWORD | // Beat size is HWORD (16-bits)
DMAC_BTCTRL_DSTINC | // Increment the destination address
DMAC_BTCTRL_VALID | // Descriptor is valid
DMAC_BTCTRL_BLOCKACT_SUSPEND; // Suspend DMAC channel 0 after block transfer
memcpy(&_descriptor_section[1], &_descriptor, sizeof(_descriptor)); // Copy the descriptor to the descriptor section
// Configure NVIC
NVIC_SetPriority(DMAC_1_IRQn, 0); // Set the Nested Vector Interrupt Controller (NVIC) priority for DMAC1 to 0 (highest)
NVIC_EnableIRQ(DMAC_1_IRQn); // Connect DMAC1 to Nested Vector Interrupt Controller (NVIC)
// Activate the suspend (SUSP) interrupt on DMAC channel 1
DMAC->Channel[1].CHINTENSET.reg = DMAC_CHINTENSET_SUSP;
// Configure ADC
ADC1->INPUTCTRL.bit.MUXPOS = ADC_INPUTCTRL_MUXPOS_AIN12_Val; // Set the analog input to ADC0/AIN2 (PB08 - A4 on Metro M4)
while (ADC1->SYNCBUSY.bit.INPUTCTRL)
; // Wait for synchronization
ADC1->SAMPCTRL.bit.SAMPLEN = 0x00; // Set max Sampling Time Length to half divided ADC clock pulse (2.66us)
while (ADC1->SYNCBUSY.bit.SAMPCTRL)
; // Wait for synchronization
ADC1->CTRLA.reg = ADC_CTRLA_PRESCALER_DIV128; // Divide Clock ADC GCLK by 128 (48MHz/128 = 375kHz)
ADC1->CTRLB.reg = ADC_CTRLB_RESSEL_12BIT | // Set ADC resolution to 12 bits
ADC_CTRLB_FREERUN; // Set ADC to free run mode
while (ADC1->SYNCBUSY.bit.CTRLB)
; // Wait for synchronization
ADC1->CTRLA.bit.ENABLE = 1; // Enable the ADC
while (ADC1->SYNCBUSY.bit.ENABLE)
; // Wait for synchronization
ADC1->SWTRIG.bit.START = 1; // Initiate a software trigger to start an ADC conversion
while (ADC1->SYNCBUSY.bit.SWTRIG)
; // Wait for synchronization
// Enable DMA channel 1
DMAC->Channel[1].CHCTRLA.bit.ENABLE = 1;
// Configure Timer/Counter 5
GCLK->PCHCTRL[TC5_GCLK_ID].reg = GCLK_PCHCTRL_CHEN | // Enable perhipheral channel for TC5
GCLK_PCHCTRL_GEN_GCLK1; // Connect generic clock 0 at 48MHz
TC5->COUNT16.WAVE.reg = TC_WAVE_WAVEGEN_MFRQ; // Set TC5 to Match Frequency (MFRQ) mode
TC5->COUNT16.CC[0].reg = 3000 - 1; // Set the trigger to 16 kHz: (4Mhz / 16000) - 1
while (TC5->COUNT16.SYNCBUSY.bit.CC0)
; // Wait for synchronization
// Start Timer/Counter 5
TC5->COUNT16.CTRLA.bit.ENABLE = 1; // Enable the TC5 timer
while (TC5->COUNT16.SYNCBUSY.bit.ENABLE)
; // Wait for synchronization
}
uint16_t _adc_buf_0[ADC_BUF_LEN];
uint16_t _adc_buf_1[ADC_BUF_LEN];
// WAV files have a header. This struct defines that header
struct wavFileHeader
{
char riff[4]; /* "RIFF" */
long flength; /* file length in bytes */
char wave[4]; /* "WAVE" */
char fmt[4]; /* "fmt " */
long chunk_size; /* size of FMT chunk in bytes (usually 16) */
short format_tag; /* 1=PCM, 257=Mu-Law, 258=A-Law, 259=ADPCM */
short num_chans; /* 1=mono, 2=stereo */
long srate; /* Sampling rate in samples per second */
long bytes_per_sec; /* bytes per second = srate*bytes_per_samp */
short bytes_per_samp; /* 2=16-bit mono, 4=16-bit stereo */
short bits_per_samp; /* Number of bits per sample */
char data[4]; /* "data" */
long dlength; /* data length in bytes (filelength - 44) */
};
void initBufferHeader()
{
wavFileHeader wavh;
strncpy(wavh.riff, "RIFF", 4);
strncpy(wavh.wave, "WAVE", 4);
strncpy(wavh.fmt, "fmt ", 4);
strncpy(wavh.data, "data", 4);
wavh.chunk_size = 16;
wavh.format_tag = 1; // PCM
wavh.num_chans = 1; // mono
wavh.srate = RATE;
wavh.bytes_per_sec = (RATE * 1 * 16 * 1) / 8;
wavh.bytes_per_samp = 2;
wavh.bits_per_samp = 16;
wavh.dlength = RATE * 2 * 1 * 16 / 2;
wavh.flength = wavh.dlength + 44;
_writer.writeSfudBuffer((byte *)&wavh, 44);
}
void audioCallback(uint16_t *buf, uint32_t buf_len)
{
static uint32_t idx = 44;
if (_isRecording)
{
for (uint32_t i = 0; i < buf_len; i++)
{
int16_t audio_value = ((int16_t)buf[i] - 2048) * 16;
_writer.writeSfudBuffer(audio_value & 0xFF);
_writer.writeSfudBuffer((audio_value >> 8) & 0xFF);
}
idx += buf_len;
if (idx >= BUFFER_SIZE)
{
_writer.flushSfudBuffer();
idx = 44;
_isRecording = false;
_isRecordingReady = true;
}
}
}
};
Mic mic;
void DMAC_1_Handler()
{
mic.dmaHandler();
}

@ -0,0 +1,102 @@
#pragma once
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <WiFiClientSecure.h>
#include "config.h"
#include "flash_stream.h"
class SpeechToText
{
public:
void init()
{
_token_client.setCACert(TOKEN_CERTIFICATE);
_speech_client.setCACert(SPEECH_CERTIFICATE);
_access_token = getAccessToken();
}
String convertSpeechToText()
{
char url[128];
sprintf(url, SPEECH_URL, SPEECH_LOCATION, LANGUAGE);
HTTPClient httpClient;
httpClient.begin(_speech_client, url);
httpClient.addHeader("Authorization", String("Bearer ") + _access_token);
httpClient.addHeader("Content-Type", String("audio/wav; codecs=audio/pcm; samplerate=") + String(RATE));
httpClient.addHeader("Accept", "application/json;text/xml");
Serial.println("Sending speech...");
FlashStream stream;
int httpResponseCode = httpClient.sendRequest("POST", &stream, BUFFER_SIZE);
Serial.println("Speech sent!");
String text = "";
if (httpResponseCode == 200)
{
String result = httpClient.getString();
Serial.println(result);
DynamicJsonDocument doc(1024);
deserializeJson(doc, result.c_str());
JsonObject obj = doc.as<JsonObject>();
text = obj["DisplayText"].as<String>();
}
else if (httpResponseCode == 401)
{
Serial.println("Access token expired, trying again with a new token");
_access_token = getAccessToken();
return convertSpeechToText();
}
else
{
Serial.print("Failed to convert text to speech - error ");
Serial.println(httpResponseCode);
}
httpClient.end();
return text;
}
private:
String getAccessToken()
{
char url[128];
sprintf(url, TOKEN_URL, SPEECH_LOCATION);
HTTPClient httpClient;
httpClient.begin(_token_client, url);
httpClient.addHeader("Ocp-Apim-Subscription-Key", SPEECH_API_KEY);
int httpResultCode = httpClient.POST("{}");
if (httpResultCode != 200)
{
Serial.println("Error getting access token, trying again...");
delay(10000);
return getAccessToken();
}
Serial.println("Got access token.");
String result = httpClient.getString();
httpClient.end();
return result;
}
WiFiClientSecure _token_client;
WiFiClientSecure _speech_client;
String _access_token;
};
SpeechToText speechToText;

@ -0,0 +1,86 @@
#pragma once
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <Seeed_FS.h>
#include <SD/Seeed_SD.h>
#include <WiFiClient.h>
#include <WiFiClientSecure.h>
#include "config.h"
class TextToSpeech
{
public:
void init()
{
DynamicJsonDocument doc(1024);
doc["language"] = LANGUAGE;
String body;
serializeJson(doc, body);
HTTPClient httpClient;
httpClient.begin(_client, GET_VOICES_FUNCTION_URL);
int httpResponseCode = httpClient.POST(body);
if (httpResponseCode == 200)
{
String result = httpClient.getString();
Serial.println(result);
DynamicJsonDocument doc(1024);
deserializeJson(doc, result.c_str());
JsonArray obj = doc.as<JsonArray>();
_voice = obj[0].as<String>();
Serial.print("Using voice ");
Serial.println(_voice);
}
else
{
Serial.print("Failed to get voices - error ");
Serial.println(httpResponseCode);
}
httpClient.end();
}
void convertTextToSpeech(String text)
{
DynamicJsonDocument doc(1024);
doc["language"] = LANGUAGE;
doc["voice"] = _voice;
doc["text"] = text;
String body;
serializeJson(doc, body);
HTTPClient httpClient;
httpClient.begin(_client, TEXT_TO_SPEECH_FUNCTION_URL);
int httpResponseCode = httpClient.POST(body);
if (httpResponseCode == 200)
{
File wav_file = SD.open("SPEECH.WAV", FILE_WRITE);
httpClient.writeToStream(&wav_file);
wav_file.close();
}
else
{
Serial.print("Failed to get speech - error ");
Serial.println(httpResponseCode);
}
httpClient.end();
}
private:
WiFiClient _client;
String _voice;
};
TextToSpeech textToSpeech;

@ -0,0 +1,58 @@
#pragma once
#include <Arduino.h>
#include <ArduinoJson.h>
#include <HTTPClient.h>
#include <WiFiClient.h>
#include "config.h"
class TextTranslator
{
public:
String translateText(String text, String from_language, String to_language)
{
DynamicJsonDocument doc(1024);
doc["text"] = text;
doc["from_language"] = from_language;
doc["to_language"] = to_language;
String body;
serializeJson(doc, body);
Serial.print("Translating ");
Serial.print(text);
Serial.print(" from ");
Serial.print(from_language);
Serial.print(" to ");
Serial.println(to_language);
HTTPClient httpClient;
httpClient.begin(_client, TRANSLATE_FUNCTION_URL);
int httpResponseCode = httpClient.POST(body);
String translated_text = "";
if (httpResponseCode == 200)
{
translated_text = httpClient.getString();
Serial.print("Translated: ");
Serial.println(translated_text);
}
else
{
Serial.print("Failed to translate text - error ");
Serial.println(httpResponseCode);
}
httpClient.end();
return translated_text;
}
private:
WiFiClient _client;
};
TextTranslator textTranslator;

@ -0,0 +1,11 @@
This directory is intended for PlatformIO Unit Testing and project tests.
Unit Testing is a software testing method by which individual units of
source code, sets of one or more MCU program modules together with associated
control data, usage procedures, and operating procedures, are tested to
determine whether they are fit for use. Unit testing finds problems early
in the development cycle.
More information about PlatformIO Unit Testing:
- https://docs.platformio.org/page/plus/unit-testing.html

@ -8,7 +8,7 @@ The speech service REST API doesn't support direct translations, instead you can
### Task - use the translator resource to translate text
1. Your smart timer will have 2 languages set - the language of the server that was used to train LUIS, and the language spoken by the user. Update the `language` variable to be the language that will be spoken by the used, and add a new variable called `server_language` for the language used to train LUIS:
1. Your smart timer will have 2 languages set - the language of the server that was used to train LUIS (the same language is also used to build the messages to speak to the user), and the language spoken by the user. Update the `language` variable to be the language that will be spoken by the user, and add a new variable called `server_language` for the language used to train LUIS:
```python
language = '<user language>'
