Merge branch 'main' into arabic

Zina Kamel, committed via GitHub
commit 30522c46fe

@ -5,6 +5,7 @@
"Geospatial", "Geospatial",
"Kbps", "Kbps",
"Mbps", "Mbps",
"SSML",
"Seeed", "Seeed",
"Siri", "Siri",
"Twilio", "Twilio",

@ -0,0 +1,222 @@
# Introduction to the Internet of Things (IoT)

![A sketchnote overview of this lesson](../../../sketchnotes/lesson-1.png)

> Sketchnote by [Nitya Narasimhan](https://github.com/nitya). Click the image for a larger version.

## Pre-lecture quiz

[Pre-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/1)

## Introduction

This lesson covers some of the introductory topics around the Internet of Things (IoT), and teaches you how to set up your hardware.

This lesson will cover:

* [What is the Internet of Things?](#what-is-the-internet-of-things)
* [IoT devices](#iot-devices)
* [Set up your device](#set-up-your-device)
* [Applications of IoT](#applications-of-iot)
* [Examples of IoT devices you may have around you](#examples-of-iot-devices-you-may-have-around-you)

## What is the Internet of Things?

The term 'Internet of Things' (IoT) was coined in 1999 by [Kevin Ashton](https://wikipedia.org/wiki/Kevin_Ashton) to describe connecting the Internet to the physical world via sensors. Since then, the term has been used to describe any device that interacts with the physical world around it. These devices can gather data using sensors, or perform tasks in the physical world using actuators - devices that do things such as turning on switches or LEDs. Actuators are usually connected to other devices or to a network.

> **Sensors** gather data from the world, such as speed, temperature, or location.
>
> **Actuators** convert electrical signals into actions, such as turning on a light, making a sound, or sending control signals to other hardware.

IoT is more than just devices - it also includes cloud services that can process sensor data, or send requests to actuators connected to IoT devices. It also includes devices without connectivity, often referred to as 'edge devices', which are capable of processing and responding to sensor data themselves using cloud-based AI models.

IoT is a fast-growing technology field. Experts estimated that by the end of 2020, 30 billion IoT devices were connected to the Internet, and estimate that by 2025 IoT devices will be gathering around 80 ZB (80 zettabytes, or 80 trillion GB) of data. That's a huge amount!

![A graph of active IoT devices over time, showing an upward trend from under 5 billion in 2015 to over 30 billion in 2025](../../../images/connected-iot-devices.svg)

✅ Do a little research: how much of the data gathered by IoT devices is actually used, and how much is wasted? Why is so much data ignored?

This data is critical to the success of IoT. To be a successful IoT developer, you need to understand the data you need to gather, how to gather it, how to use it to make decisions, and, if necessary, how to use those decisions to interact with the physical world.
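Most IoT applications boil down to a loop of gather, decide, act. A minimal sketch in Python - `read_light_level` and `set_led` are hypothetical placeholders standing in for whatever sensor and actuator code a real device would use:

```python
import time

def read_light_level():
    # Hypothetical sensor read - a real device would query a light sensor here
    return 450

def set_led(on):
    # Hypothetical actuator - a real device would drive an LED here
    print('LED on' if on else 'LED off')

while True:
    light = read_light_level()   # gather data from a sensor
    set_led(light < 300)         # decide, then act on the physical world
    time.sleep(1)                # repeat once a second
```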
## IoT devices

The **T** in IoT stands for **Things** - devices that interact with the physical world around them, either gathering data using sensors or performing tasks in the physical world using actuators.

Devices built for production or commercial use, such as fitness trackers or machine controllers, are usually custom-made. They use custom circuit boards - sometimes even custom processors - designed to meet the needs of a particular task, whether that's being small enough to wear on a wrist, or rugged enough to survive the high temperatures, pressures, or vibrations of a factory environment.

As an IoT developer, whether you are learning about IoT or prototyping a device, you will need to start with a developer kit. These are general-purpose IoT devices designed for developers, often with features production devices don't have, such as external pins for connecting sensors and actuators, hardware to support debugging, or extra resources that would add unnecessary cost to a production run.

These developer kits usually fall into two categories - microcontrollers and single-board computers. We'll introduce them here, and go into more detail in the next lesson.

> 💁 Your phone can also be considered a general-purpose IoT device, with sensors and actuators, and with different apps using them in different ways with different cloud services. You can even find a few IoT tutorials that use a phone app as an IoT device.

### Microcontrollers

A microcontroller (MCU) is a small computer. It consists of:

🧠 One or more central processing units (CPUs) - the 'brain' of the microcontroller that runs your program

💾 Memory (RAM and program memory) - where your program, data, and variables are stored

🔌 Programmable input/output (I/O) connections - to communicate with peripherals such as sensors and actuators

Microcontrollers are typically low-cost computing devices; the average price of those used in custom hardware has fallen to around US$0.50, with some devices as cheap as US$0.03. Developer kits can start as low as US$4, but the more features you add, the higher the cost. The [Wio Terminal](https://www.seeedstudio.com/Wio-Terminal-p-4509.html), a microcontroller from [Seeed studios](https://www.seeedstudio.com) with sensors, actuators, Wi-Fi, and a screen, costs around US$30 all in.

![A Wio Terminal](../../../images/wio-terminal.png)

> 💁 When searching for microcontrollers online, beware of the term **MCU** - it will bring up a lot of results about the Marvel Cinematic Universe rather than microcontrollers.

Microcontrollers are designed to be programmed to do a limited number of very specific tasks, unlike more general-purpose computers. Except in a few very specific scenarios, you can't connect a monitor, keyboard, and mouse and use them for general-purpose tasks.

Microcontroller developer kits usually include additional sensors and actuators on board. Most will have at least one LED you can program, along with other features such as standard sockets for connecting more sensors or actuators, or built-in sensors (usually the most common ones, such as temperature). Some microcontrollers have built-in wireless connectivity such as Bluetooth or Wi-Fi, or additional microcontrollers on the board to add it.

> 💁 Microcontrollers are usually programmed in C or C++.

### Single-board computers

A single-board computer is a small computing device with all the elements of a computer contained on a single small board. These devices have specifications close to those of a desktop or laptop PC, run a full operating system, but are smaller, use less power, and cost a lot less.

![A Raspberry Pi 4](../../../images/raspberry-pi-4.jpg)

***Raspberry Pi 4. Michael Henzler / [Wikimedia Commons](https://commons.wikimedia.org/wiki/Main_Page) / [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)***

The Raspberry Pi is one of the most popular single-board computers.

Like a microcontroller, single-board computers have a CPU, memory, and input/output pins, but they have additional features such as a graphics chip so you can connect a monitor, audio outputs, and USB ports for connecting keyboards, mice, and other standard USB devices such as webcams or external storage. Programs are stored on an SD card or hard drive along with an operating system, instead of on a memory chip built into the board.

> 🎓 You can think of a single-board computer as a smaller, cheaper version of the PC or Mac you are reading this on, with the addition of general-purpose input/output (GPIO) pins that let you interact with sensors and actuators.

Single-board computers have all the elements of a full computer, so you can program them in any language. We'll typically program IoT devices in Python.
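For example, on a Raspberry Pi a few lines of Python are enough to drive an actuator. This is a minimal sketch using the gpiozero library that ships with Raspberry Pi OS, assuming an LED is wired to GPIO pin 17:

```python
from time import sleep

from gpiozero import LED

led = LED(17)  # assumes an LED wired to GPIO pin 17

# Blink the LED once a second - a tiny actuator controlled from Python
while True:
    led.on()
    sleep(0.5)
    led.off()
    sleep(0.5)
```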
### Hardware choices for the rest of the lessons

All the subsequent lessons include assignments in which you use an IoT device to interact with the physical world and communicate with the cloud. Each lesson supports three device choices - an Arduino (via a Seeed Studios Wio Terminal), or a single-board computer, either a physical device (a Raspberry Pi 4) or a virtual single-board computer running on your PC.

You can read about the hardware needed to complete the assignments in the [hardware guide](../../../hardware.md).

> 💁 You don't need to buy any IoT hardware to complete the assignments - everything can be done using a virtual single-board computer.

Which hardware you choose is up to you, depending on what you have available at home or school, and which programming language you know or want to learn. Both hardware variants use the same sensor ecosystem, so if you want to change your choice part way through, you won't have to replace the majority of the kit. Learning with the virtual single-board computer is almost identical to learning with a Raspberry Pi, and you can transfer most of your code to a Pi if you later get a device and sensors.

### Arduino developer kit

If you are interested in learning microcontroller development, you can complete the assignments using an Arduino device. You will need a basic understanding of C or C++ programming, as the lessons will only teach code that is relevant to the Arduino framework, the sensors and actuators being used, and the libraries that interact with the cloud.

The assignments will use [Visual Studio Code](https://code.visualstudio.com/?WT.mc_id=academic-17441-jabenn) with the [PlatformIO extension for microcontroller development](https://platformio.org). You can also use the Arduino IDE if you are familiar with it, but instructions will not be provided.

### Single-board computer developer kit

If you are interested in learning IoT development using a single-board computer, you can complete the assignments using a Raspberry Pi, or a virtual device running on your computer.

You will need a basic understanding of Python, as the lessons will only teach code that is relevant to the sensors and actuators being used, and the libraries that interact with the cloud.

> 💁 If you want to learn how to program in Python, check out the following two video series:
>
> * [Python for beginners](https://channel9.msdn.com/Series/Intro-to-Python-Development?WT.mc_id=academic-17441-jabenn)
> * [More Python for beginners](https://channel9.msdn.com/Series/More-Python-for-Beginners?WT.mc_id=academic-7372-jabenn)

The assignments will use [Visual Studio Code](https://code.visualstudio.com/?WT.mc_id=academic-17441-jabenn).

If you are using a Raspberry Pi, you can run your Pi either with the full desktop Raspberry Pi OS, coding directly on the Pi using [the Raspberry Pi OS version of VS Code](https://code.visualstudio.com/docs/setup/raspberry-pi?WT.mc_id=academic-17441-jabenn), or as a headless device, coding from your computer with VS Code and the [Remote SSH extension](https://code.visualstudio.com/docs/remote/ssh?WT.mc_id=academic-17441-jabenn), which lets you connect to your Pi and edit, debug, and run your code as if you were coding on it directly.

If you choose to use the virtual device, you will code directly on your computer. Instead of reading real sensors and actuators, you will use simulation tools to define sensor values and see actuator results on screen.

## Set up your device

Before you can start programming your IoT device, a small amount of setup is needed. Follow the relevant instructions below depending on which device you will be using.

> 💁 If you don't yet have a device, refer to the [hardware guide](../../../hardware.md) to help decide which device you are going to use and what additional hardware you need to buy. You don't have to buy hardware, as all the projects can be run on virtual hardware.

These instructions include links to third-party websites from the creators of the hardware or tools you will be using, so that you are always following the most up-to-date instructions for the various tools and hardware.

Work through the relevant guide to set up your device and complete a 'Hello World' project. This will be the first step in building an IoT nightlight over the four lessons of this getting-started section.

* [Arduino - Wio Terminal](wio-terminal.md)
* [Single-board computer - Raspberry Pi](pi.md)
* [Single-board computer - Virtual device](virtual-device.md)

## Applications of IoT

IoT covers a huge range of use cases, falling into a few broad groups:

* Consumer IoT
* Commercial IoT
* Industrial IoT
* Infrastructure IoT

✅ Do a little research: for each of the areas below, find a concrete example that isn't covered in the content.

### Consumer IoT

Consumer IoT refers to IoT devices that consumers buy and use around the home. Some of these devices are incredibly useful, such as smart speakers, smart heating, and robotic vacuum cleaners. Others have more questionable use cases, such as voice-controlled faucets - you can't turn them off by voice, because once the water is running the voice control can't hear you over the sound of the water.

Consumer IoT devices empower people to achieve more in their surroundings, especially the 1 billion people around the world with a disability. Robotic vacuum cleaners can provide clean floors to people with mobility issues who cannot clean themselves, voice-controlled ovens allow people with limited vision or motor control to heat their ovens with only their voice, and health monitors allow patients to track chronic conditions themselves, with regular, more detailed updates. These devices are becoming so ubiquitous that even young children use them every day - for example, students doing their schooling from home during the COVID pandemic, setting timers on smart home devices to track their schoolwork, or setting alarms to remind them of upcoming class meetings.

✅ What consumer IoT devices do you have on your person or in your home?

### Commercial IoT

Commercial IoT covers the use of IoT in the workplace. In an office, there may be occupancy sensors and motion detectors to manage lighting and heating, turning them off when they are not needed, to save money and reduce carbon emissions. In a factory, IoT devices can monitor for safety hazards such as workers not wearing hard hats, or noise that has reached dangerous levels. In retail, IoT devices can measure the temperature of cold storage, alerting the shop owner if a fridge is outside the ideal range, or monitor items on shelves, alerting employees to restock products that have been bought. The transport industry also relies more and more on IoT devices to monitor vehicle locations, track on-road mileage for road-user charging, log driver working hours and rest times, and notify staff when a truck is approaching a warehouse so loading or unloading can be prepared for.

✅ What commercial IoT devices does your school or workplace have?

### Industrial IoT (IIoT)

Industrial IoT, or IIoT, is the use of IoT devices to control and manage machinery at scale. This covers a lot of use cases, from factories to digital agriculture.

Factories use IoT devices in many different ways. Machinery can be monitored with multiple sensors (temperature, vibration, rotation speed, and so on). This data can then be watched, and the machine stopped if it goes outside of certain tolerances - if it runs too hot, for example. This data can also be gathered and analyzed, letting AI models look at the data leading up to a failure and use it to predict other failures before they happen - this is called 'predictive maintenance'.
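The simplest version of the tolerance check described above fits in a few lines. A minimal sketch, with hypothetical helpers standing in for the real sensor and machine-control code:

```python
MAX_TEMPERATURE_C = 80.0  # assumed tolerance for this sketch

def read_machine_temperature():
    # Hypothetical helper - a real system would read an attached sensor
    return 84.2

def stop_machine():
    # Hypothetical helper - a real system would signal a controller
    print('Machine stopped: temperature out of tolerance')

if read_machine_temperature() > MAX_TEMPERATURE_C:
    stop_machine()
```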
Digital agriculture is essential if we are to feed a growing population, especially for the 2 billion people in 500 million households who survive on [subsistence farming](https://wikipedia.org/wiki/Subsistence_agriculture). Digital agriculture can range from sensors costing a few dollars to massive commercial startups. To start with, a farmer can monitor temperatures and use [growing degree days (GDD)](https://wikipedia.org/wiki/Growing_degree-day) to predict when a crop will be ready for harvest. Next, to make sure plants get enough water without wasting too much, they can connect soil moisture monitoring. Finally, farmers can go further, using drones, satellite data, and AI to monitor crop growth, disease, and soil quality over large areas of farmland.

✅ What other IoT devices could help farmers?

### Infrastructure IoT

Infrastructure IoT is the monitoring and control of the local and global infrastructure that people use every day.

[Smart cities](https://wikipedia.org/wiki/Smart_city) are urban areas that use IoT devices to gather data about the city, then use it to improve how the city is run. These cities usually rely on collaborations between local governments, academia, and local businesses, monitoring and managing everything from transport to pollution. One example is Copenhagen, the capital of Denmark, where air pollution matters a lot to residents, so the city measures it and uses the data to provide information on the cleanest cycling routes and walking paths.

[Smart power grids](https://wikipedia.org/wiki/Smart_grid) allow better analysis of power demand by gathering usage data from individual homes. This data can inform decisions at a national level, including where to build new power stations. It can also inform our personal decisions, giving us clear insight into how much power we are using and when we are using it, along with suggestions to reduce waste, such as charging electric cars at night.

✅ If you could add IoT devices to measure anything where you live, what would it be?

## Examples of IoT devices you may have around you

You'd be surprised how many IoT devices you have around you. I am writing this lesson at home, and I have the following devices around me connected to the Internet with smart features such as app control, voice control, and the ability to send data to my phone:

* Multiple smart speakers
* A fridge, dishwasher, oven, and microwave
* An electricity monitor for solar panels
* Smart plugs
* A video doorbell and security cameras
* A smart thermostat with multiple smart room sensors
* A garage door opener
* Home entertainment systems and voice-controlled TVs
* Lights
* Fitness and health trackers

All of these devices have sensors and/or actuators and talk to the Internet. From my phone, I can tell if my garage door is still open, then ask my smart speaker to close it for me. I can even put it on a timer, so that if it's still open at night it closes automatically. Whenever my doorbell rings, I can see from my phone who is at the door, no matter where in the world I am, and speak to them via the speaker and microphone built into the doorbell. I can monitor my blood sugar, heart rate, and sleep patterns, and use trends in the data to improve my health. I can control my lights via the cloud, and sit in the dark when my Internet connection goes down.

---

## 🚀 Challenge

List the IoT devices in your home, school, or workplace - there may be more than you think!

## Post-lecture quiz

[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/2)

## Review & self study

Read up on the successes and failures of consumer IoT projects. Search news sites for articles about failures, such as privacy issues, hardware issues, or problems caused by a lack of connectivity.

Some examples:

* The Twitter account **[Internet of Sh*t](https://twitter.com/internetofshit)** *(profanity warning)* has some good examples of consumer IoT failures.
* [c|net - My Apple Watch saved my life: 5 people share their stories](https://www.cnet.com/news/apple-watch-lifesaving-health-features-read-5-peoples-stories/)
* [c|net - ADT technician pleads guilty to spying on customer camera feeds for years](https://www.cnet.com/news/adt-home-security-technician-pleads-guilty-to-spying-on-customer-camera-feeds-for-years/) *(trigger warning: non-consensual surveillance)*

## Assignment

[Investigate an IoT project](assignment.md)

@ -0,0 +1,13 @@
# Investigate an IoT project

## Instructions

From smart farms to smart cities, and across healthcare monitoring, transport, and public use, many IoT projects, large and small, are being rolled out around the world.

Search the Internet for such a project, ideally one near where you live. Explain its upsides and downsides - for example, what benefit it brings, what problems it causes, and how data privacy is handled.

## Rubric

| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Explain the upsides and downsides | Explained them in detail | Explained them briefly | Did not explain them well |

@ -0,0 +1,244 @@
# Raspberry Pi

The [Raspberry Pi](https://raspberrypi.org) is a single-board computer. You can use sensors and actuators from a range of ecosystems; for these lessons we'll use [Grove](https://www.seeedstudio.com/category/Grove-c-1003.html), a fairly rich hardware ecosystem. You will code your Raspberry Pi ('Pi' for short) and control the Grove sensors in Python.

![A Raspberry Pi 4](../../../images/raspberry-pi-4.jpg)

***Raspberry Pi - Michael Henzler / [Wikimedia Commons](https://commons.wikimedia.org/wiki/Main_Page) / [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)***

## Setup

If you are using a Raspberry Pi as your IoT hardware, you have two options - you can work through all these lessons coding directly on the Pi, or you can connect remotely to a 'headless' Pi and code from your computer.

Before you begin, you need to connect the Grove base hat to your Pi.

### Task - setup

Install the Grove base hat on your Raspberry Pi and configure the Pi.

1. Connect the Grove base hat to your Raspberry Pi. The hat fits over the GPIO pins, as shown in the image below.

![Fitting the grove hat](../../../images/pi-grove-hat-fitting.gif)

2. Decide how you want to work with your Raspberry Pi, and go to the relevant section below:

* [Work directly on your Pi](#work-directly-on-your-pi)
* [Remote access to the Pi](#remote-access-to-the-pi)

### Work directly on your Pi

If you want to work directly on your Raspberry Pi, you'll need to use the desktop version of Raspberry Pi OS and install all the required components.

#### Task - work directly on your Pi

Set up your Raspberry Pi.

1. Follow the instructions in the [Raspberry Pi setup guide](https://projects.raspberrypi.org/en/projects/raspberry-pi-setting-up) to set up your Pi: connect it to a keyboard/mouse/monitor, connect it to WiFi or ethernet, and update the software. The OS to download is **Raspberry Pi OS (32 bit)**, which is marked as the recommended version.

To work with the Grove sensors and actuators, you first need to install an editor so you can write code, along with various libraries and tools, so you can work with Grove easily.

1. After rebooting your Pi, launch the Terminal by clicking the **Terminal** icon in the top menu bar, or via *Menu -> Accessories -> Terminal*.

2. Run the following command to make sure the OS and installed software are up to date:
```sh
sudo apt update && sudo apt full-upgrade --yes
```
3. Run the following command to install the Grove libraries:
```sh
curl -sL https://github.com/Seeed-Studio/grove.py/raw/master/install.sh | sudo bash -s -
```
One of the powerful features of Python is the ability to install [pip packages](https://pypi.org) - packages of code written and published by other people. You can install a pip package and use it with a single command. This Grove install script runs and installs all the tools you will need.

4. Reboot your Pi, either from the menu or by running the following command:
```sh
sudo reboot
```
5. Once the Pi has rebooted, relaunch the Terminal and install [Visual Studio Code (VS Code)](https://code.visualstudio.com?WT.mc_id=academic-17441-jabenn) - this is the editor you will use to write all your code.
```sh
sudo apt install code
```
Once installed, VS Code will be available from the top menu.

> 💁 You can use any Python IDE or editor you prefer, but this tutorial is written around VS Code.

6. Install Pylance, a VS Code extension for coding in Python. You can find instructions for installing it in the [Pylance extension documentation](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance&WT.mc_id=academic-17441-jabenn).
### Remote access to the Pi

Rather than keeping the Pi connected to a keyboard/mouse/monitor and coding directly on it, you can run it 'headless'. In this case too, you will use Visual Studio Code for configuration and coding.

#### Set up the Pi OS

The Pi OS has to be installed on an SD card.

##### Task - set up the Pi OS

Set up the headless Pi OS:

1. Download and install **Raspberry Pi Imager** from the [Raspberry Pi OS software page](https://www.raspberrypi.org/software/).
1. Insert an SD card into your computer (you may need an adapter).
1. Launch the Raspberry Pi Imager.
1. From the Raspberry Pi Imager, select **CHOOSE OS**, then select *Raspberry Pi OS (Other)*, followed by *Raspberry Pi OS Lite (32-bit)*.
![The Raspberry Pi Imager with Raspberry Pi OS Lite selected](../../../images/raspberry-pi-imager.png)
> 💁 Raspberry Pi OS Lite is a version of Raspberry Pi OS without the desktop interface or the tools that go with it. A headless Pi doesn't need these, so the Lite version keeps the install smaller and the boot faster.

1. Click **CHOOSE STORAGE** and select your SD card.
1. Launch the **Advanced Options** by pressing `Ctrl+Shift+X`. From here you can pre-configure some settings on the Pi.
1. Tick the **Enable SSH** box and set a password for the pi user. You will use this password to log in to the Pi later.
1. If you plan to connect the Pi over WiFi, tick the **Configure WiFi** checkbox, enter your WiFi SSID and password, then select your country. Skip this if you will use an ethernet cable. Make sure the computer is on the same network you will be connecting the Pi to.
1. Tick the **Set locale settings** box and set your country and timezone.
1. Select **SAVE**.
1. Click **WRITE** to write the OS to the SD card. If you are using macOS, you will be asked to enter your password to grant access to write to the disk image.
The operating system will be written to the SD card, and once complete the card will be ejected by the OS and you will be notified. When that happens, remove the SD card from your computer, insert it into the Pi, and power the Pi up.

#### Connect to the Pi

The next step is to get remote access to the Pi. You can do this using `ssh`, which is available on macOS, Linux, and recent versions of Windows.

##### Task - connect to the Pi

Get remote access to the Pi.

1. Launch a Terminal or Command Prompt and run the following command to connect to the Pi:
```sh
ssh pi@raspberrypi.local
```
What about older versions of Windows that don't have `ssh`? Easy - you can use OpenSSH. Installation instructions are in the [OpenSSH installation documentation](https://docs.microsoft.com//windows-server/administration/openssh/openssh_install_firstuse?WT.mc_id=academic-17441-jabenn).

1. This will connect to the Pi and ask for the password.

Being able to find computers on your network using `<hostname>.local` is a fairly recent feature in Linux and Windows. If you are on Linux or Windows and get an error that the hostname cannot be found, you will need to install additional software to enable 'ZeroConf networking' (called 'Bonjour' on Apple devices):

1. If you are on Linux, install Avahi with the following command:
```sh
sudo apt-get install avahi-daemon
```
1. On Windows, the easiest way to enable 'ZeroConf networking' is to install [Bonjour Print Services for Windows](http://support.apple.com/kb/DL999). Installing [iTunes for Windows](https://www.apple.com/itunes/download/) also works, and comes with a newer version of the service that isn't usually available standalone.

> 💁 If you cannot connect using `raspberrypi.local`, use the Pi's IP address instead. The [Raspberry Pi IP address documentation](https://www.raspberrypi.org/documentation/remote-access/ip-address.md) has detailed instructions on finding it.

1. Enter the password you set in the Raspberry Pi Imager advanced options.

#### Configure software on the Pi

Once connected to the Pi, make sure the OS is up to date, then install the various libraries and tools for the Grove hardware.

##### Task - configure software on the Pi

Configure the Pi software and install the Grove libraries.

1. From your `ssh` session, run the following command to update and then reboot the Pi:
```sh
sudo apt update && sudo apt full-upgrade --yes && sudo reboot
```
The Pi will be updated and rebooted, and the `ssh` session will end when it does, so wait about 30 seconds, then reconnect.

1. From the reconnected `ssh` session, run the following command to install the Grove libraries:
```sh
curl -sL https://github.com/Seeed-Studio/grove.py/raw/master/install.sh | sudo bash -s -
```
One of the powerful features of Python is the ability to install [pip packages](https://pypi.org) - packages of code written and published by other people. You can install a pip package and use it with a single command. This Grove install script runs and installs all the tools you will need.

1. Reboot the Pi by running the following command:
```sh
sudo reboot
```
The `ssh` session will end when the Pi reboots. There is no need to reconnect.

#### Configure VS Code for remote access

Once the Pi is configured, you can connect to it using Visual Studio Code (VS Code).

##### Task - configure VS Code for remote access

Install the required software and connect remotely to the Pi.

1. Install Visual Studio Code by following the [VS Code documentation](https://code.visualstudio.com?WT.mc_id=academic-17441-jabenn).
1. Then follow the [VS Code Remote Development using SSH documentation](https://code.visualstudio.com/docs/remote/ssh?WT.mc_id=academic-17441-jabenn) to install the required components.
1. Following the same guide, connect VS Code to the Raspberry Pi.
1. Once connected, follow the [managing extensions](https://code.visualstudio.com/docs/remote/ssh#_managing-extensions?WT.mc_id=academic-17441-jabenn) instructions to install the [Pylance extension](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance&WT.mc_id=academic-17441-jabenn) remotely on the Pi.

## Hello world

It is traditional when starting out with a new programming language or technology to begin with a 'Hello World' application - a small application that outputs `"Hello World"`, showing that all your configuration is correct.

In this case, the 'Hello World' app will make sure that Python and Visual Studio Code are installed correctly.

This application will live in a folder called `nightlight`, and it will be re-used with different code in later parts of this assignment as you build the nightlight application.

### Task - hello world

Create the 'Hello World' app.

1. Launch VS Code, either directly on the Pi, or on your computer connected to the Pi via the Remote SSH extension.
1. Launch the VS Code terminal via *Terminal -> New Terminal*, or by pressing `` CTRL+` ``. It will open in the `pi` user's home directory.
1. Run the following commands to create a directory for your code, and create a Python file called `app.py` in that directory:
```sh
mkdir nightlight
cd nightlight
touch app.py
```
1. Open this folder in VS Code via *File -> Open...*, then select the *nightlight* folder and click **OK**.
![The VS Code open dialog showing the nightlight folder](../../../images/vscode-open-nightlight-remote.png)
1. Open the `app.py` file from the VS Code explorer and add the following code:
```python
print('Hello World!')
```
The `print` function prints whatever is passed to it to the console.

1. From the VS Code terminal, run the following command to run your Python file:
```sh
python3 app.py
```
> 💁 This code explicitly calls `python3` in case you have Python 2 installed in addition to Python 3 (the latest version). If Python 2 is installed, calling `python` will use Python 2 instead of Python 3, which must be avoided.

The following output will appear in the terminal:
```output
pi@raspberrypi:~/nightlight $ python3 app.py
Hello World!
```
> 💁 You can find this code in the [code/pi](code/pi) folder.

😀 Your 'Hello World' program was a success!

@ -66,7 +66,7 @@ An even smarter version could use AI in the cloud with data from other sensors c
Although the I in IoT stands for Internet, these devices don't have to connect to the Internet. In some cases, devices can connect to 'edge' devices - gateway devices that run on your local network meaning you can process data without making a call over the Internet. This can be faster when you have a lot of data or a slow Internet connection, it allows you to run offline where Internet connectivity is not possible such as on a ship or in a disaster area when responding to a humanitarian crisis, and allows you to keep data private. Some devices will contain processing code created using cloud tools and run this locally to gather and respond to data without using an Internet connection to make a decision.

One example of this is a smart home device such as an Apple HomePod, Amazon Alexa, or Google Home, which will listen to your voice using AI models trained in the cloud, but running locally on the device. These devices will 'wake up' when a certain word or phrase is spoken, and only then send your speech over the Internet for processing. The device will stop sending speech at an appropriate point such as when it detects a pause in your speech. Everything you say before waking up the device with the wake word, and everything you say after the device has stopped listening will not be sent over the internet to the device provider, and therefore will be private.
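The flow described above can be sketched in a few lines of Python. Everything here is a hypothetical placeholder - real smart speakers use proprietary, vendor-specific implementations - but it shows where the privacy boundary sits:

```python
import random

def record_audio_chunk():
    # Placeholder: capture a short buffer from the microphone
    return random.random()

def wake_word_detected(chunk):
    # Placeholder: a small model running on the device itself -
    # nothing has been sent anywhere at this point
    return chunk > 0.95

def speaker_paused(chunk):
    # Placeholder: on-device detection of a pause in speech
    return chunk < 0.2

def send_to_cloud(chunk):
    # Only audio captured *after* the wake word crosses the Internet
    print('sending audio to the cloud for processing')

while True:
    chunk = record_audio_chunk()
    if wake_word_detected(chunk):
        # Wake word heard - stream until the speaker pauses
        while not speaker_paused(chunk):
            chunk = record_audio_chunk()
            send_to_cloud(chunk)
```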
✅ Think of other scenarios where privacy is important so processing of data would be better done on the edge rather than in the cloud. As a hint - think about IoT devices with cameras or other imaging devices on them.

@ -192,6 +192,23 @@ The Azure Functions CLI can be used to create a new Functions app.
> ⚠️ If you get a firewall notification, grant access as the `func` application needs to be able to read and write to your network.
> ⚠️ If you are using macOS, there may be warnings in the output:
>
> ```output
> (.venv) ➜ soil-moisture-trigger func start
> Found Python version 3.9.1 (python3).
>
> Azure Functions Core Tools
> Core Tools Version: 3.0.3442 Commit hash: 6bfab24b2743f8421475d996402c398d2fe4a9e0 (64-bit)
> Function Runtime Version: 3.0.15417.0
>
> [2021-06-16T08:18:28.315Z] Cannot create directory for shared memory usage: /dev/shm/AzureFunctions
> [2021-06-16T08:18:28.316Z] System.IO.FileSystem: Access to the path '/dev/shm/AzureFunctions' is denied. Operation not permitted.
> [2021-06-16T08:18:30.361Z] No job functions found.
> ```
>
> You can ignore these as long as the Functions app starts correctly and lists the running functions. As mentioned in [this question on the Microsoft Docs Q&A](https://docs.microsoft.com/answers/questions/396617/azure-functions-core-tools-error-osx-devshmazurefu.html?WT.mc_id=academic-17441-jabenn), they can be safely ignored.
1. Stop the Functions app by pressing `ctrl+c`.

1. Open the current folder in VS Code, either by opening VS Code, then opening this folder, or by running the following:
@ -213,23 +230,6 @@ The Azure Functions CLI can be used to create a new Functions app.
1. Make sure the Python virtual environment is running in the VS Code terminal. Terminate it and restart it if necessary.
## Create an IoT Hub event trigger

The Functions app is the shell of your serverless code. To respond to IoT hub events, you can add an IoT Hub trigger to this app. This trigger needs to connect to the stream of messages that are sent to the IoT Hub and respond to them. To get this stream of messages, your trigger needs to connect to the IoT Hub's *event hub compatible endpoint*.
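For reference, a minimal sketch of what the Python handler for such a trigger can look like - the binding name `events` and the details here are illustrative, and the steps that follow build the real trigger:

```python
import logging
from typing import List

import azure.functions as func

def main(events: List[func.EventHubEvent]):
    # Each event is one message a device sent to the IoT Hub's
    # event hub compatible endpoint
    for event in events:
        logging.info('Message received: %s', event.get_body().decode('utf-8'))
```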

@ -1,10 +1,10 @@
# Transport from farm to factory - using IoT to track food deliveries

Many farmers grow food to sell - either they are commercial farmers who sell everything they grow, or they are subsistence farmers who sell their excess produce to buy necessities. Somehow the food has to get from the farm to the consumer, and this usually relies on bulk transport from farms, to hubs or processing plants, then to stores. For example, a tomato farmer will harvest tomatoes, pack them into boxes, load the boxes into a truck then deliver to a processing plant. The tomatoes will then be sorted, and from there delivered to the consumers in the form of processed food, retail sales, or consumed at restaurants.

IoT can help with this supply chain by tracking the food in transit - ensuring drivers are going where they should, monitoring vehicle locations, and getting alerts when vehicles arrive so that food can be unloaded, and be ready for processing as soon as possible.

> 🎓 A *supply chain* is the sequence of activities to make and deliver something. For example, in tomato farming it covers seed, soil, fertilizer and water supply, growing tomatoes, delivering tomatoes to a central hub, transporting them to a supermarket's local hub, transporting to the individual supermarket, being put out on display, then sold to a consumer and taken home to eat. Each step is like the links in a chain.

> 🎓 The transportation part of the supply chain is known as *logistics*.
@ -15,7 +15,7 @@ In these 4 lessons, you'll learn how to apply the Internet of Things to improve
## Topics

1. [Location tracking](lessons/1-location-tracking/README.md)
1. [Store location data](lessons/2-store-location-data/README.md)
1. [Visualize location data](lessons/3-visualize-location-data/README.md)
1. [Geofences](lessons/4-geofences/README.md)

@ -10,13 +10,13 @@ Add a sketchnote if possible/appropriate
## Introduction

The main process for getting food from a farmer to a consumer involves loading boxes of produce on to trucks, ships, airplanes, or other commercial transport vehicles, and delivering the food somewhere - either directly to a customer, or to a central hub or warehouse for processing. The whole end-to-end process from farm to consumer is part of a process called the *supply chain*. The video below from Arizona State University's W. P. Carey School of Business talks about the idea of the supply chain and how it is managed in more detail.

[![What is Supply Chain Management? A video from Arizona State University's W. P. Carey School of Business](https://img.youtube.com/vi/Mi1QBxVjZAw/0.jpg)](https://www.youtube.com/watch?v=Mi1QBxVjZAw)

Adding IoT devices can drastically improve your supply chain, allowing you to manage where items are, plan transport and goods handling better, and respond quicker to problems.

When managing a fleet of vehicles such as trucks, it is helpful to know where each vehicle is at a given time. Vehicles can be fitted with GPS sensors that send their location to IoT systems, allowing the owners to pinpoint their location, see the route they have taken, and know when they will arrive at their destination. Most vehicles operate outside of WiFi coverage, so they use cellular networks to send this kind of data. Sometimes the GPS sensor is built into more complex IoT devices such as electronic log books. These devices track how long a truck has been in transit to ensure drivers are in compliance with local laws on working hours.

In this lesson you will learn how to track a vehicle's location using a Global Positioning System (GPS) sensor.
@ -53,11 +53,11 @@ The core component of vehicle tracking is GPS - sensors that can pinpoint their
## Geospatial coordinates

Geospatial coordinates are used to define points on the Earth's surface, similar to how coordinates can be used to draw to a pixel on a computer screen or position stitches in cross stitch. For a single point, you have a pair of coordinates. For example, the Microsoft Campus in Redmond, Washington, USA is located at 47.6423109, -122.1390293.

### Latitude and longitude

The Earth is a sphere - a three-dimensional circle. Because of this, points are defined by dividing it into 360 degrees, the same as the geometry of circles. Latitude measures the number of degrees north to south, longitude measures the number of degrees east to west.

> 💁 No-one really knows the original reason why circles are divided into 360 degrees. The [degree (angle) page on Wikipedia](https://wikipedia.org/wiki/Degree_(angle)) covers some of the possible reasons.
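GPS hardware often reports positions as whole degrees plus decimal minutes, rather than the decimal degrees shown above; the conversion is simply degrees plus minutes divided by 60. A small sketch with an illustrative value:

```python
def degrees_minutes_to_decimal(degrees, minutes):
    # 60 minutes per degree, so 38.53847 minutes = 0.64231 degrees
    return degrees + minutes / 60

# 47 degrees 38.53847 minutes north is roughly 47.64231 in decimal degrees
print(degrees_minutes_to_decimal(47, 38.53847))
```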
@ -178,7 +178,7 @@ Rather than use the raw NMEA data, it is better to decode it into a more useful
### Task - decode GPS sensor data

Work through the relevant guide to decode GPS sensor data using your IoT device:

* [Arduino - Wio Terminal](wio-terminal-gps-decode.md)
* [Single-board computer - Raspberry Pi/Virtual IoT device](single-board-computer-gps-decode.md)

@ -6,7 +6,7 @@ The NMEA sentences that come from your GPS sensor have other data in addition to
For example - can you get the current date and time? If you are using a microcontroller, can you set the clock using GPS data in the same way you set it using NTP signals in the previous project? Can you get elevation (your height above sea level), or your current speed?

If you are using a virtual IoT device, then you can get some of this data by sending NMEA sentences generated using tools such as [nmeagen.org](https://www.nmeagen.org).
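If you want to experiment with a sentence before feeding it to your device code, you can parse one directly with pynmea2, the library this project uses. A short sketch using a commonly quoted example GGA sentence:

```python
import pynmea2

# A widely used example GGA sentence - the sentence type this project decodes
line = '$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47'

msg = pynmea2.parse(line)
print(msg.sentence_type)                       # GGA
print(pynmea2.dm_to_sd(msg.lat), msg.lat_dir)  # 48.1173 N
print(msg.num_sats)                            # 08
```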
## Rubric

@ -2,21 +2,12 @@ import time
import serial
import pynmea2
import json

serial = serial.Serial('/dev/ttyAMA0', 9600, timeout=1)
serial.reset_input_buffer()
serial.flush()

def print_gps_data(line):
    msg = pynmea2.parse(line)
    if msg.sentence_type == 'GGA':
        lat = pynmea2.dm_to_sd(msg.lat)
@ -28,16 +19,13 @@ def printGPSData(line):
        if msg.lon_dir == 'W':
            lon = lon * -1

        print(f'{lat},{lon} - from {msg.num_sats} satellites')

while True:
    line = serial.readline().decode('utf-8')

    while len(line) > 0:
        print_gps_data(line)
        line = serial.readline().decode('utf-8')

    time.sleep(1)

@ -5,18 +5,11 @@ import time
import counterfit_shims_serial
import pynmea2
import json

connection_string = '<connection_string>'

serial = counterfit_shims_serial.Serial('/dev/ttyAMA0')

def send_gps_data(line):
    msg = pynmea2.parse(line)
    if msg.sentence_type == 'GGA':
@ -29,10 +22,7 @@ def send_gps_data(line):
        if msg.lon_dir == 'W':
            lon = lon * -1

        print(f'{lat},{lon} - from {msg.num_sats} satellites')

while True:
    line = serial.readline().decode('utf-8')
@ -41,4 +31,4 @@ while True:
        send_gps_data(line)
        line = serial.readline().decode('utf-8')

    time.sleep(1)

@ -5,14 +5,14 @@ serial = serial.Serial('/dev/ttyAMA0', 9600, timeout=1)
serial.reset_input_buffer()
serial.flush()

def print_gps_data():
    print(line.rstrip())

while True:
    line = serial.readline().decode('utf-8')

    while len(line) > 0:
        print_gps_data()
        line = serial.readline().decode('utf-8')

    time.sleep(1)

@ -6,14 +6,14 @@ import counterfit_shims_serial
serial = counterfit_shims_serial.Serial('/dev/ttyAMA0')

def print_gps_data(line):
    print(line.rstrip())

while True:
    line = serial.readline().decode('utf-8')

    while len(line) > 0:
        print_gps_data(line)
        line = serial.readline().decode('utf-8')

    time.sleep(1)

@ -24,7 +24,7 @@ Connect the GPS sensor.
1. With the Raspberry Pi powered off, connect the other end of the Grove cable to the UART socket marked **UART** on the Grove Base hat attached to the Pi. This socket is on the middle row, on the side nearest the SD Card slot, the other end from the USB ports and ethernet socket.

![The grove GPS sensor connected to the UART socket](../../../images/pi-gps-sensor.png)

1. Position the GPS sensor so that the attached antenna has visibility to the sky - ideally next to an open window or outside. It's easier to get a clearer signal with nothing in the way of the antenna.
@ -42,7 +42,7 @@ Program the device.
1. Launch VS Code, either directly on the Pi, or connect via the Remote SSH extension.

> ⚠️ You can refer to [the instructions for setting up and launching VS Code in lesson 1 if needed](../../../1-getting-started/lessons/1-introduction-to-iot/pi.md).

1. With newer versions of the Raspberry Pi that support Bluetooth, there is a conflict between the serial port used for Bluetooth, and the one used by the Grove UART port. To fix this, do the following:
@ -118,14 +118,14 @@ Program the device.
serial.reset_input_buffer()
serial.flush()

def print_gps_data(line):
    print(line.rstrip())

while True:
    line = serial.readline().decode('utf-8')

    while len(line) > 0:
        print_gps_data(line)
        line = serial.readline().decode('utf-8')

    time.sleep(1)
@ -133,9 +133,9 @@ Program the device.
This code imports the `serial` module from the `pyserial` Pip package. It then connects to the `/dev/ttyAMA0` serial port - this is the address of the serial port that the Grove Pi Base Hat uses for its UART port. It then clears any existing data from this serial connection.

Next a function called `print_gps_data` is defined that prints out the line passed to it to the console.

Next the code loops forever, reading as many lines of text as it can from the serial port in each loop. It calls the `print_gps_data` function for each line.

After all the data has been read, the loop sleeps for 1 second, then tries again.

@ -24,7 +24,7 @@ Program the device to decode the GPS data.
import pynmea2
```

1. Replace the contents of the `print_gps_data` function with the following:

```python
msg = pynmea2.parse(line)

@ -77,28 +77,28 @@ Program the GPS sensor app.
1. Add the following code below this to read from the serial port and print the values to the console:

```python
def print_gps_data(line):
    print(line.rstrip())

while True:
    line = serial.readline().decode('utf-8')

    while len(line) > 0:
        print_gps_data(line)
        line = serial.readline().decode('utf-8')

    time.sleep(1)
```

A function called `print_gps_data` is defined that prints out the line passed to it to the console.

Next the code loops forever, reading as many lines of text as it can from the serial port in each loop. It calls the `print_gps_data` function for each line.

After all the data has been read, the loop sleeps for 1 second, then tries again.

1. Run this code, ensuring you are using a different terminal to the one that the CounterFit app is running in, so that the CounterFit app remains running.

1. From the CounterFit app, change the value of the gps sensor. You can do this in one of these ways:

* Set the **Source** to `Lat/Lon`, and set an explicit latitude, longitude and number of satellites used to get the GPS fix. This value will be sent only once, so check the **Repeat** box to have the data repeat every second.

@ -31,7 +31,7 @@ Program the device to decode the GPS data.
TinyGPSPlus gps;
```

1. Change the contents of the `printGPSData` function to the following:

```cpp
if (gps.encode(Serial3.read()))

@ -22,15 +22,15 @@ Connect the GPS sensor.
1. Insert one end of a Grove cable into the socket on the GPS sensor. It will only go in one way round.

1. With the Wio Terminal disconnected from your computer or other power supply, connect the other end of the Grove cable to the left-hand side Grove socket on the Wio Terminal as you look at the screen. This is the socket closest to the power button.

![The grove GPS sensor connected to the left hand socket](../../../images/wio-gps-sensor.png)

1. Position the GPS sensor so that the attached antenna has visibility to the sky - ideally next to an open window or outside. It's easier to get a clearer signal with nothing in the way of the antenna.

1. You can now connect the Wio Terminal to your computer.

1. The GPS sensor has 2 LEDs - a blue LED that flashes when data is transmitted, and a green LED that flashes every second when receiving data from satellites. Ensure the blue LED is flashing when you power up the Wio Terminal. After a few minutes the green LED will flash - if not, you may need to reposition the antenna.

## Program the GPS sensor

@ -1,13 +1,14 @@
import time
import serial
import pynmea2
import json
from azure.iot.device import IoTHubDeviceClient, Message

connection_string = '<connection_string>'

serial = serial.Serial('/dev/ttyAMA0', 9600, timeout=1)
serial.reset_input_buffer()
serial.flush()

device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
@ -15,24 +16,28 @@ print('Connecting')
device_client.connect()
print('Connected')

def print_gps_data(line):
    msg = pynmea2.parse(line)
    if msg.sentence_type == 'GGA':
        lat = pynmea2.dm_to_sd(msg.lat)
        lon = pynmea2.dm_to_sd(msg.lon)

        if msg.lat_dir == 'S':
            lat = lat * -1
        if msg.lon_dir == 'W':
            lon = lon * -1

        message_json = { "gps" : { "lat":lat, "lon":lon } }
        print("Sending telemetry", message_json)
        message = Message(json.dumps(message_json))
        device_client.send_message(message)

while True:
    line = serial.readline().decode('utf-8')

    while len(line) > 0:
        print_gps_data(line)
        line = serial.readline().decode('utf-8')

    time.sleep(60)

@ -2,15 +2,14 @@ from counterfit_connection import CounterFitConnection
CounterFitConnection.init('127.0.0.1', 5000)

import time
import counterfit_shims_serial
import pynmea2
import json
from azure.iot.device import IoTHubDeviceClient, Message

connection_string = '<connection_string>'

serial = counterfit_shims_serial.Serial('/dev/ttyAMA0')

device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)
@ -18,24 +17,28 @@ print('Connecting')
device_client.connect()
print('Connected')

def send_gps_data(line):
    msg = pynmea2.parse(line)
    if msg.sentence_type == 'GGA':
        lat = pynmea2.dm_to_sd(msg.lat)
        lon = pynmea2.dm_to_sd(msg.lon)

        if msg.lat_dir == 'S':
            lat = lat * -1
        if msg.lon_dir == 'W':
            lon = lon * -1

        message_json = { "gps" : { "lat":lat, "lon":lon } }
        print("Sending telemetry", message_json)
        message = Message(json.dumps(message_json))
        device_client.send_message(message)

while True:
    line = serial.readline().decode('utf-8')

    while len(line) > 0:
        send_gps_data(line)
        line = serial.readline().decode('utf-8')

    time.sleep(60)

@ -14,11 +14,11 @@ print('Connecting')
device_client.connect()
print('Connected')

recognizer_config = SpeechConfig(subscription=api_key,
                                 region=location,
                                 speech_recognition_language=language)

recognizer = SpeechRecognizer(speech_config=recognizer_config)

def recognized(args):
    if len(args.result.text) > 0:

@ -5,11 +5,11 @@ api_key = '<key>'
location = '<location>'
language = '<language>'

recognizer_config = SpeechConfig(subscription=api_key,
                                 region=location,
                                 speech_recognition_language=language)

recognizer = SpeechRecognizer(speech_config=recognizer_config)

def recognized(args):
    print(args.result.text)

@ -45,9 +45,9 @@ On Windows, Linux, and macOS, the speech services Python SDK can be used to list
     location = '<location>'
     language = '<language>'
-    speech_config = SpeechConfig(subscription=api_key,
+    recognizer_config = SpeechConfig(subscription=api_key,
                                  region=location,
                                  speech_recognition_language=language)
     ```
 Replace `<key>` with the API key for your speech service. Replace `<location>` with the location you used when you created the speech service resource.
@ -59,7 +59,7 @@ On Windows, Linux, and macOS, the speech services Python SDK can be used to list
 1. Add the following code to create a speech recognizer:
     ```python
-    recognizer = SpeechRecognizer(speech_config=speech_config)
+    recognizer = SpeechRecognizer(speech_config=recognizer_config)
     ```
 1. The speech recognizer runs on a background thread, listening for audio and converting any speech in it to text. You can get the text using a callback function - a function you define and pass to the recognizer. Every time speech is detected, the callback is called. Add the following code to define a callback that prints the text to the console, and pass this callback to the recognizer:

@ -347,7 +347,7 @@ Once published, the LUIS model can be called from code. In the last lesson you s
     if prediction_response.prediction.top_intent == 'set timer':
         numbers = prediction_response.prediction.entities['number']
         time_units = prediction_response.prediction.entities['time unit']
-        total_time = 0
+        total_seconds = 0
     ```
 The `number` entities will be an array of numbers. For example, if you said *"Set a four minute 17 second timer."*, then the `number` array will contain 2 integers - 4 and 17.
@ -392,15 +392,15 @@ Once published, the LUIS model can be called from code. In the last lesson you s
     ```python
     if time_unit == 'minute':
-        total_time += number * 60
+        total_seconds += number * 60
     else:
-        total_time += number
+        total_seconds += number
     ```
 1. Finally, outside this loop through the entities, log the total time for the timer:
     ```python
-    logging.info(f'Timer required for {total_time} seconds')
+    logging.info(f'Timer required for {total_seconds} seconds')
     ```
 1. Run the function app and speak into your IoT device. You will see the total time for the timer in the function app output:

@ -2,7 +2,7 @@
 ## Instructions
-So far in this lesson you have trained a model to understand setting a timer. Another useful feature is cancelling a timer - maybe your bread is ready and can be taken out of the oven.
+So far in this lesson you have trained a model to understand setting a timer. Another useful feature is cancelling a timer - maybe your bread is ready and can be taken out of the oven before the timer elapses.
 Add a new intent to your LUIS app to cancel the timer. It won't need any entities, but will need some example sentences. Handle this in your serverless code if it is the top intent, logging that the intent was recognized.

@ -28,16 +28,16 @@ def main(events: List[func.EventHubEvent]):
     if prediction_response.prediction.top_intent == 'set timer':
         numbers = prediction_response.prediction.entities['number']
         time_units = prediction_response.prediction.entities['time unit']
-        total_time = 0
+        total_seconds = 0
         for i in range(0, len(numbers)):
             number = numbers[i]
             time_unit = time_units[i][0]
             if time_unit == 'minute':
-                total_time += number * 60
+                total_seconds += number * 60
             else:
-                total_time += number
-        logging.info(f'Timer required for {total_time} seconds')
+                total_seconds += number
+        logging.info(f'Timer required for {total_seconds} seconds')

@ -26,6 +26,50 @@ In this lesson we'll cover:
 ## Text to speech
+Text to speech, as the name suggests, is the process of converting text into audio that contains the text as spoken words. The basic principle is to break down the words in the text into their constituent sounds (known as phonemes), and stitch together audio for those sounds, either using pre-recorded audio or audio generated by AI models.
+![The three stages of typical text to speech systems](../../../images/tts-overview.png)
+Text to speech systems typically have 3 stages:
+* Text analysis
+* Linguistic analysis
+* Wave-form generation
+### Text analysis
+Text analysis involves taking the text provided and converting it into words that can be used to generate speech. For example, if you convert "Hello world", there is no text analysis needed - the two words can be converted directly to speech. If you have "1234", however, then this might need to be converted either into the words "One thousand, two hundred thirty four" or "One, two, three, four" depending on the context. For "I have 1234 apples" it would be "One thousand, two hundred thirty four", but for "The child counted 1234" it would be "One, two, three, four".
+The words created vary not only with the language, but with the locale of that language. For example, in American English 120 would be "One hundred twenty"; in British English it would be "One hundred and twenty", with the use of "and" after the hundreds.
+✅ Some other examples that require text analysis include "in" as a short form of inch, and "st" as a short form of saint and street. Can you think of other examples in your language of words that are ambiguous without context?
+Once the words have been defined, they are sent for linguistic analysis.
+### Linguistic analysis
+Linguistic analysis breaks the words down into phonemes. Phonemes are based not just on the letters used, but also on the other letters in the word. For example, in English the 'a' sound in 'car' and 'care' is different. The English language has 44 different phonemes for the 26 letters in the alphabet, some shared by different letters, such as the same phoneme used at the start of 'circle' and 'serpent'.
+✅ Do some research: What are the phonemes for your language?
+Once the words have been converted to phonemes, these phonemes need additional data to support intonation, adjusting the tone or duration depending on the context. One example is in English, where a pitch increase can be used to convert a sentence into a question - a raised pitch on the last word implies a question.
+For example - the sentence "You have an apple" is a statement saying that you have an apple. If the pitch goes up at the end, increasing for the word apple, it becomes the question "You have an apple?", asking if you have an apple. The linguistic analysis needs to use the question mark at the end to decide to increase pitch.
+Once the phonemes have been generated, they can be sent for wave-form generation to produce the audio output.
+### Wave-form generation
+The first electronic text to speech systems used single audio recordings for each phoneme, leading to very monotonous, robotic sounding voices. The linguistic analysis would produce phonemes, which would be loaded from a database of sounds and stitched together to make the audio.
+✅ Do some research: Find some audio recordings from early speech synthesis systems. Compare them to modern speech synthesis, such as that used in smart assistants.
+More modern wave-form generation uses ML models built using deep learning (very large neural networks that act in a similar way to neurons in the brain) to produce more natural sounding voices that can be indistinguishable from humans.
+> 💁 Some of these ML models can be re-trained using transfer learning to sound like real people. This means using voice as a security system, something banks are increasingly trying to do, is no longer a good idea, as anyone with a recording of a few minutes of your voice can impersonate you.
+These large ML models are being trained to combine all three steps into end-to-end speech synthesizers.
 ## Set the timer
 The timer can be set by sending a command from the serverless code, instructing the IoT device to set the timer. This command will contain the time in seconds till the timer needs to go off.
@ -38,11 +82,11 @@ The timer can be set by sending a command from the serverless code, instructing
 You will need to set up the connection string for the IoT Hub with the service policy (*NOT* the device) in your `local.settings.json` file and add the `azure-iot-hub` pip package to your `requirements.txt` file. The device ID can be extracted from the event.
-1. The direct method you send needs to be called `set-timer`, and will need to send the length of the timer as a JSON property called `time`. Use the following code to build the `CloudToDeviceMethod` using the `total_time` calculated from the data extracted by LUIS:
+1. The direct method you send needs to be called `set-timer`, and will need to send the length of the timer as a JSON property called `seconds`. Use the following code to build the `CloudToDeviceMethod` using the `total_seconds` calculated from the data extracted by LUIS:
     ```python
     payload = {
-        'time': total_time
+        'seconds': total_seconds
     }
     direct_method = CloudToDeviceMethod(method_name='set-timer', payload=json.dumps(payload))
     ```
@ -60,11 +104,23 @@ The timer can be set by sending a command from the serverless code, instructing
 * [Arduino - Wio Terminal](wio-terminal-set-timer.md)
 * [Single-board computer - Raspberry Pi/Virtual IoT device](single-board-computer-set-timer.md)
+> 💁 You can find this code in the [code-command/wio-terminal](code-command/wio-terminal), [code-command/virtual-device](code-command/virtual-device), or [code-command/pi](code-command/pi) folder.
 ## Convert text to speech
-The same speech service you used to convert speech to text can be used to convert text back into speech, and this can be played through a microphone on your IoT device.
+The same speech service you used to convert speech to text can be used to convert text back into speech, and this can be played through a speaker on your IoT device. The text to convert is sent to the speech service, along with the type of audio required (such as the sample rate), and binary data containing the audio is returned.
+When you send this request, you send it using *Speech Synthesis Markup Language* (SSML), an XML-based markup language for speech synthesis applications. This defines not only the text to be converted, but also the language of the text and the voice to use, and can even be used to define speed, volume, and pitch for some or all of the words in the text.
+For example, this SSML defines a request to convert the text "Your 3 minute 5 second timer has been set" to speech using a British English voice called `en-GB-MiaNeural`:
+```xml
+<speak version='1.0' xml:lang='en-GB'>
+    <voice xml:lang='en-GB' name='en-GB-MiaNeural'>
+        Your 3 minute 5 second timer has been set
+    </voice>
+</speak>
+```
+> 💁 Most text to speech systems have multiple voices for different languages, with relevant accents, such as a British English voice with an English accent and a New Zealand English voice with a New Zealand accent.
 ### Task - convert text to speech
@ -78,12 +134,17 @@ Work through the relevant guide to convert text to speech using your IoT device:
 ## 🚀 Challenge
+SSML has ways to change how words are spoken, such as adding emphasis to certain words, adding pauses, or changing pitch. Try some of these out, sending different SSML from your IoT device and comparing the output. You can read more about SSML, including how to change the way words are spoken, in the [Speech Synthesis Markup Language (SSML) Version 1.1 specification from the World Wide Web consortium](https://www.w3.org/TR/speech-synthesis11/).
 ## Post-lecture quiz
 [Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/46)
 ## Review & Self Study
+* Read more on speech synthesis on the [Speech synthesis page on Wikipedia](https://wikipedia.org/wiki/Speech_synthesis)
+* Read more on ways criminals are using speech synthesis to steal on the [Fake voices 'help cyber crooks steal cash' story on BBC news](https://www.bbc.com/news/technology-48908736)
 ## Assignment
-[](assignment.md)
+[Cancel the timer](assignment.md)

@ -1,9 +1,12 @@
-#
+# Cancel the timer
 ## Instructions
+In the assignment for the last lesson, you added a cancel timer intent to LUIS. For this assignment you need to handle this intent in the serverless code, send a command to the IoT device, then cancel the timer.
 ## Rubric
 | Criteria | Exemplary | Adequate | Needs Improvement |
 | -------- | --------- | -------- | ----------------- |
-| | | | |
+| Handle the intent in serverless code and send a command | Was able to handle the intent and send a command to the device | Was able to handle the intent but was unable to send the command to the device | Was unable to handle the intent |
+| Cancel the timer on the device | Was able to receive the command and cancel the timer | Was able to receive the command but not cancel the timer | Was unable to receive the command |

@ -0,0 +1,15 @@
{
    "version": "2.0",
    "logging": {
        "applicationInsights": {
            "samplingSettings": {
                "isEnabled": true,
                "excludedTypes": "Request"
            }
        }
    },
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[2.*, 3.0.0)"
    }
}

@ -0,0 +1,12 @@
{
    "IsEncrypted": false,
    "Values": {
        "FUNCTIONS_WORKER_RUNTIME": "python",
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "IOT_HUB_CONNECTION_STRING": "<connection string>",
        "LUIS_KEY": "<primary key>",
        "LUIS_ENDPOINT_URL": "<endpoint url>",
        "LUIS_APP_ID": "<app id>",
        "REGISTRY_MANAGER_CONNECTION_STRING": "<connection string>"
    }
}

@ -0,0 +1,4 @@
# Do not include azure-functions-worker as it may conflict with the Azure Functions platform
azure-functions
azure-cognitiveservices-language-luis
azure-iot-hub

@ -0,0 +1,60 @@
from typing import List
import logging
import azure.functions as func
import json
import os
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod

def main(events: List[func.EventHubEvent]):
    luis_key = os.environ['LUIS_KEY']
    endpoint_url = os.environ['LUIS_ENDPOINT_URL']
    app_id = os.environ['LUIS_APP_ID']
    registry_manager_connection_string = os.environ['REGISTRY_MANAGER_CONNECTION_STRING']

    credentials = CognitiveServicesCredentials(luis_key)
    client = LUISRuntimeClient(endpoint=endpoint_url, credentials=credentials)

    for event in events:
        logging.info('Python EventHub trigger processed an event: %s',
                     event.get_body().decode('utf-8'))

        device_id = event.iothub_metadata['connection-device-id']

        event_body = json.loads(event.get_body().decode('utf-8'))
        prediction_request = { 'query' : event_body['speech'] }

        prediction_response = client.prediction.get_slot_prediction(app_id, 'Staging', prediction_request)

        if prediction_response.prediction.top_intent == 'set timer':
            numbers = prediction_response.prediction.entities['number']
            time_units = prediction_response.prediction.entities['time unit']
            total_seconds = 0

            for i in range(0, len(numbers)):
                number = numbers[i]
                time_unit = time_units[i][0]

                if time_unit == 'minute':
                    total_seconds += number * 60
                else:
                    total_seconds += number

            logging.info(f'Timer required for {total_seconds} seconds')

            payload = {
                'seconds': total_seconds
            }
            direct_method = CloudToDeviceMethod(method_name='set-timer', payload=json.dumps(payload))

            registry_manager = IoTHubRegistryManager(registry_manager_connection_string)
            registry_manager.invoke_device_method(device_id, direct_method)

@ -0,0 +1,15 @@
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "type": "eventHubTrigger",
            "name": "events",
            "direction": "in",
            "eventHubName": "samples-workitems",
            "connection": "IOT_HUB_CONNECTION_STRING",
            "cardinality": "many",
            "consumerGroup": "$Default",
            "dataType": "binary"
        }
    ]
}

@ -0,0 +1,184 @@
import io
import json
import pyaudio
import requests
import time
import wave
import threading

from azure.iot.device import IoTHubDeviceClient, Message, MethodResponse
from grove.factory import Factory

button = Factory.getButton('GPIO-HIGH', 5)
audio = pyaudio.PyAudio()
microphone_card_number = 1
speaker_card_number = 1
rate = 16000

def capture_audio():
    # Record from the microphone for as long as the button is held down
    stream = audio.open(format = pyaudio.paInt16,
                        rate = rate,
                        channels = 1,
                        input_device_index = microphone_card_number,
                        input = True,
                        frames_per_buffer = 4096)

    frames = []

    while button.is_pressed():
        frames.append(stream.read(4096))

    stream.stop_stream()
    stream.close()

    # Wrap the raw frames in an in-memory WAV file
    wav_buffer = io.BytesIO()
    with wave.open(wav_buffer, 'wb') as wavefile:
        wavefile.setnchannels(1)
        wavefile.setsampwidth(audio.get_sample_size(pyaudio.paInt16))
        wavefile.setframerate(rate)
        wavefile.writeframes(b''.join(frames))

    wav_buffer.seek(0)
    return wav_buffer

api_key = '<key>'
location = '<location>'
language = '<language>'
connection_string = '<connection_string>'

device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)

print('Connecting')
device_client.connect()
print('Connected')

def get_access_token():
    headers = {
        'Ocp-Apim-Subscription-Key': api_key
    }

    token_endpoint = f'https://{location}.api.cognitive.microsoft.com/sts/v1.0/issuetoken'
    response = requests.post(token_endpoint, headers=headers)
    return str(response.text)

def convert_speech_to_text(buffer):
    url = f'https://{location}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1'

    headers = {
        'Authorization': 'Bearer ' + get_access_token(),
        'Content-Type': f'audio/wav; codecs=audio/pcm; samplerate={rate}',
        'Accept': 'application/json;text/xml'
    }

    params = {
        'language': language
    }

    response = requests.post(url, headers=headers, params=params, data=buffer)
    response_json = json.loads(response.text)

    if response_json['RecognitionStatus'] == 'Success':
        return response_json['DisplayText']
    else:
        return ''

def get_voice():
    url = f'https://{location}.tts.speech.microsoft.com/cognitiveservices/voices/list'

    headers = {
        'Authorization': 'Bearer ' + get_access_token()
    }

    response = requests.get(url, headers=headers)
    voices_json = json.loads(response.text)

    first_voice = next(x for x in voices_json if x['Locale'].lower() == language.lower())
    return first_voice['ShortName']

voice = get_voice()
print(f"Using voice {voice}")

playback_format = 'riff-48khz-16bit-mono-pcm'

def get_speech(text):
    url = f'https://{location}.tts.speech.microsoft.com/cognitiveservices/v1'

    headers = {
        'Authorization': 'Bearer ' + get_access_token(),
        'Content-Type': 'application/ssml+xml',
        'X-Microsoft-OutputFormat': playback_format
    }

    ssml = f'<speak version=\'1.0\' xml:lang=\'{language}\'>'
    ssml += f'<voice xml:lang=\'{language}\' name=\'{voice}\'>'
    ssml += text
    ssml += '</voice>'
    ssml += '</speak>'

    response = requests.post(url, headers=headers, data=ssml.encode('utf-8'))
    return io.BytesIO(response.content)

def play_speech(speech):
    with wave.open(speech, 'rb') as wave_file:
        stream = audio.open(format=audio.get_format_from_width(wave_file.getsampwidth()),
                            channels=wave_file.getnchannels(),
                            rate=wave_file.getframerate(),
                            output_device_index=speaker_card_number,
                            output=True)

        data = wave_file.readframes(4096)

        while len(data) > 0:
            stream.write(data)
            data = wave_file.readframes(4096)

        stream.stop_stream()
        stream.close()

def say(text):
    speech = get_speech(text)
    play_speech(speech)

def announce_timer(minutes, seconds):
    announcement = 'Times up on your '
    if minutes > 0:
        announcement += f'{minutes} minute '
    if seconds > 0:
        announcement += f'{seconds} second '
    announcement += 'timer.'
    say(announcement)

def create_timer(total_seconds):
    minutes, seconds = divmod(total_seconds, 60)
    threading.Timer(total_seconds, announce_timer, args=[minutes, seconds]).start()

    announcement = ''
    if minutes > 0:
        announcement += f'{minutes} minute '
    if seconds > 0:
        announcement += f'{seconds} second '
    announcement += 'timer started.'
    say(announcement)

def handle_method_request(request):
    if request.name == 'set-timer':
        payload = json.loads(request.payload)
        seconds = payload['seconds']
        if seconds > 0:
            create_timer(payload['seconds'])

    method_response = MethodResponse.create_from_method_request(request, 200)
    device_client.send_method_response(method_response)

device_client.on_method_request_received = handle_method_request

while True:
    while not button.is_pressed():
        time.sleep(.1)

    buffer = capture_audio()
    text = convert_speech_to_text(buffer)
    if len(text) > 0:
        print(text)
        message = Message(json.dumps({ 'speech': text }))
        device_client.send_message(message)

@ -0,0 +1,86 @@
import json
import threading
import time

from azure.cognitiveservices.speech import SpeechConfig, SpeechRecognizer, SpeechSynthesizer
from azure.iot.device import IoTHubDeviceClient, Message, MethodResponse

api_key = '<key>'
location = '<location>'
language = '<language>'
connection_string = '<connection_string>'

device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)

print('Connecting')
device_client.connect()
print('Connected')

recognizer_config = SpeechConfig(subscription=api_key,
                                 region=location,
                                 speech_recognition_language=language)

recognizer = SpeechRecognizer(speech_config=recognizer_config)

def recognized(args):
    if len(args.result.text) > 0:
        message = Message(json.dumps({ 'speech': args.result.text }))
        device_client.send_message(message)

recognizer.recognized.connect(recognized)
recognizer.start_continuous_recognition()

speech_config = SpeechConfig(subscription=api_key,
                             region=location)
speech_config.speech_synthesis_language = language
speech_synthesizer = SpeechSynthesizer(speech_config=speech_config)

voices = speech_synthesizer.get_voices_async().get().voices
first_voice = next(x for x in voices if x.locale.lower() == language.lower())
speech_config.speech_synthesis_voice_name = first_voice.short_name

def say(text):
    ssml = f'<speak version=\'1.0\' xml:lang=\'{language}\'>'
    ssml += f'<voice xml:lang=\'{language}\' name=\'{first_voice.short_name}\'>'
    ssml += text
    ssml += '</voice>'
    ssml += '</speak>'

    # Stop recognition while speaking so the announcement isn't picked up
    # by the microphone and processed as a new command
    recognizer.stop_continuous_recognition()
    speech_synthesizer.speak_ssml(ssml)
    recognizer.start_continuous_recognition()

def announce_timer(minutes, seconds):
    announcement = 'Times up on your '
    if minutes > 0:
        announcement += f'{minutes} minute '
    if seconds > 0:
        announcement += f'{seconds} second '
    announcement += 'timer.'
    say(announcement)

def create_timer(total_seconds):
    minutes, seconds = divmod(total_seconds, 60)
    threading.Timer(total_seconds, announce_timer, args=[minutes, seconds]).start()

    announcement = ''
    if minutes > 0:
        announcement += f'{minutes} minute '
    if seconds > 0:
        announcement += f'{seconds} second '
    announcement += 'timer started.'
    say(announcement)

def handle_method_request(request):
    if request.name == 'set-timer':
        payload = json.loads(request.payload)
        seconds = payload['seconds']
        if seconds > 0:
            create_timer(payload['seconds'])

    method_response = MethodResponse.create_from_method_request(request, 200)
    device_client.send_method_response(method_response)

device_client.on_method_request_received = handle_method_request

while True:
    time.sleep(1)

@ -0,0 +1,130 @@
import io
import json
import pyaudio
import requests
import time
import wave
import threading

from azure.iot.device import IoTHubDeviceClient, Message, MethodResponse
from grove.factory import Factory

button = Factory.getButton('GPIO-HIGH', 5)
audio = pyaudio.PyAudio()
microphone_card_number = 1
speaker_card_number = 1
rate = 16000

def capture_audio():
    # Record from the microphone for as long as the button is held down
    stream = audio.open(format = pyaudio.paInt16,
                        rate = rate,
                        channels = 1,
                        input_device_index = microphone_card_number,
                        input = True,
                        frames_per_buffer = 4096)

    frames = []

    while button.is_pressed():
        frames.append(stream.read(4096))

    stream.stop_stream()
    stream.close()

    wav_buffer = io.BytesIO()
    with wave.open(wav_buffer, 'wb') as wavefile:
        wavefile.setnchannels(1)
        wavefile.setsampwidth(audio.get_sample_size(pyaudio.paInt16))
        wavefile.setframerate(rate)
        wavefile.writeframes(b''.join(frames))

    wav_buffer.seek(0)
    return wav_buffer

api_key = '<key>'
location = '<location>'
language = '<language>'
connection_string = '<connection_string>'

device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)

print('Connecting')
device_client.connect()
print('Connected')

def get_access_token():
    headers = {
        'Ocp-Apim-Subscription-Key': api_key
    }

    token_endpoint = f'https://{location}.api.cognitive.microsoft.com/sts/v1.0/issuetoken'
    response = requests.post(token_endpoint, headers=headers)
    return str(response.text)

def convert_speech_to_text(buffer):
    url = f'https://{location}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1'

    headers = {
        'Authorization': 'Bearer ' + get_access_token(),
        'Content-Type': f'audio/wav; codecs=audio/pcm; samplerate={rate}',
        'Accept': 'application/json;text/xml'
    }

    params = {
        'language': language
    }

    response = requests.post(url, headers=headers, params=params, data=buffer)
    response_json = json.loads(response.text)

    if response_json['RecognitionStatus'] == 'Success':
        return response_json['DisplayText']
    else:
        return ''

def say(text):
    print(text)

def announce_timer(minutes, seconds):
    announcement = 'Times up on your '
    if minutes > 0:
        announcement += f'{minutes} minute '
    if seconds > 0:
        announcement += f'{seconds} second '
    announcement += 'timer.'
    say(announcement)

def create_timer(total_seconds):
    minutes, seconds = divmod(total_seconds, 60)
    threading.Timer(total_seconds, announce_timer, args=[minutes, seconds]).start()

    announcement = ''
    if minutes > 0:
        announcement += f'{minutes} minute '
    if seconds > 0:
        announcement += f'{seconds} second '
    announcement += 'timer started.'
    say(announcement)

def handle_method_request(request):
    if request.name == 'set-timer':
        payload = json.loads(request.payload)
        seconds = payload['seconds']
        if seconds > 0:
            create_timer(payload['seconds'])

    method_response = MethodResponse.create_from_method_request(request, 200)
    device_client.send_method_response(method_response)

device_client.on_method_request_received = handle_method_request

while True:
    while not button.is_pressed():
        time.sleep(.1)

    buffer = capture_audio()
    text = convert_speech_to_text(buffer)
    if len(text) > 0:
        print(text)
        message = Message(json.dumps({ 'speech': text }))
        device_client.send_message(message)

@ -0,0 +1,69 @@
import json
import threading
import time

from azure.cognitiveservices.speech import SpeechConfig, SpeechRecognizer
from azure.iot.device import IoTHubDeviceClient, Message, MethodResponse

api_key = '<key>'
location = '<location>'
language = '<language>'
connection_string = '<connection_string>'

device_client = IoTHubDeviceClient.create_from_connection_string(connection_string)

print('Connecting')
device_client.connect()
print('Connected')

recognizer_config = SpeechConfig(subscription=api_key,
                                 region=location,
                                 speech_recognition_language=language)

recognizer = SpeechRecognizer(speech_config=recognizer_config)

def recognized(args):
    if len(args.result.text) > 0:
        message = Message(json.dumps({ 'speech': args.result.text }))
        device_client.send_message(message)

recognizer.recognized.connect(recognized)
recognizer.start_continuous_recognition()

def say(text):
    print(text)

def announce_timer(minutes, seconds):
    announcement = 'Times up on your '
    if minutes > 0:
        announcement += f'{minutes} minute '
    if seconds > 0:
        announcement += f'{seconds} second '
    announcement += 'timer.'
    say(announcement)

def create_timer(total_seconds):
    minutes, seconds = divmod(total_seconds, 60)
    threading.Timer(total_seconds, announce_timer, args=[minutes, seconds]).start()

    announcement = ''
    if minutes > 0:
        announcement += f'{minutes} minute '
    if seconds > 0:
        announcement += f'{seconds} second '
    announcement += 'timer started.'
    say(announcement)

def handle_method_request(request):
    if request.name == 'set-timer':
        payload = json.loads(request.payload)
        seconds = payload['seconds']
        if seconds > 0:
            create_timer(payload['seconds'])

    method_response = MethodResponse.create_from_method_request(request, 200)
    device_client.send_method_response(method_response)

device_client.on_method_request_received = handle_method_request

while True:
    time.sleep(1)

@ -0,0 +1,140 @@
# Text to speech - Raspberry Pi
In this part of the lesson, you will write code to convert text to speech using the speech service.
## Convert text to speech using the speech service
The text can be sent to the speech service using the REST API to get speech as an audio file that can be played back on your IoT device. When requesting speech, you need to provide the voice to use as speech can be generated using a variety of different voices.
Each language supports a range of different voices, and you can make a REST request against the speech service to get the list of supported voices for each language.
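The tasks below authenticate each REST call with a `get_access_token` helper, the same helper used for speech to text. If your app doesn't already have one, this is a minimal sketch - it assumes the `api_key` and `location` variables that are already defined in your app:
```python
# A minimal sketch of the get_access_token helper assumed by the tasks below.
# It exchanges the resource API key for a short-lived bearer token.
import requests

def get_access_token():
    headers = {
        'Ocp-Apim-Subscription-Key': api_key
    }

    token_endpoint = f'https://{location}.api.cognitive.microsoft.com/sts/v1.0/issuetoken'
    response = requests.post(token_endpoint, headers=headers)
    return str(response.text)
```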
### Task - get a voice
1. Add the following code above the `say` function to request the list of voices for a language:
```python
def get_voice():
    url = f'https://{location}.tts.speech.microsoft.com/cognitiveservices/voices/list'

    headers = {
        'Authorization': 'Bearer ' + get_access_token()
    }

    response = requests.get(url, headers=headers)
    voices_json = json.loads(response.text)

    first_voice = next(x for x in voices_json if x['Locale'].lower() == language.lower() and x['VoiceType'] == 'Neural')
    return first_voice['ShortName']

voice = get_voice()
print(f"Using voice {voice}")
```
This code defines a function called `get_voice` that uses the speech service to get a list of voices. It then finds the first voice that matches the language that is being used.
This function is then called to store the first voice, and the voice name is printed to the console. This voice can be requested once and the value used for every call to convert text to speech.
> 💁 You can get the full list of supported voices from the [Language and voice support documentation on Microsoft Docs](https://docs.microsoft.com/azure/cognitive-services/speech-service/language-support?WT.mc_id=academic-17441-jabenn#text-to-speech). If you want to use a specific voice, then you can remove this function and hard code the voice to the voice name from this documentation. For example:
>
> ```python
> voice = 'hi-IN-SwaraNeural'
> ```
### Task - convert text to speech
1. Below this, define a constant for the audio format to be retrieved from the speech services. When you request audio, you can request it in a range of different formats.
```python
playback_format = 'riff-48khz-16bit-mono-pcm'
```
The format you can use depends on your hardware. If you get `Invalid sample rate` errors when playing the audio then change this to another value. You can find the list of supported values in the [Text to speech REST API documentation on Microsoft Docs](https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-text-to-speech?WT.mc_id=academic-17441-jabenn#audio-outputs). You will need to use `riff` format audio, and the values to try are `riff-16khz-16bit-mono-pcm`, `riff-24khz-16bit-mono-pcm` and `riff-48khz-16bit-mono-pcm`.
1. Below this, declare a function called `get_speech` that will convert the text to speech using the speech service REST API:
```python
def get_speech(text):
```
1. In the `get_speech` function, define the URL to call and the headers to pass:
```python
    url = f'https://{location}.tts.speech.microsoft.com/cognitiveservices/v1'

    headers = {
        'Authorization': 'Bearer ' + get_access_token(),
        'Content-Type': 'application/ssml+xml',
        'X-Microsoft-OutputFormat': playback_format
    }
```
This sets the headers to use a generated access token, sets the content type to SSML, and defines the audio format needed.
1. Below this, define the SSML to send to the REST API:
```python
    ssml = f'<speak version=\'1.0\' xml:lang=\'{language}\'>'
    ssml += f'<voice xml:lang=\'{language}\' name=\'{voice}\'>'
    ssml += text
    ssml += '</voice>'
    ssml += '</speak>'
```
This SSML sets the language and the voice to use, along with the text to convert.
1. Finally, add code in this function to make the REST request and return the binary audio data:
```python
    response = requests.post(url, headers=headers, data=ssml.encode('utf-8'))
    return io.BytesIO(response.content)
```
### Task - play the audio
1. Below the `get_speech` function, define a new function to play the audio returned by the REST API call:
```python
def play_speech(speech):
```
1. The `speech` passed to this function will be the binary audio data returned from the REST API. Use the following code to open this as a wave file and pass it to PyAudio to play the audio:
```python
def play_speech(speech):
    with wave.open(speech, 'rb') as wave_file:
        stream = audio.open(format=audio.get_format_from_width(wave_file.getsampwidth()),
                            channels=wave_file.getnchannels(),
                            rate=wave_file.getframerate(),
                            output_device_index=speaker_card_number,
                            output=True)

        data = wave_file.readframes(4096)

        while len(data) > 0:
            stream.write(data)
            data = wave_file.readframes(4096)

        stream.stop_stream()
        stream.close()
```
This code uses a PyAudio stream, the same as capturing audio. The difference here is the stream is set as an output stream, and data is read from the audio data and pushed to the stream.
Rather than hard coding the stream details such as the sample rate, it is read from the audio data.
1. Replace the contents of the `say` function with the following:
```python
    speech = get_speech(text)
    play_speech(speech)
```
This code converts the text to speech as binary audio data, and plays the audio.
1. Run the app, and ensure the function app is also running. Set some timers, and you will hear a spoken response saying that your timer has been set, then another spoken response when the timer is complete.
If you get `Invalid sample rate` errors, change the `playback_format` as described above.
> 💁 You can find this code in the [code-spoken-response/pi](code-spoken-response/pi) folder.
😀 Your timer program was a success!

@ -0,0 +1,97 @@
# Set a timer - Virtual IoT Hardware and Raspberry Pi
In this part of the lesson, you will set a timer on your virtual IoT device or Raspberry Pi based on a command from the IoT Hub.
## Set a timer
The command sent from the serverless function contains the time for the timer in seconds as the payload. This time can be used to set a timer.
Timers can be set using the Python `threading.Timer` class. This class takes a delay time and a function, and after the delay time, the function is executed.
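For example, this minimal sketch (the delay and callback here are purely illustrative) schedules a function to run 5 seconds later without blocking the rest of the program:
```python
# A minimal sketch of threading.Timer: schedule a callback to run
# after a delay, on a background thread.
import threading

def timer_fired():
    print('The timer fired!')

threading.Timer(5, timer_fired).start()
print('Timer scheduled')  # printed immediately; timer_fired runs ~5 seconds later
```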
### Task - set a timer
1. Open the `smart-timer` project in VS Code, and ensure the virtual environment is loaded in the terminal if you are using a virtual IoT device.
1. Add the following import statement at the top of the file to import the threading Python library:
```python
import threading
```
1. Above the `handle_method_request` function, add a function to speak a response. For now this will just write to the console, but later in this lesson it will speak the text.
```python
def say(text):
    print(text)
```
1. Below this add a function that will be called by a timer to announce that the timer is complete:
```python
def announce_timer(minutes, seconds):
    announcement = 'Times up on your '
    if minutes > 0:
        announcement += f'{minutes} minute '
    if seconds > 0:
        announcement += f'{seconds} second '
    announcement += 'timer.'
    say(announcement)
```
This function takes the number of minutes and seconds for the timer, and builds a sentence to say that the timer is complete. It will check the number of minutes and seconds, and only include each time unit if it has a number. For example, if the number of minutes is 0 then only seconds are included in the message. This sentence is then sent to the `say` function.
1. Below this, add the following `create_timer` function to create a timer:
```python
def create_timer(total_seconds):
    minutes, seconds = divmod(total_seconds, 60)
    threading.Timer(total_seconds, announce_timer, args=[minutes, seconds]).start()
```
This function takes the total number of seconds for the timer that will be sent in the command, and converts this to minutes and seconds. It then creates and starts a timer object using the total number of seconds, passing in the `announce_timer` function and a list containing the minutes and seconds. When the timer elapses, it will call the `announce_timer` function, and pass the contents of this list as the parameters - so the first item in the list gets passed as the `minutes` parameter, and the second item as the `seconds` parameter.
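As a quick illustration of the `divmod` step, a 64 second timer splits into 1 minute and 4 seconds:
```python
# divmod returns the quotient and remainder as a pair
minutes, seconds = divmod(64, 60)
print(minutes, seconds)  # 1 4
```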
1. To the end of the `create_timer` function, add some code to build a message to be spoken to the user to announce that the timer is starting:
```python
    announcement = ''
    if minutes > 0:
        announcement += f'{minutes} minute '
    if seconds > 0:
        announcement += f'{seconds} second '
    announcement += 'timer started.'
    say(announcement)
```
Again, this only includes the time unit that has a value. This sentence is then sent to the `say` function.
1. At the start of the `handle_method_request` function, add the following code to check that the `set-timer` direct method was requested:
```python
    if request.name == 'set-timer':
```
1. Inside this `if` statement, extract the timer time in seconds from the payload and use this to create a timer:
```python
        payload = json.loads(request.payload)
        seconds = payload['seconds']
        if seconds > 0:
            create_timer(payload['seconds'])
```
The timer is only created if the number of seconds is greater than 0.
1. Run the app, and ensure the function app is also running. Set some timers, and the output will show the timer being set, and then will show when it elapses:
```output
pi@raspberrypi:~/smart-timer $ python3 app.py
Connecting
Connected
Set a one minute 4 second timer.
1 minute 4 second timer started.
Times up on your 1 minute 4 second timer.
```
> 💁 You can find this code in the [code-timer/pi](code-timer/pi) or [code-timer/virtual-iot-device](code-timer/virtual-iot-device) folder.
😀 Your timer program was a success!

@ -0,0 +1,72 @@
# Text to speech - Virtual IoT device
In this part of the lesson, you will write code to convert text to speech using the speech service.
## Convert text to speech
The speech services SDK that you used in the last lesson to convert speech to text can be used to convert text back to speech. When requesting speech, you need to provide the voice to use as speech can be generated using a variety of different voices.
Each language supports a range of different voices, and you can get the list of supported voices for each language from the speech services SDK.
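If you want to try the synthesizer on its own before wiring it into the timer app, the following is a minimal sketch - the placeholder values are assumptions you'd replace with your own speech resource details:
```python
# A minimal, standalone text to speech sketch using the speech SDK - not part
# of the timer app. With no audio config given, the SDK plays the generated
# speech through the default speaker.
from azure.cognitiveservices.speech import SpeechConfig, SpeechSynthesizer

api_key = '<key>'
location = '<location>'
language = '<language>'

speech_config = SpeechConfig(subscription=api_key, region=location)
speech_config.speech_synthesis_language = language

speech_synthesizer = SpeechSynthesizer(speech_config=speech_config)
speech_synthesizer.speak_text('Hello from the speech service')
```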
### Task - convert text to speech
1. Import the `SpeechSynthesizer` from the `azure.cognitiveservices.speech` package by adding it to the existing imports:
```python
from azure.cognitiveservices.speech import SpeechConfig, SpeechRecognizer, SpeechSynthesizer
```
1. Above the `say` function, create a speech configuration to use with the speech synthesizer:
```python
speech_config = SpeechConfig(subscription=api_key,
                             region=location)
speech_config.speech_synthesis_language = language
speech_synthesizer = SpeechSynthesizer(speech_config=speech_config)
```
This uses the same API key, location and language that was used by the recognizer.
1. Below this, add the following code to get a voice and set it on the speech config:
```python
voices = speech_synthesizer.get_voices_async().get().voices
first_voice = next(x for x in voices if x.locale.lower() == language.lower())
speech_config.speech_synthesis_voice_name = first_voice.short_name
```
This retrieves a list of all the available voices, then finds the first voice that matches the language that is being used.
> 💁 You can get the full list of supported voices from the [Language and voice support documentation on Microsoft Docs](https://docs.microsoft.com/azure/cognitive-services/speech-service/language-support?WT.mc_id=academic-17441-jabenn#text-to-speech). If you want to use a specific voice, then you can remove this code and hard code the voice to the voice name from this documentation. For example:
>
> ```python
> speech_config.speech_synthesis_voice_name = 'hi-IN-SwaraNeural'
> ```
1. Update the contents of the `say` function to generate SSML for the response:
```python
    ssml = f'<speak version=\'1.0\' xml:lang=\'{language}\'>'
    ssml += f'<voice xml:lang=\'{language}\' name=\'{first_voice.short_name}\'>'
    ssml += text
    ssml += '</voice>'
    ssml += '</speak>'
```
1. Below this, stop the speech recognition, speak the SSML, then start the recognition again:
```python
    recognizer.stop_continuous_recognition()
    speech_synthesizer.speak_ssml(ssml)
    recognizer.start_continuous_recognition()
```
The recognition is stopped whilst the text is spoken to avoid the announcement of the timer starting being detected, sent to LUIS and possibly interpreted as a request to set a new timer.
> 💁 You can test this out by commenting out the lines to stop and restart the recognition. Set one timer, and you may find the announcement sets a new timer, which causes a new announcement, leading to a new timer, and so on for ever!
1. Run the app, and ensure the function app is also running. Set some timers, and you will hear a spoken response saying that your timer has been set, then another spoken response when the timer is complete.
> 💁 You can find this code in the [code-spoken-response/virtual-iot-device](code-spoken-response/virtual-iot-device) folder.
😀 Your timer program was a success!

@ -0,0 +1,3 @@
# Set a timer - Wio Terminal
Coming soon

@ -0,0 +1,3 @@
# Text to speech - Wio Terminal
Coming soon

@ -2,7 +2,9 @@
 Add a sketchnote if possible/appropriate
-![Embed a video here if available](video-url)
+This video gives an overview of the Azure speech services, covering speech to text and text to speech from earlier lessons, as well as translating speech, a topic covered in this lesson:
+[![Recognizing speech with a few lines of Python from Microsoft Build 2020](https://img.youtube.com/vi/h6xbpMPSGEA/0.jpg)](https://www.youtube.com/watch?v=h6xbpMPSGEA)
 ## Pre-lecture quiz

@ -20,7 +20,9 @@ The projects cover the journey of food from farm to table. This includes farming
 ![A road map for the course showing 24 lessons covering intro, farming, transport, processing, retail and cooking](sketchnotes/Roadmap.png)
-**Hearty thanks to our authors [Jen Fox](https://github.com/jenfoxbot), [Jen Looper](https://github.com/jlooper), [Jim Bennett](https://github.com/jimbobbennett), and sketchnote artist [Nitya Narasimhan](https://github.com/nitya)**
+**Hearty thanks to our authors [Jen Fox](https://github.com/jenfoxbot), [Jen Looper](https://github.com/jlooper), [Jim Bennett](https://github.com/jimbobbennett), and our sketchnote artist [Nitya Narasimhan](https://github.com/nitya).**
+**Thanks as well to our team of [Microsoft Learn Student Ambassadors](https://studentambassadors.microsoft.com?WT.mc_id=academic-17441-jabenn) who have been reviewing and translating this curriculum - [Bhavesh Suneja](https://github.com/EliteWarrior315), [Lateefah Bello](https://www.linkedin.com/in/lateefah-bello/), [Manvi Jha](https://github.com/Severus-Matthew), [Mireille Tan](https://www.linkedin.com/in/mireille-tan-a4834819a/), [Mohammad Iftekher (Iftu) Ebne Jalal](https://github.com/Iftu119), [Priyanshu Srivastav](https://www.linkedin.com/in/priyanshu-srivastav-b067241ba), and [Zina Kamel](https://www.linkedin.com/in/zina-kamel/).**
 > **Teachers**, we have [included some suggestions](for-teachers.md) on how to use this curriculum. If you would like to create your own lessons, we have also included a [lesson template](lesson-template/README.md).

@ -46,7 +46,7 @@ All the device code for Raspberry Pi is in Python. To complete all the assignmen
 These are specific to using the Raspberry Pi, and are not relevant to using the Arduino device.
-* [Grove Pi base hat](https://wiki.seeedstudio.com/Grove_Base_Hat_for_Raspberry_Pi)
+* [Grove Pi base hat](https://www.seeedstudio.com/Grove-Base-Hat-for-Raspberry-Pi.html)
 * [Raspberry Pi Camera module](https://www.raspberrypi.org/products/camera-module-v2/)
 * Microphone and speaker:

Binary file not shown.

Binary file not shown.
