Lesson 16 (#62)
* Adding content
* Update en.json
* Update README.md
* Update TRANSLATIONS.md
* Adding lesson templates
* Fixing code files with each other's code in
* Update README.md
* Adding lesson 16
* Adding virtual camera
* Adding Wio Terminal camera capture
* Adding Wio Terminal code
* Adding SBC classification to lesson 16
* Adding challenge, review and assignment
@@ -1,9 +1,13 @@

# Respond to classification results

## Instructions

Your device has classified images, and has the values for the predictions. Your device could use this information to do something - it could send it to an IoT Hub for processing by other systems, or it could control an actuator such as an LED to light up when the fruit is unripe.

Add code to your device to respond in a way of your choosing - either send data to an IoT Hub, control an actuator, or combine the two and send data to an IoT Hub with some serverless code that determines if the fruit is ripe or not and sends back a command to control an actuator.
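For example, a response could be driven by the prediction with the highest probability. The sketch below is a minimal, hardware-free illustration of that idea - the tag names, threshold, and action names are assumptions to adapt to your own model, not the official solution:

```python
# A minimal sketch of responding to classification results.
# Assumptions: predictions are (tag_name, probability) pairs like the
# values printed from results.predictions, and 'unripe' is one of your tags.

def pick_action(predictions, alert_tag='unripe', threshold=0.5):
    """Return an action name based on the most probable tag."""
    tag_name, probability = max(predictions, key=lambda p: p[1])
    if tag_name == alert_tag and probability >= threshold:
        return 'light-led'   # e.g. turn on an LED via an actuator
    return 'send-telemetry'  # e.g. just report the result to an IoT Hub

predictions = [('ripe', 0.12), ('unripe', 0.88)]
print(pick_action(predictions))  # → light-led
```

In a real device, the `light-led` branch would drive your actuator code, and the `send-telemetry` branch would call your IoT Hub send code.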

## Rubric

| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| Respond to predictions | Was able to implement a response to predictions that works consistently with predictions of the same value. | Was able to implement a response that is not dependent on the predictions, such as just sending raw data to an IoT Hub | Was unable to program the device to respond to the predictions |
@@ -0,0 +1,16 @@

import io
import time
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0

time.sleep(2)

image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)

with open('image.jpg', 'wb') as image_file:
    image_file.write(image.read())
@@ -0,0 +1,16 @@

from counterfit_connection import CounterFitConnection
CounterFitConnection.init('127.0.0.1', 5000)

import io
from counterfit_shims_picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0

image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)

with open('image.jpg', 'wb') as image_file:
    image_file.write(image.read())
@@ -0,0 +1,5 @@

.pio
.vscode/.browse.c_cpp.db*
.vscode/c_cpp_properties.json
.vscode/launch.json
.vscode/ipch
@@ -0,0 +1,7 @@

{
    // See http://go.microsoft.com/fwlink/?LinkId=827846
    // for the documentation about the extensions.json format
    "recommendations": [
        "platformio.platformio-ide"
    ]
}
@@ -0,0 +1,39 @@

This directory is intended for project header files.

A header file is a file containing C declarations and macro definitions
to be shared between several project source files. You request the use of a
header file in your project source file (C, C++, etc) located in the `src` folder
by including it, with the C preprocessing directive `#include`.

```c
/* src/main.c */

#include "header.h"

int main (void)
{
    ...
}
```

Including a header file produces the same results as copying the header file
into each source file that needs it. Such copying would be time-consuming
and error-prone. With a header file, the related declarations appear
in only one place. If they need to be changed, they can be changed in one
place, and programs that include the header file will automatically use the
new version when next recompiled. The header file eliminates the labor of
finding and changing all the copies as well as the risk that a failure to
find one copy will result in inconsistencies within a program.

In C, the usual convention is to give header files names that end with `.h`.
It is most portable to use only letters, digits, dashes, and underscores in
header file names, and at most one dot.

Read more about using header files in the official GCC documentation:

* Include Syntax
* Include Operation
* Once-Only Headers
* Computed Includes

https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html
@@ -0,0 +1,46 @@

This directory is intended for project specific (private) libraries.
PlatformIO will compile them to static libraries and link them into the executable file.

The source code of each library should be placed in its own separate directory
("lib/your_library_name/[here are source files]").

For example, see the structure of the following two libraries `Foo` and `Bar`:

|--lib
|  |
|  |--Bar
|  |  |--docs
|  |  |--examples
|  |  |--src
|  |     |- Bar.c
|  |     |- Bar.h
|  |  |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
|  |
|  |--Foo
|  |  |- Foo.c
|  |  |- Foo.h
|  |
|  |- README --> THIS FILE
|
|- platformio.ini
|--src
   |- main.c

and the contents of `src/main.c`:

```c
#include <Foo.h>
#include <Bar.h>

int main (void)
{
    ...
}
```

PlatformIO Library Dependency Finder will automatically find dependent
libraries by scanning project source files.

More information about PlatformIO Library Dependency Finder
- https://docs.platformio.org/page/librarymanager/ldf.html
@@ -0,0 +1,24 @@

; PlatformIO Project Configuration File
;
;   Build options: build flags, source filter
;   Upload options: custom upload port, speed and extra flags
;   Library options: dependencies, extra library storages
;   Advanced options: extra scripting
;
; Please visit documentation for the other options and examples
; https://docs.platformio.org/page/projectconf.html

[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
    seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
    seeed-studio/Seeed Arduino FS @ 2.0.2
    seeed-studio/Seeed Arduino SFUD @ 2.0.1
    seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
    seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
    seeed-studio/Seeed Arduino RTC @ 2.0.0
build_flags =
    -DARDUCAM_SHIELD_V2
    -DOV2640_CAM
@@ -0,0 +1,160 @@

#pragma once

#include <ArduCAM.h>
#include <Wire.h>

class Camera
{
public:
    Camera(int format, int image_size) : _arducam(OV2640, PIN_SPI_SS)
    {
        _format = format;
        _image_size = image_size;
    }

    bool init()
    {
        // Reset the CPLD
        _arducam.write_reg(0x07, 0x80);
        delay(100);

        _arducam.write_reg(0x07, 0x00);
        delay(100);

        // Check if the ArduCAM SPI bus is OK
        _arducam.write_reg(ARDUCHIP_TEST1, 0x55);
        if (_arducam.read_reg(ARDUCHIP_TEST1) != 0x55)
        {
            return false;
        }

        // Change MCU mode
        _arducam.set_mode(MCU2LCD_MODE);

        uint8_t vid, pid;

        // Check if the camera module type is OV2640
        _arducam.wrSensorReg8_8(0xff, 0x01);
        _arducam.rdSensorReg8_8(OV2640_CHIPID_HIGH, &vid);
        _arducam.rdSensorReg8_8(OV2640_CHIPID_LOW, &pid);
        if ((vid != 0x26) || ((pid != 0x41) && (pid != 0x42)))
        {
            return false;
        }

        _arducam.set_format(_format);
        _arducam.InitCAM();
        _arducam.OV2640_set_JPEG_size(_image_size);
        _arducam.OV2640_set_Light_Mode(Auto);
        _arducam.OV2640_set_Special_effects(Normal);
        delay(1000);

        return true;
    }

    void startCapture()
    {
        _arducam.flush_fifo();
        _arducam.clear_fifo_flag();
        _arducam.start_capture();
    }

    bool captureReady()
    {
        return _arducam.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK);
    }

    bool readImageToBuffer(byte **buffer, uint32_t &buffer_length)
    {
        if (!captureReady()) return false;

        // Get the image file length
        uint32_t length = _arducam.read_fifo_length();
        buffer_length = length;

        if (length >= MAX_FIFO_SIZE)
        {
            return false;
        }
        if (length == 0)
        {
            return false;
        }

        // Create the buffer
        byte *buf = new byte[length];

        uint8_t temp = 0, temp_last = 0;
        int i = 0;
        uint32_t buffer_pos = 0;
        bool is_header = false;

        _arducam.CS_LOW();
        _arducam.set_fifo_burst();

        while (length--)
        {
            temp_last = temp;
            // Read JPEG data from the FIFO
            temp = SPI.transfer(0x00);

            // If the JPEG end marker (0xFFD9) is found, write it and release the SPI bus
            if ((temp == 0xD9) && (temp_last == 0xFF))
            {
                buf[buffer_pos] = temp;

                buffer_pos++;
                i++;

                _arducam.CS_HIGH();
            }
            if (is_header == true)
            {
                // Write image data to the buffer if it is not full
                if (i < 256)
                {
                    buf[buffer_pos] = temp;
                    buffer_pos++;
                    i++;
                }
                else
                {
                    _arducam.CS_HIGH();

                    i = 0;
                    buf[buffer_pos] = temp;

                    buffer_pos++;
                    i++;

                    _arducam.CS_LOW();
                    _arducam.set_fifo_burst();
                }
            }
            else if ((temp == 0xD8) && (temp_last == 0xFF))
            {
                // The JPEG start marker (0xFFD8) was found
                is_header = true;

                buf[buffer_pos] = temp_last;
                buffer_pos++;
                i++;

                buf[buffer_pos] = temp;
                buffer_pos++;
                i++;
            }
        }

        _arducam.clear_fifo_flag();

        _arducam.set_format(_format);
        _arducam.InitCAM();
        _arducam.OV2640_set_JPEG_size(_image_size);

        // Return the buffer
        *buffer = buf;

        return true;
    }

private:
    ArduCAM _arducam;
    int _format;
    int _image_size;
};
@@ -0,0 +1,9 @@

#pragma once

#include <string>

using namespace std;

// WiFi credentials
const char *SSID = "<SSID>";
const char *PASSWORD = "<PASSWORD>";
@@ -0,0 +1,112 @@

#include <Arduino.h>
#include <rpcWiFi.h>
#include "SD/Seeed_SD.h"
#include <Seeed_FS.h>
#include <SPI.h>

#include "config.h"
#include "camera.h"

Camera camera = Camera(JPEG, OV2640_640x480);

void setupCamera()
{
    pinMode(PIN_SPI_SS, OUTPUT);
    digitalWrite(PIN_SPI_SS, HIGH);

    Wire.begin();
    SPI.begin();

    if (!camera.init())
    {
        Serial.println("Error setting up the camera!");
    }
}

void connectWiFi()
{
    while (WiFi.status() != WL_CONNECTED)
    {
        Serial.println("Connecting to WiFi..");
        WiFi.begin(SSID, PASSWORD);
        delay(500);
    }

    Serial.println("Connected!");
}

void setupSDCard()
{
    while (!SD.begin(SDCARD_SS_PIN, SDCARD_SPI))
    {
        Serial.println("SD Card Error");
    }
}

void setup()
{
    Serial.begin(9600);

    while (!Serial)
        ; // Wait for Serial to be ready

    delay(1000);

    connectWiFi();

    setupCamera();

    pinMode(WIO_KEY_C, INPUT_PULLUP);

    setupSDCard();
}

int fileNum = 1;

void saveToSDCard(byte *buffer, uint32_t length)
{
    char buff[16];
    sprintf(buff, "%d.jpg", fileNum);
    fileNum++;

    File outFile = SD.open(buff, FILE_WRITE);
    outFile.write(buffer, length);
    outFile.close();

    Serial.print("Image written to file ");
    Serial.println(buff);
}

void buttonPressed()
{
    camera.startCapture();

    while (!camera.captureReady())
        delay(100);

    Serial.println("Image captured");

    byte *buffer;
    uint32_t length;

    if (camera.readImageToBuffer(&buffer, length))
    {
        Serial.print("Image read to buffer with length ");
        Serial.println(length);

        saveToSDCard(buffer, length);

        delete[] buffer;
    }
}

void loop()
{
    if (digitalRead(WIO_KEY_C) == LOW)
    {
        buttonPressed();
        delay(2000);
    }

    delay(200);
}
@@ -0,0 +1,11 @@

This directory is intended for PlatformIO Unit Testing and project tests.

Unit Testing is a software testing method by which individual units of
source code, sets of one or more MCU program modules together with associated
control data, usage procedures, and operating procedures, are tested to
determine whether they are fit for use. Unit testing finds problems early
in the development cycle.

More information about PlatformIO Unit Testing:
- https://docs.platformio.org/page/plus/unit-testing.html
@@ -0,0 +1,36 @@

import io
import time
from picamera import PiCamera

from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0

time.sleep(2)

image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)

with open('image.jpg', 'wb') as image_file:
    image_file.write(image.read())

prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'

parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]

prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)

image.seek(0)
results = predictor.classify_image(project_id, iteration_name, image)

for prediction in results.predictions:
    print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
@@ -0,0 +1,36 @@

from counterfit_connection import CounterFitConnection
CounterFitConnection.init('127.0.0.1', 5000)

import io
from counterfit_shims_picamera import PiCamera

from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0

image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)

with open('image.jpg', 'wb') as image_file:
    image_file.write(image.read())

prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'

parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]

prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)

image.seek(0)
results = predictor.classify_image(project_id, iteration_name, image)

for prediction in results.predictions:
    print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
@@ -0,0 +1,133 @@

# Capture an image - Raspberry Pi

In this part of the lesson, you will add a camera sensor to your Raspberry Pi, and read images from it.

## Hardware

The Raspberry Pi needs a camera.

The camera you'll use is a [Raspberry Pi Camera Module](https://www.raspberrypi.org/products/camera-module-v2/). This camera is designed to work with the Raspberry Pi and connects via a dedicated connector on the Pi.

> 💁 This camera uses the [Camera Serial Interface, a protocol from the Mobile Industry Processor Interface Alliance](https://wikipedia.org/wiki/Camera_Serial_Interface), known as MIPI-CSI. This is a dedicated protocol for sending images.

## Connect the camera

The camera can be connected to the Raspberry Pi using a ribbon cable.

### Task - connect the camera

![A Raspberry Pi Camera](../../../images/pi-camera-module.png)

1. Power off the Pi.

1. Connect the ribbon cable that comes with the camera to the camera. To do this, pull gently on the black plastic clip in the holder so that it comes out a little bit, then slide the cable into the socket, with the blue side facing away from the lens and the metal pin strips facing towards the lens. Once it is all the way in, push the black plastic clip back into place.

    You can find an animation showing how to open the clip and insert the cable on the [Raspberry Pi Getting Started with the Camera module documentation](https://projects.raspberrypi.org/en/projects/getting-started-with-picamera/2).

    ![The ribbon cable inserted into the camera module](../../../images/pi-camera-ribbon-cable.png)

1. Remove the Grove Base Hat from the Pi.

1. Pass the ribbon cable through the camera slot in the Grove Base Hat. Make sure the blue side of the cable faces towards the analog ports labelled **A0**, **A1** etc.

    ![The ribbon cable passing through the grove base hat](../../../images/grove-base-hat-ribbon-cable.png)

1. Insert the ribbon cable into the camera port on the Pi. Once again, pull the black plastic clip up, insert the cable, then push the clip back in. The blue side of the cable should face the USB and Ethernet ports.

    ![The ribbon cable connected to the camera socket on the Pi](../../../images/pi-camera-socket-ribbon-cable.png)

1. Refit the Grove Base Hat.

## Program the camera

The Raspberry Pi can now be programmed to use the camera using the [PiCamera](https://pypi.org/project/picamera/) Python library.

### Task - program the camera

Program the device.

1. Power up the Pi and wait for it to boot.

1. Launch VS Code, either directly on the Pi, or connect via the Remote SSH extension.

1. By default the camera socket on the Pi is turned off. You can turn it on by running the following commands from your terminal:

    ```sh
    sudo raspi-config nonint do_camera 0
    sudo reboot
    ```

    This will toggle a setting to enable the camera, then reboot the Pi to make that setting take effect. Wait for the Pi to reboot, then re-launch VS Code.

1. From the terminal, create a new folder in the `pi` user's home directory called `fruit-quality-detector`. Create a file in this folder called `app.py`.

1. Open this folder in VS Code.

1. To interact with the camera, you can use the PiCamera Python library. Install the Pip package for this with the following command:

    ```sh
    pip3 install picamera
    ```

1. Add the following code to your `app.py` file:

    ```python
    import io
    import time
    from picamera import PiCamera
    ```

    This code imports some libraries needed, including the `PiCamera` library.

1. Add the following code below this to initialize the camera:

    ```python
    camera = PiCamera()
    camera.resolution = (640, 480)
    camera.rotation = 0

    time.sleep(2)
    ```

    This code creates a PiCamera object and sets the resolution to 640x480. Although higher resolutions are supported (up to 3280x2464), the image classifier works on much smaller images (227x227), so there is no need to capture and send larger images.

    The `camera.rotation = 0` line sets the rotation of the image. The ribbon cable comes into the bottom of the camera, but if your camera was rotated to make it easier to point at the item you want to classify, then you can change this line to the number of degrees of rotation.

    ![The camera hanging down over a drink can](../../../images/pi-camera-upside-down.png)

    For example, if you suspend the ribbon cable over something so that it is at the top of the camera, then set the rotation to be 180:

    ```python
    camera.rotation = 180
    ```

    The camera takes a few seconds to start up, hence the `time.sleep(2)`.

1. Add the following code below this to capture the image as binary data:

    ```python
    image = io.BytesIO()
    camera.capture(image, 'jpeg')
    image.seek(0)
    ```

    This code creates a `BytesIO` object to store binary data. The image is read from the camera as a JPEG file and stored in this object. This object has a position indicator to know where it is in the data so that more data can be written to the end if needed, so the `image.seek(0)` line moves this position back to the start so that all the data can be read later.

1. Below this, add the following to save the image to a file:

    ```python
    with open('image.jpg', 'wb') as image_file:
        image_file.write(image.read())
    ```

    This code opens a file called `image.jpg` for writing, then reads all the data from the `BytesIO` object and writes that to the file.

    > 💁 You can capture the image directly to a file instead of a `BytesIO` object by passing the file name to the `camera.capture` call. The reason for using the `BytesIO` object is so that later in this lesson you can send the image to your image classifier.

1. Point the camera at something and run this code.

1. An image will be captured and saved as `image.jpg` in the current folder. You will see this file in the VS Code explorer. Select the file to view the image. If it needs rotation, update the `camera.rotation = 0` line as necessary and take another picture.

> 💁 You can find this code in the [code-camera/pi](code-camera/pi) folder.

😀 Your camera program was a success!
@@ -0,0 +1,91 @@

# Classify an image - Virtual IoT Hardware and Raspberry Pi

In this part of the lesson, you will send the image captured by the camera to the Custom Vision service to classify it.

## Send images to Custom Vision

The Custom Vision service has a Python SDK you can use to classify images.

### Task - send images to Custom Vision

1. Open the `fruit-quality-detector` folder in VS Code. If you are using a virtual IoT device, make sure the virtual environment is running in the terminal.

1. The Python SDK to send images to Custom Vision is available as a Pip package. Install it with the following command:

    ```sh
    pip3 install azure-cognitiveservices-vision-customvision
    ```

1. Add the following import statements at the top of the `app.py` file:

    ```python
    from msrest.authentication import ApiKeyCredentials
    from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
    ```

    This brings in some modules from the Custom Vision libraries, one to authenticate with the prediction key, and one to provide a prediction client class that can call Custom Vision.

1. Add the following code to the end of the file:

    ```python
    prediction_url = '<prediction_url>'
    prediction_key = '<prediction key>'
    ```

    Replace `<prediction_url>` with the URL you copied from the *Prediction URL* dialog earlier in this lesson. Replace `<prediction key>` with the prediction key you copied from the same dialog.

1. The prediction URL that was provided by the *Prediction URL* dialog is designed to be used when calling the REST endpoint directly. The Python SDK uses parts of the URL in different places. Add the following code to break apart this URL into the parts needed:

    ```python
    parts = prediction_url.split('/')
    endpoint = 'https://' + parts[2]
    project_id = parts[6]
    iteration_name = parts[9]
    ```

    This splits the URL, extracting the endpoint of `https://<location>.api.cognitive.microsoft.com`, the project ID, and the name of the published iteration.
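To see how the split indexes line up, here is the same code run against a hypothetical prediction URL - the location, project ID and iteration name below are made up for illustration; your real URL comes from the *Prediction URL* dialog:

```python
# A hypothetical prediction URL, following the shape described above
prediction_url = 'https://westus2.api.cognitive.microsoft.com/customvision/v3.0/Prediction/0dd3299b/classify/iterations/Iteration1/image'

parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]   # https://westus2.api.cognitive.microsoft.com
project_id = parts[6]              # 0dd3299b
iteration_name = parts[9]          # Iteration1

print(endpoint, project_id, iteration_name)
```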

1. Create a predictor object to perform the prediction with the following code:

    ```python
    prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
    predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)
    ```

    The `prediction_credentials` wrap the prediction key. These are then used to create a prediction client object pointing at the endpoint.

1. Send the image to Custom Vision using the following code:

    ```python
    image.seek(0)
    results = predictor.classify_image(project_id, iteration_name, image)
    ```

    This rewinds the image back to the start, then sends it to the prediction client.

1. Finally, show the results with the following code:

    ```python
    for prediction in results.predictions:
        print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
    ```

    This will loop through all the predictions that have been returned and show them on the terminal. The probabilities returned are floating point numbers from 0-1, with 0 being a 0% chance of matching the tag, and 1 being a 100% chance.

    > 💁 Image classifiers will return the percentages for all tags that have been used. Each tag will have a probability that the image matches that tag.
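If you only care about the most likely tag - for example, to drive an actuator - you can pick it out with `max`. This is an optional aside, not one of the lesson steps; the small class below just stands in for the prediction objects the SDK returns:

```python
# Stand-in for the prediction objects returned by the SDK - each has
# a tag_name and a probability, just like the items in results.predictions
class Prediction:
    def __init__(self, tag_name, probability):
        self.tag_name = tag_name
        self.probability = probability

predictions = [Prediction('ripe', 0.5684), Prediction('unripe', 0.4316)]

# The prediction with the highest probability
top = max(predictions, key=lambda p: p.probability)
print(f'Best match: {top.tag_name} at {top.probability * 100:.2f}%')
# → Best match: ripe at 56.84%
```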

1. Run your code, with your camera pointing at some fruit, or an appropriate image set, or fruit visible on your webcam if using virtual IoT hardware. You will see the output in the console:

    ```output
    (.venv) ➜  fruit-quality-detector python app.py
    ripe:   56.84%
    unripe: 43.16%
    ```

    You will be able to see the image that was taken, and these values in the **Predictions** tab in Custom Vision.

    ![A banana in custom vision predicted ripe at 56.8% and unripe at 43.1%](../../../images/custom-vision-banana-prediction.png)

> 💁 You can find this code in the [code-classify/pi](code-classify/pi) or [code-classify/virtual-device](code-classify/virtual-device) folder.

😀 Your image classifier program was a success!
@ -0,0 +1,112 @@
|
||||
# Capture an image - Virtual IoT Hardware
|
||||
|
||||
In this part of the lesson, you will add a camera sensor to your yirtual IoT device, and read images from it.
|
||||
|
||||
## Hardware
|
||||
|
||||
The virtual IoT device will use a simulated camera that sends either images from files, or from your webcam.
|
||||
|
||||
### Add the camera to CounterFit
|
||||
|
||||
To use a virtual camera, you need to add one to the CounterFit app
|
||||
|
||||
#### Task - add the camera to CounterFit
|
||||
|
||||
Add the Camera to the CounterFit app.
|
||||
|
||||
1. Create a new Python app on your computer in a folder called `fruit-quality-detector` with a single file called `app.py` and a Python virtual environment, and add the CounterFit pip packages.
|
||||
|
||||
> ⚠️ You can refer to [the instructions for creating and setting up a CounterFit Python project in lesson 1 if needed](../../../1-getting-started/lessons/1-introduction-to-iot/virtual-device.md).
|
||||
|
||||
1. Install an additional Pip package to install a CounterFit shim that can talk to Camera sensors by simulating some of the [Picamera Pip package](https://pypi.org/project/picamera/). Make sure you are installing this from a terminal with the virtual environment activated.
|
||||
|
||||
```sh
|
||||
pip install counterfit-shims-picamera
|
||||
```
|
||||
|
||||
1. Make sure the CounterFit web app is running
|
||||
|
||||
1. Create a camera:
|
||||
|
||||
1. In the *Create sensor* box in the *Sensors* pane, drop down the *Sensor type* box and select *Camera*.
|
||||
|
||||
1. Set the *Name* to `Picamera`
|
||||
|
||||
1. Select the **Add** button to create the camera
|
||||
|
||||
![The camera settings](../../../images/counterfit-create-camera.png)
|
||||
|
||||
The camera will be created and appear in the sensors list.
|
||||
|
||||
![The camera created](../../../images/counterfit-camera.png)
|
||||
|
||||
## Program the camera
|
||||
|
||||
The virtual IoT device can now be programmed to use the virtual camera.
|
||||
|
||||
### Task - program the camera
|
||||
|
||||
Program the device.
|
||||
|
||||
1. Make sure the `fruit-quality-detector` app is open in VS Code
|
||||
|
||||
1. Open the `app.py` file
|
||||
|
||||
1. Add the following code to the top of `app.py` to connect the app to CounterFit:
|
||||
|
||||
```python
|
||||
from counterfit_connection import CounterFitConnection
|
||||
CounterFitConnection.init('127.0.0.1', 5000)
```

1. Add the following code to your `app.py` file:

    ```python
    import io
    from counterfit_shims_picamera import PiCamera
    ```

    This code imports some needed libraries, including the `PiCamera` class from the counterfit_shims_picamera library.

1. Add the following code below this to initialize the camera:

    ```python
    camera = PiCamera()
    camera.resolution = (640, 480)
    camera.rotation = 0
    ```

    This code creates a `PiCamera` object and sets the resolution to 640x480. Although higher resolutions are supported, the image classifier works on much smaller images (227x227), so there is no need to capture and send larger images.

    The `camera.rotation = 0` line sets the rotation of the image in degrees. If you need to rotate the image from the webcam or the file, set this as appropriate. For example, if you want to change the image of a banana on a webcam in landscape mode to be portrait, set `camera.rotation = 90`.

1. Add the following code below this to capture the image as binary data:

    ```python
    image = io.BytesIO()
    camera.capture(image, 'jpeg')
    image.seek(0)
    ```

    This code creates a `BytesIO` object to store binary data. The image is read from the camera as a JPEG file and stored in this object. The object has a position indicator to track where it is in the data, so that more data can be written to the end if needed. The `image.seek(0)` line moves this position back to the start so that all the data can be read later.

1. Below this, add the following to save the image to a file:

    ```python
    with open('image.jpg', 'wb') as image_file:
        image_file.write(image.read())
    ```

    This code opens a file called `image.jpg` for writing, then reads all the data from the `BytesIO` object and writes it to the file.

    > 💁 You can capture the image directly to a file instead of a `BytesIO` object by passing the file name to the `camera.capture` call. The reason for using the `BytesIO` object is so that later in this lesson you can send the image to your image classifier.

1. Configure the image that the camera in CounterFit will capture. You can either set the *Source* to *File*, then upload an image file, or set the *Source* to *WebCam*, and images will be captured from your web cam. Make sure you select the **Set** button after selecting a picture or selecting your webcam.

    ![CounterFit with a file set as the image source, and a web cam set showing a person holding a banana in a preview of the webcam](../../../images/counterfit-camera-options.png)

1. An image will be captured and saved as `image.jpg` in the current folder. You will see this file in the VS Code explorer. Select the file to view the image. If it needs rotation, update the `camera.rotation = 0` line as necessary and take another picture.

> 💁 You can find this code in the [code-camera/virtual-iot-device](code-camera/virtual-iot-device) folder.

😀 Your camera program was a success!
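The `BytesIO` position-indicator behavior described above can be demonstrated without any camera hardware. This is a minimal sketch where a few hand-written bytes stand in for real JPEG data:

```python
import io

# Simulate what camera.capture does: write JPEG-like bytes into a buffer.
image = io.BytesIO()
image.write(b'\xff\xd8 fake jpeg data \xff\xd9')

# After writing, the position indicator sits at the end of the data,
# so reading at this point would return nothing.
print(image.tell())

# Move the position back to the start so all the data can be read.
image.seek(0)
data = image.read()

# Write the buffered bytes out to a file, as the lesson code does.
with open('image.jpg', 'wb') as image_file:
    image_file.write(data)
```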
# Capture an image - Wio Terminal

In this part of the lesson, you will add a camera to your Wio Terminal, and capture images from it.

## Hardware

The Wio Terminal needs a camera.

The camera you'll use is an [ArduCam Mini 2MP Plus](https://www.arducam.com/product/arducam-2mp-spi-camera-b0067-arduino/). This is a 2-megapixel camera based on the OV2640 image sensor. It communicates over an SPI interface to capture images, and uses I<sup>2</sup>C to configure the sensor.

## Connect the camera

The ArduCam doesn't have a Grove socket; instead it connects to both the SPI and I<sup>2</sup>C buses via the GPIO pins on the Wio Terminal.

### Task - connect the camera

Connect the camera.

![An ArduCam sensor](../../../images/arducam.png)

1. The pins on the base of the ArduCam need to be connected to the GPIO pins on the Wio Terminal. To make it easier to find the right pins, attach the GPIO pin sticker that comes with the Wio Terminal around the pins:

    ![The wio terminal with the GPIO pin sticker on](../../../images/wio-terminal-pin-sticker.png)

1. Using jumper wires, make the following connections:

    | ArduCAM pin | Wio Terminal pin | Description                             |
    | ----------- | ---------------- | --------------------------------------- |
    | CS          | 24 (SPI_CS)      | SPI Chip Select                         |
    | MOSI        | 19 (SPI_MOSI)    | SPI Controller Output, Peripheral Input |
    | MISO        | 21 (SPI_MISO)    | SPI Controller Input, Peripheral Output |
    | SCK         | 23 (SPI_SCLK)    | SPI Serial Clock                        |
    | GND         | 6 (GND)          | Ground - 0V                             |
    | VCC         | 4 (5V)           | 5V power supply                         |
    | SDA         | 3 (I2C1_SDA)     | I<sup>2</sup>C Serial Data              |
    | SCL         | 5 (I2C1_SCL)     | I<sup>2</sup>C Serial Clock             |

    ![The wio terminal connected to the ArduCam with jumper wires](../../../images/arducam-wio-terminal-connections.png)

    The GND and VCC connections provide a 5V power supply to the ArduCam. It runs at 5V, unlike Grove sensors that run at 3V. This power comes directly from the USB-C connection that powers the device.

    > 💁 For the SPI connection, the pin labels on the ArduCam and the Wio Terminal pin names used in code still use the old naming convention. The instructions in this lesson will use the new naming convention, except when the pin names are used in code.

1. You can now connect the Wio Terminal to your computer.
## Program the device to connect to the camera

The Wio Terminal can now be programmed to use the attached ArduCAM camera.

### Task - program the device to connect to the camera

1. Create a brand new Wio Terminal project using PlatformIO. Call this project `fruit-quality-detector`. Add code in the `setup` function to configure the serial port.

1. Add code to connect to WiFi, with your WiFi credentials in a file called `config.h`. Don't forget to add the required libraries to the `platformio.ini` file.

1. The ArduCam library isn't available as an Arduino library that can be installed from the `platformio.ini` file. Instead it will need to be installed from source from their GitHub page. You can get this by either:

    * Cloning the repo from [https://github.com/ArduCAM/Arduino.git](https://github.com/ArduCAM/Arduino.git)
    * Heading to the repo on GitHub at [github.com/ArduCAM/Arduino](https://github.com/ArduCAM/Arduino) and downloading the code as a zip from the **Code** button

1. You only need the `ArduCAM` folder from this code. Copy the entire folder into the `lib` folder in your project.

    > ⚠️ The entire folder must be copied, so the code is in `lib/ArduCAM`. Do not just copy the contents of the `ArduCAM` folder into the `lib` folder, copy the entire folder over.

1. The ArduCam library code works for multiple types of camera. The type of camera you want to use is configured using compiler flags - this keeps the built library as small as possible by removing code for cameras you are not using. To configure the library for the OV2640 camera, add the following to the end of the `platformio.ini` file:

    ```ini
    build_flags =
        -DARDUCAM_SHIELD_V2
        -DOV2640_CAM
    ```

    This sets 2 compiler flags:

    * `ARDUCAM_SHIELD_V2` to tell the library the camera is on an Arduino board, known as a shield
    * `OV2640_CAM` to tell the library to only include code for the OV2640 camera

1. Add a header file into the `src` folder called `camera.h`. This will contain code to communicate with the camera. Add the following code to this file:
    ```cpp
    #pragma once

    #include <ArduCAM.h>
    #include <Wire.h>

    class Camera
    {
    public:
        Camera(int format, int image_size) : _arducam(OV2640, PIN_SPI_SS)
        {
            _format = format;
            _image_size = image_size;
        }

        bool init()
        {
            // Reset the CPLD
            _arducam.write_reg(0x07, 0x80);
            delay(100);

            _arducam.write_reg(0x07, 0x00);
            delay(100);

            // Check if the ArduCAM SPI bus is OK
            _arducam.write_reg(ARDUCHIP_TEST1, 0x55);
            if (_arducam.read_reg(ARDUCHIP_TEST1) != 0x55)
            {
                return false;
            }

            // Change MCU mode
            _arducam.set_mode(MCU2LCD_MODE);

            uint8_t vid, pid;

            // Check if the camera module type is OV2640
            _arducam.wrSensorReg8_8(0xff, 0x01);
            _arducam.rdSensorReg8_8(OV2640_CHIPID_HIGH, &vid);
            _arducam.rdSensorReg8_8(OV2640_CHIPID_LOW, &pid);
            if ((vid != 0x26) || ((pid != 0x41) && (pid != 0x42)))
            {
                return false;
            }

            _arducam.set_format(_format);
            _arducam.InitCAM();
            _arducam.OV2640_set_JPEG_size(_image_size);
            _arducam.OV2640_set_Light_Mode(Auto);
            _arducam.OV2640_set_Special_effects(Normal);
            delay(1000);

            return true;
        }

        void startCapture()
        {
            _arducam.flush_fifo();
            _arducam.clear_fifo_flag();
            _arducam.start_capture();
        }

        bool captureReady()
        {
            return _arducam.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK);
        }

        bool readImageToBuffer(byte **buffer, uint32_t &buffer_length)
        {
            if (!captureReady()) return false;

            // Get the image file length
            uint32_t length = _arducam.read_fifo_length();
            buffer_length = length;

            if (length >= MAX_FIFO_SIZE)
            {
                return false;
            }
            if (length == 0)
            {
                return false;
            }

            // Create the buffer
            byte *buf = new byte[length];

            uint8_t temp = 0, temp_last = 0;
            int i = 0;
            uint32_t buffer_pos = 0;
            bool is_header = false;

            _arducam.CS_LOW();
            _arducam.set_fifo_burst();

            // Read JPEG data from the FIFO
            while (length--)
            {
                temp_last = temp;
                temp = SPI.transfer(0x00);

                // If the JPEG end marker (0xFF 0xD9) is found, this is the end of the image
                if ((temp == 0xD9) && (temp_last == 0xFF))
                {
                    buf[buffer_pos] = temp;

                    buffer_pos++;
                    i++;

                    _arducam.CS_HIGH();
                }
                if (is_header == true)
                {
                    // Write image data to the buffer if the burst block is not full
                    if (i < 256)
                    {
                        buf[buffer_pos] = temp;
                        buffer_pos++;
                        i++;
                    }
                    else
                    {
                        _arducam.CS_HIGH();

                        i = 0;
                        buf[buffer_pos] = temp;

                        buffer_pos++;
                        i++;

                        _arducam.CS_LOW();
                        _arducam.set_fifo_burst();
                    }
                }
                // If the JPEG start marker (0xFF 0xD8) is found, the image data has started
                else if ((temp == 0xD8) && (temp_last == 0xFF))
                {
                    is_header = true;

                    buf[buffer_pos] = temp_last;
                    buffer_pos++;
                    i++;

                    buf[buffer_pos] = temp;
                    buffer_pos++;
                    i++;
                }
            }

            _arducam.clear_fifo_flag();

            _arducam.set_format(_format);
            _arducam.InitCAM();
            _arducam.OV2640_set_JPEG_size(_image_size);

            // Return the buffer
            *buffer = buf;

            return true;
        }

    private:
        ArduCAM _arducam;
        int _format;
        int _image_size;
    };
    ```

    This is low-level code that configures the camera using the ArduCam libraries, and extracts the images when required using the SPI bus. This code is very specific to the ArduCam, so you don't need to worry about how it works at this point.
1. In `main.cpp`, add the following code beneath the other `include` statements to include this new file and create an instance of the camera class:

    ```cpp
    #include "camera.h"

    Camera camera = Camera(JPEG, OV2640_640x480);
    ```

    This creates a `Camera` that saves images as JPEGs at a resolution of 640x480. Although higher resolutions are supported (up to 3280x2464), the image classifier works on much smaller images (227x227), so there is no need to capture and send larger images.

1. Add the following code below this to define a function to set up the camera:

    ```cpp
    void setupCamera()
    {
        pinMode(PIN_SPI_SS, OUTPUT);
        digitalWrite(PIN_SPI_SS, HIGH);

        Wire.begin();
        SPI.begin();

        if (!camera.init())
        {
            Serial.println("Error setting up the camera!");
        }
    }
    ```

    This `setupCamera` function starts by configuring the SPI chip select pin (`PIN_SPI_SS`) as an output and setting it high, making the Wio Terminal the SPI controller. It then starts the I<sup>2</sup>C and SPI buses. Finally it initializes the camera class, which configures the camera sensor settings and ensures everything is wired up correctly.

1. Call this function at the end of the `setup` function:

    ```cpp
    setupCamera();
    ```

1. Build and upload this code, and check the output from the serial monitor. If you see `Error setting up the camera!`, check the wiring to ensure all cables are connecting the correct pins on the ArduCam to the correct GPIO pins on the Wio Terminal, and all jumper cables are seated correctly.
## Capture an image

The Wio Terminal can now be programmed to capture an image when a button is pressed.

### Task - capture an image

1. Microcontrollers run your code continuously, so it's not easy to trigger something like taking a photo without reacting to a sensor. The Wio Terminal has buttons, so the camera can be set up to be triggered by one of the buttons. Add the following code to the end of the `setup` function to configure the C button (one of the three buttons on the top, the one closest to the power switch):

    ```cpp
    pinMode(WIO_KEY_C, INPUT_PULLUP);
    ```

    The mode of `INPUT_PULLUP` essentially inverts an input. For example, normally a button would send a low signal when not pressed, and a high signal when pressed. When set to `INPUT_PULLUP`, buttons send a high signal when not pressed, and a low signal when pressed.

1. Add an empty function to respond to the button press before the `loop` function:

    ```cpp
    void buttonPressed()
    {

    }
    ```

1. Call this function in the `loop` method when the button is pressed:

    ```cpp
    void loop()
    {
        if (digitalRead(WIO_KEY_C) == LOW)
        {
            buttonPressed();
            delay(2000);
        }

        delay(200);
    }
    ```

    This code checks to see if the button is pressed. If it is pressed, the `buttonPressed` function is called, and the loop delays for 2 seconds. This is to allow time for the button to be released so that a long press isn't registered twice.

    > 💁 The button on the Wio Terminal is set to `INPUT_PULLUP`, so it sends a high signal when not pressed, and a low signal when pressed.

1. Add the following code to the `buttonPressed` function:

    ```cpp
    camera.startCapture();

    while (!camera.captureReady())
        delay(100);

    Serial.println("Image captured");

    byte *buffer;
    uint32_t length;

    if (camera.readImageToBuffer(&buffer, length))
    {
        Serial.print("Image read to buffer with length ");
        Serial.println(length);

        delete[] buffer;
    }
    ```

    This code begins the camera capture by calling `startCapture`. The camera hardware doesn't work by returning the data when you request it; instead you send an instruction to start capturing, and the camera works in the background to capture the image, convert it to a JPEG, and store it in a local buffer on the camera itself. The `captureReady` call then checks to see if the image capture has finished.

    Once the capture has finished, the image data is copied from the buffer on the camera into a local buffer (an array of bytes) with the `readImageToBuffer` call. The length of the buffer is then sent to the serial monitor.

1. Build and upload this code, and check the output on the serial monitor. Every time you press the C button, an image will be captured and you will see the image size sent to the serial monitor.

    ```output
    Connecting to WiFi..
    Connected!
    Image captured
    Image read to buffer with length 9224
    Image captured
    Image read to buffer with length 11272
    ```

    Different images will have different sizes. They are compressed as JPEGs, and the size of a JPEG file for a given resolution depends on what is in the image.

> 💁 You can find this code in the [code-camera/wio-terminal](code-camera/wio-terminal) folder.

😀 You have successfully captured images with your Wio Terminal.
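The buffer-reading code in `camera.h` works by scanning the byte stream for the JPEG start-of-image (`0xFF 0xD8`) and end-of-image (`0xFF 0xD9`) markers. As a rough illustration of that idea only (not the Wio Terminal code itself, and using a hypothetical `extract_jpeg` helper), the same marker scan might look like this in Python:

```python
def extract_jpeg(stream: bytes) -> bytes:
    """Return the bytes between the JPEG SOI and EOI markers, inclusive."""
    start = stream.find(b'\xff\xd8')       # start-of-image marker
    end = stream.find(b'\xff\xd9', start)  # end-of-image marker, after SOI
    if start == -1 or end == -1:
        raise ValueError('No complete JPEG found in the stream')
    return stream[start:end + 2]

# Junk bytes can surround the image data in a raw FIFO dump,
# so the scan trims everything outside the markers.
raw = b'\x00\x00\xff\xd8...jpeg payload...\xff\xd9\x00\x00'
print(len(extract_jpeg(raw)))
```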
## Optional - verify the camera images using an SD card

The easiest way to see the images that were captured by the camera is to write them to an SD card in the Wio Terminal and then view them on your computer. Do this step if you have a spare microSD card and a microSD card socket in your computer, or an adapter.

The Wio Terminal only supports microSD cards of up to 16GB in size. If you have a larger SD card then it won't work.

### Task - verify the camera images using an SD card

1. Format a microSD card as FAT32 or exFAT using the relevant applications on your computer (Disk Utility on macOS, File Explorer on Windows, or command line tools in Linux).

1. Insert the microSD card into the socket just below the power switch. Make sure it is all the way in until it clicks and stays in place; you may need to push it in using a fingernail or a thin tool.

1. Add the following include statements at the top of the `main.cpp` file:

    ```cpp
    #include "SD/Seeed_SD.h"
    #include <Seeed_FS.h>
    ```

1. Add the following function before the `setup` function:

    ```cpp
    void setupSDCard()
    {
        while (!SD.begin(SDCARD_SS_PIN, SDCARD_SPI))
        {
            Serial.println("SD Card Error");
        }
    }
    ```

    This configures the SD card using the SPI bus.

1. Call this from the `setup` function:

    ```cpp
    setupSDCard();
    ```

1. Add the following code above the `buttonPressed` function:

    ```cpp
    int fileNum = 1;

    void saveToSDCard(byte *buffer, uint32_t length)
    {
        char buff[16];
        sprintf(buff, "%d.jpg", fileNum);
        fileNum++;

        File outFile = SD.open(buff, FILE_WRITE);
        outFile.write(buffer, length);
        outFile.close();

        Serial.print("Image written to file ");
        Serial.println(buff);
    }
    ```

    This defines a global variable for a file count. It is used for the image file names so multiple images can be captured with incrementing file names - `1.jpg`, `2.jpg` and so on.

    It then defines the `saveToSDCard` function, which takes a buffer of byte data and the length of the buffer. A file name is created using the file count, and the file count is incremented ready for the next file. The binary data from the buffer is then written to the file.

1. Call the `saveToSDCard` function from the `buttonPressed` function. The call should be **before** the buffer is deleted:

    ```cpp
    Serial.print("Image read to buffer with length ");
    Serial.println(length);

    saveToSDCard(buffer, length);

    delete[] buffer;
    ```

1. Build and upload this code, and check the output on the serial monitor. Every time you press the C button, an image will be captured and saved to the SD card.

    ```output
    Connecting to WiFi..
    Connected!
    Image captured
    Image read to buffer with length 16392
    Image written to file 1.jpg
    Image captured
    Image read to buffer with length 14344
    Image written to file 2.jpg
    ```

1. Power off the Wio Terminal, then eject the microSD card by pushing it in slightly and releasing, and it will pop out. You may need to use a thin tool to do this. Plug the microSD card into your computer to view the images.

    ![A picture of a banana captured using the ArduCam](../../../images/banana-arducam.jpg)

    > 💁 It may take a few images for the white balance of the camera to adjust itself. You will notice this based on the color of the images captured - the first few may look off-color. You can always work around this by changing the code to capture a few images that are ignored during setup.
# Classify an image - Wio Terminal

Coming soon!