
# Deploy Face Detection Project

*(Diagram: project architecture)*

## Step 0 - Log in to the AWS DeepLens Device & AWS Account

In this workshop, you have an AWS DeepLens device in front of you, connected to a monitor, keyboard, and mouse. AWS DeepLens runs an Ubuntu OS. Log in to the device with the password Aws2017!.

We have already pre-registered your device to a workshop account. You can find your account information on the card taped to your monitor.

Open Firefox from the left panel. Once Firefox is open, type console.aws.amazon.com into the URL bar. (Note: If the login page says "Root user sign in" and an email address is already filled in, select Sign in to a different account, then type in the AWS account number from your card.)

Once your login page shows three fields, please enter the following:

- Account ID or alias: the AWS account number on your card
- IAM user name: the user name on your card
- Password: Aws2017!

Next, make sure you're in the N. Virginia (us-east-1) region, and navigate to the DeepLens Dashboard.

## Step 1 - Create Project

The console should open on the Projects screen. Select Create new project at the top right. (If you don't see the project list view, click the hamburger menu on the left and select Projects.)

*(Screenshot: Create project)*

Choose Use a project template as the Project type, and select Face Detection from the project templates list.

*(Screenshot: project template)*

Scroll down the screen and select Next.

*(Screenshot: project template - Next)*

Change the Project name to Face-detection-your-name.

*(Screenshot: Face-detection-your-name)*

Scroll down the screen and select Create.

*(Screenshot: Create)*

## Step 2 - Deploy to Device

In this step, you will deploy the Face detection project to your AWS DeepLens device.

Select the project you just created from the list by choosing the radio button. Note: You may see your project in the Projects list but not be able to deploy it yet. This means it's still being created, which may take up to a minute. Refresh until you see a Creation Time for your project; then you will be able to deploy it.

Select Deploy to device.

*(Screenshot: choose project)*

On the Target device screen, choose your device from the list (your device name is listed as Device on your card) by clicking its radio button, then select Review.

*(Screenshot: Target device)*

Select Deploy.

*(Screenshot: Review and deploy)*

On the AWS DeepLens console, you can track the progress of the deployment. It can take a few minutes to transfer a large model file to the device. Once the project is downloaded, you will see a success message displayed and the banner color will change from blue to green.

## View Output

This default project gives inference output in two forms:

- Messages published to an IoT topic in the cloud via the MQTT protocol
- An inference video stream rendered locally on the device

We will look at both.

### IoT

Once your project has deployed, scroll down your device page to the Project Output panel.

*(Screenshot: Project Output panel)*

Click Copy to copy the IoT topic ID unique to your device (this is the topic your project is publishing messages to).

Then click the link to the AWS IoT Console. Once there, paste your IoT topic ID into the Subscription topic field, then click Subscribe.

*(Screenshot: AWS IoT console)*

You should now start to see the messages being published to your topic from your device.

*(Screenshot: IoT topic messages)*

These messages are the real-time results of our model. We get a label of what is detected (in this case, it's always a face) as well as the confidence score (how confident the model is that it's a face).
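If you'd like to watch these messages from a script rather than the console, the sketch below subscribes to the same topic with the AWSIoTPythonSDK package. The endpoint, certificate paths, and topic are placeholders for your own values, and the payload format shown is only illustrative.

```python
# Minimal sketch: subscribe to the DeepLens inference topic and print results.
# Assumes AWSIoTPythonSDK is installed (pip install AWSIoTPythonSDK) and that
# you have downloaded device certificates from the AWS IoT console. The
# endpoint, certificate paths, and topic below are placeholders.
import json
import time

from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

IOT_ENDPOINT = "xxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com"  # placeholder
TOPIC = "your-iot-topic-id"  # the topic ID you copied from the device page

def on_message(client, userdata, message):
    # Each payload maps a label to a confidence score,
    # e.g. {"face": 0.93} (illustrative; the exact format may differ).
    for label, confidence in json.loads(message.payload).items():
        print(f"{label}: {confidence:.2f}")

mqtt = AWSIoTMQTTClient("face-detection-listener")
mqtt.configureEndpoint(IOT_ENDPOINT, 8883)
mqtt.configureCredentials("root-ca.pem", "private.pem.key", "certificate.pem.crt")
mqtt.connect()
mqtt.subscribe(TOPIC, 1, on_message)

while True:  # keep the script alive while messages arrive
    time.sleep(1)
```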

IoT topics are an easy way to transfer information from edge devices back into the cloud. Other functionality can be built around IoT topics; one example would be to monitor an IoT topic during hours you're away from home and send a notification to yourself if a face is detected, as sketched below.
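As a hedged sketch of that idea: an AWS IoT rule could forward messages from your topic to a Lambda function that notifies you through Amazon SNS. The SNS topic ARN, threshold, and event format below are assumptions for illustration, not part of this workshop.

```python
# Illustrative Lambda handler for an AWS IoT rule on the DeepLens topic.
# Publishes an SNS notification when a face is detected with high confidence.
# The SNS topic ARN, threshold, and event format are placeholder assumptions.
import boto3

sns = boto3.client("sns")
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:face-alerts"  # placeholder
THRESHOLD = 0.80

def lambda_handler(event, context):
    # An IoT rule such as SELECT * FROM 'your-topic' delivers the MQTT
    # payload as the event, e.g. {"face": 0.93} (illustrative format).
    confidence = event.get("face", 0.0)
    if confidence >= THRESHOLD:
        sns.publish(
            TopicArn=SNS_TOPIC_ARN,
            Subject="DeepLens alert",
            Message=f"Face detected with confidence {confidence:.2f}",
        )
    return {"notified": confidence >= THRESHOLD}
```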

### Video Stream

We've seen in the IoT topics that our model outputs a label and a confidence score, but since it's a detection model it also outputs a localization, which in this case is bounding box coordinates. The best way to get a sense of this is to visualize the output for yourself.
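As a rough illustration of what those coordinates mean, the sketch below draws a single detection on a frame with OpenCV. It assumes the detection is a dict of normalized (0-1) corner coordinates plus a probability; the actual structure produced by the project's inference code may differ.

```python
# Sketch: draw one detection (bounding box + confidence) on a frame.
# Assumes normalized (0-1) corner coordinates and a 'prob' score; the real
# project's output structure may differ.
import cv2

def draw_detection(frame, detection):
    h, w = frame.shape[:2]
    # Scale normalized coordinates to pixel positions.
    xmin, ymin = int(detection["xmin"] * w), int(detection["ymin"] * h)
    xmax, ymax = int(detection["xmax"] * w), int(detection["ymax"] * h)
    cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
    label = f"face {detection['prob']:.2f}"
    cv2.putText(frame, label, (xmin, max(ymin - 5, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame

# Example usage with a hypothetical detection:
# frame = cv2.imread("snapshot.jpg")
# draw_detection(frame, {"xmin": 0.2, "ymin": 0.1, "xmax": 0.6, "ymax": 0.7, "prob": 0.93})
```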

Aside from publishing messages, the default project also streams inference output to a file locally on disk. If you register your own device, it's possible to view this stream over a browser; in this lab, you are on the device itself, so you can easily visualize it from the local stream.

To view the output, open a terminal (on the DeepLens desktop UI, choose the top-left button and search for terminal) and enter the following command:

```
mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/results.mjpeg
```
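For reference: `-demuxer lavf` tells mplayer to use FFmpeg's libavformat demuxer, and `format=mjpeg:probesize=32` forces it to treat the file as an MJPEG stream while probing only a small amount of data, so playback starts promptly even though the file is still being written.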

Please visit https://docs.aws.amazon.com/deeplens/latest/dg/deeplens-viewing-device-output-on-device.html for more options to view the device stream.