TensorFlow-Powered Vision for a Pi-Based Robot

Introduction

This is a Pi-based robot that implements visual recognition using Inception V3. The TensorFlow-powered vision system can recognize many objects, such as people, cars, buses, fruit, and so on.

  • Hardware: Raspberry Pi 2, Sony PS3 Eye camera (a Logitech C270 USB camera also works with the Raspberry Pi)
  • Software: TensorFlow (v1.0.1), Jupyter Notebook

(Figure: Structure.png, overall system structure)

My motivation

I was curious how well image recognition with TensorFlow would perform on a Raspberry Pi. The Jupyter notebook is also very convenient for quickly coding a prototype. In terms of image-classification error rate, Inception V3 (3.46%) even outperforms humans (5.1%), although the Raspberry Pi's processing speed is very slow compared to my laptop.

(Table: Jeff Dean’s Keynote @Google Brain).

(Figure: Chart_IR.png, image-recognition error rates)

  • Schematic diagram of Inception-v3

(Figure: InceptionV3.png, schematic diagram of Inception-v3)

Requirements and Installation

  • Install the webcam driver on your Raspberry Pi.
sudo apt-get install fswebcam
  • Test your webcam.
fswebcam test.jpg

Quick Start

  • You should install both TensorFlow(v1.0.1) and Jupyter notebook on your Raspberry Pi.
  • First, clone the TensorFlow-Powered_Robot_Vision git repository here. This can be accomplished by:
cd /home/pi/Documents
git clone https://github.com/leehaesung/TensorFlow-Powered_Robot_Vision.git

Next, cd into the newly created directory:

cd TensorFlow-Powered_Robot_Vision

Start the Jupyter notebook on your Raspberry Pi.

jupyter-notebook

The pre-trained data (inception_v3.ckpt) will download automatically when the Jupyter notebook runs. (Location: /home/pi/Documents/datasets/inception)
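
For reference, below is a rough Python sketch (not the notebook's exact code; the checkpoint path is an assumption) of how the downloaded inception_v3.ckpt can be used with TF-Slim in TensorFlow 1.0.x to classify a single image such as the test.jpg captured above:

# Rough sketch: classify one image with the downloaded inception_v3.ckpt via TF-Slim.
# Assumptions: TensorFlow 1.0.x, PIL installed, checkpoint path as below.
import numpy as np
from PIL import Image
import tensorflow as tf
from tensorflow.contrib import slim
from tensorflow.contrib.slim.nets import inception

CHECKPOINT = '/home/pi/Documents/datasets/inception/inception_v3.ckpt'  # assumed location

image_in = tf.placeholder(tf.uint8, [None, None, 3])
float_img = tf.image.convert_image_dtype(image_in, tf.float32)   # scale to [0, 1]
resized = tf.image.resize_images(float_img, [299, 299])
inputs = tf.expand_dims(resized * 2.0 - 1.0, 0)                  # Inception expects [-1, 1]

with slim.arg_scope(inception.inception_v3_arg_scope()):
    logits, _ = inception.inception_v3(inputs, num_classes=1001, is_training=False)
probs = tf.nn.softmax(logits)

with tf.Session() as sess:
    slim.assign_from_checkpoint_fn(CHECKPOINT, slim.get_model_variables())(sess)
    frame = np.array(Image.open('test.jpg').convert('RGB'))
    top5 = sess.run(probs, {image_in: frame})[0].argsort()[-5:][::-1]
    print(top5)  # indices into the ImageNet label map (index 0 is background)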

Source Codes

Results of Object Recognition

  • Wow! The result is really awesome!!

(Figure: RecognitionResult.png, example recognition output)

References

YOLO-Powered_Robot_Vision

YOLO-Powered_Robot_Vision


Introduction

This is a Pi-based robot that implements visual recognition using YOLO. The YOLO-powered vision system can recognize many objects, such as people, cars, buses, fruit, and so on.

  • Hardware: Raspberry Pi 2, Sony PS3 Eye camera (a Logitech C270 USB camera also works with the Raspberry Pi)

  • Software: YOLO (v2), Jupyter Notebook

(Figure: Structure.png, overall system structure)

My motivation

I was very interested in the performance of image recognition with YOLOv2 on a Raspberry Pi. In addition, the Jupyter notebook is really convenient for quickly coding a prototype. According to the paper, YOLO is a fast, accurate visual detector, making it ideal for computer-vision applications; the authors connect YOLO to a webcam and verify that it maintains real-time performance. That said, the Raspberry Pi's processing speed is very slow compared to my laptop.

(Figure: Perfomance_Picaso.png, Picasso Dataset precision-recall curves, from the paper)

(Figure: Architecture_CNN.png, the network architecture, from the paper)

Requirements and Installation

Quick Start

This post will guide you through detecting objects with the YOLO system using a pre-trained model. If you don't already have Darknet installed, you should first install OpenCV 2 on your Raspberry Pi.

  • Install dependencies for OpenCV2
sudo apt-get update

sudo apt-get install build-essential
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev python-dev python-numpy libjpeg-dev libpng-dev libtiff-dev libjasper-dev

sudo apt-get install python-opencv
  • Check which version of OpenCV you have in Python
python
import cv2
cv2.__version__
  • Install Darknet for YOLO
git clone https://github.com/pjreddie/darknet
cd darknet
make

Easy!

You already have the config file for YOLO in the cfg/ subdirectory. You will have to download the pre-trained weight file here (258 MB). Or just run this:

wget http://pjreddie.com/media/files/yolo.weights

Then run the detector to test.

./darknet detect cfg/yolo.cfg yolo.weights data/dog.jpg

You will see some output like this:

layer     filters    size              input                output
    0 conv     32  3 x 3 / 1   416 x 416 x   3   ->   416 x 416 x  32
    1 max          2 x 2 / 2   416 x 416 x  32   ->   208 x 208 x  32
    .......
   29 conv    425  1 x 1 / 1    13 x  13 x1024   ->    13 x  13 x 425
   30 detection
Loading weights from yolo.weights...Done!
data/dog.jpg: Predicted in 0.016287 seconds.
car: 54%
bicycle: 51%
dog: 56%

Output

  • Rename the ‘darknet’ directory to ‘YOLO-Powered_Robot_Vision’.
mv /home/pi/Documents/darknet /home/pi/Documents/YOLO-Powered_Robot_Vision
cd /home/pi/Documents/YOLO-Powered_Robot_Vision
  • Download ‘YOLO-Powered_Robot_Vision.ipynb’ at /home/pi/Documents/YOLO-Powered_Robot_Vision
wget https://github.com/leehaesung/YOLO-Powered_Robot_Vision/raw/master/YOLO-Powered_Robot_Vision.ipynb

jupyter-notebook
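
As a rough illustration (an assumption about the workflow, not the notebook's exact contents), a notebook cell could grab a webcam frame with fswebcam and run the Darknet detector on it from Python:

# Sketch: capture a frame and run the YOLO detector on it via the darknet binary.
import subprocess

IMAGE = 'capture.jpg'  # hypothetical file name
subprocess.check_call(['fswebcam', '-r', '640x480', '--no-banner', IMAGE])

out = subprocess.check_output(
    ['./darknet', 'detect', 'cfg/yolo.cfg', 'yolo.weights', IMAGE],
    stderr=subprocess.STDOUT)
for line in out.decode('utf-8', 'replace').splitlines():
    if '%' in line:  # detection lines look like "dog: 56%"
        print(line)
# Darknet also writes an annotated predictions image next to the binary.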

Source Codes

Result of Object Recognition

(Caution: I have used these image sources for educational purposes only. Please don't use copyrighted pictures; I am not responsible for any images you use.)

(Figures: predictions06.png to predictions11.png, further example detection results)

IBM Watson Cloud Robot

 


Motivation

I work as a robotics teacher in Sydney. I want to introduce my AI robot to my students in class next month. In addition, I'm joining the NASA Open Innovation Initiative (also known as the NASA Space Apps Challenge) with my AI robot to measure environmental conditions such as temperature, humidity, and pressure. I'm so excited!!

Introduction

The IBM Watson Cloud Robot can recognize a human face, voice, and text, much like a human. The robot clearly recognized a celebrity (Elon Musk) and identified who he was. It also recognized my voice and arbitrary text. (YouTube)

This instructable will cover the basic steps that you need to follow to get started with open-source components such as the Watson nodes (Visual Recognition V3, Speech To Text, Text To Speech) for IBM Bluemix, Node-RED, and MQTT v3.1. MQTT (Message Queueing Telemetry Transport) is a Machine-To-Machine (M2M) or Internet of Things (IoT) connectivity protocol that was designed to be extremely lightweight and useful when low power consumption and low network bandwidth are at a premium. It was invented in 1999 by Dr. Andy Stanford-Clark and Arlen Nipper and is now an OASIS standard.

– How to tune PID gains of Node-RED with MQTT on Raspberry Pi:

http://www.instructables.com/id/PID-Control-for-CPU-Temperature-of-Raspberry-Pi/

– How to use the Bluemix platform (Docs)
https://console.ng.bluemix.net/docs/

– My additional material (Pi-Scratch_Robot_GPIO.sb) for kids' education is enclosed in the Download List

(Functions: Driving motors & Taking a picture on the Raspberry Pi)

Step 1: Table of Contents

Step 0: Introduction

Step 1: Table of Contents

Step 2: Bill of Materials

Step 3: Assembly (Wiring & Soldering)

Step 4: Programming NodeRED on Raspberry Pi2

Step 5: Setting up MQTT v3.1 on Raspberry Pi2

Step 6: Checking your NodeRED codes with MQTT on Raspberry Pi2

Step 7: Adding & Setting up PID node, Dashboard on Raspberry Pi2

Step 8: Configuring the PS3 EYE camera with microphone

Step 9: Configuring GPS Sensor

Step 10: Using a dashboard for the robot

Step 11: Tuning PID controller

Step 12: (Optional) Programming a Pi-Scratch Robot

Step 13: Download list

Step 14: List of references

Step 15: Version Note

Step 2: Bill of Materials

Step 3: Assembly (Wiring & Soldering)


Step 4: Programming NodeRED on Raspberry Pi2


How to start Node-RED in a web browser.

(1) Enter the command shown below in a terminal window.

node-red-start

(2) You will see an address like ‘Once Node-RED has started, point a browser at http://169.254.170.40:1880’ (it depends on your IP address).

(3) Open your web browser.

(4) Copy the address and paste it into the web browser.

(5) The Node-RED visual editor will be displayed in the web browser.

(6) You can start coding with the visual editor in the web browser.

(7) Try dragging & dropping any node from the left-hand side to the right-hand side. It’s really easy to code. (You can conveniently use the visual editor offline as well as online.)

To import the flows, download all files from the Download List, then:

(1) Click the number (1) at the right-hand corner of Node-RED in the web browser.

(2) Click the Import button in the drop-down menu.

(3) Open the Clipboard shown in the 1st picture above.

(4) Lastly, paste the given JSON text of ‘____ver0.1.txt’ (Download List) into the Import nodes editor.

Step 5: Setting up MQTT v3.1 on Raspberry Pi2


There are two options: (1) using the public Eclipse broker (iot.eclipse.org), or (2) installing a Mosquitto server. You can use option (1) instead of option (2).

(1) Using “iot.eclipse.org”.

Click each MQTT node and type in the broker address:

iot.eclipse.org

(2) Setting up MQTT v3.1 on Raspberry Pi2

This message broker (Mosquitto) supports MQTT v3.1; it is easily installed on the Raspberry Pi but somewhat less easy to configure. The link below steps through installing, configuring, and testing the MQTT broker “mosquitto” in a terminal window. Click it:

http://www.instructables.com/id/PID-Control-for-CPU-Temperature-of-Raspberry-Pi/
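
For reference, here is a minimal Python sketch (an illustration only, not part of the Node-RED flow; the topic name is a placeholder) of option (1): a paho-mqtt client talking to the public iot.eclipse.org broker. Install the library with 'pip install paho-mqtt'.

# Minimal MQTT client against the public iot.eclipse.org broker (placeholder topic).
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print('connected with result code %d' % rc)
    client.subscribe('watsonRobot/test')       # placeholder topic name

def on_message(client, userdata, msg):
    print('%s %s' % (msg.topic, msg.payload))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect('iot.eclipse.org', 1883, 60)
client.publish('watsonRobot/test', 'helloWorld')
client.loop_forever()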

Step 6: Checking your NodeRED codes with MQTT on Raspberry Pi2

Checking your NodeRED codes with MQTT on Raspberry Pi2

When you import the JSON from the ‘NodeRED_Text_files_ver0.1.txt’ file (Download List) into Node-RED, each node is automatically set up with its data. I have already set up the data in each node.

(1) Click each node.

(2) Check information inside each node has been prefilled.

(3) Please don’t change the set data. (The above can be customized for more advanced users.)

Step 7: Adding & Setting up PID node, Dashboard on Raspberry Pi2


Searching the Nodes

Node-RED comes with a core set of useful nodes, but a growing number of additional nodes are available to install from both the Node-RED project and the wider community. You can search for available nodes in the Node-RED library or on the npm repository.

  • For example, we are going to search for ‘node-red-node-pidcontrol’ on the npm website. Click here.
  • Then, we are going to install the npm packages node-red-node-pidcontrol and node-red-dashboard on the Raspberry Pi.

To add additional nodes you must first install the npm tool, as it is not included in the default installation. The following commands install npm and then upgrade it to the latest 2.x version.

sudo apt-get update
sudo apt-get install npm
sudo npm install -g npm@2.x
hash -r
cd /home/pi/.node-red
  • For example, ‘npm install node-red-{example node name}’
  • Copy ‘npm install node-red-node-pidcontrol’ from the npm website and paste it into a terminal window.
  • Ex: node-red-node-watson, node-red-contrib-play-audio, node-red-dashboard, node-red-node-pidcontrol
npm install node-red-node-watson node-red-contrib-play-audio node-red-node-pidcontrol node-red-dashboard

You will need to restart Node-RED for it to pick up the new nodes.

node-red-stop
node-red-start

Close your web browser and reopen the web browser.

Step 8: Configuring the PS3 EYE camera with microphone


This Sony PS3 Eye USB camera, which can capture up to 187 frames per second, can be found for under $8 on Amazon.com, which should make it quite a bargain for those wishing to experiment with computer-vision projects. The PlayStation Eye camera for the PS3 is similar to a web camera but can also be used for computer vision and gesture-recognition tasks. The PlayStation Eye has been supported by the Linux kernel since the late Linux 2.6 days, and a future kernel update (Linux 3.20 or later, given that the 3.19 merge window had closed) was expected to support its higher modes.

(1) Install a USB driver on Raspberry Pi.

 sudo apt-get install fswebcam

(2) Take a picture and then check the ‘visionImage.jpg’ file in /home/pi.

(3) Don’t forget to enter the Bluemix service credentials for the Watson services such as Visual Recognition, Speech to Text, and Text to Speech. ( How to use the IBM Bluemix platform: https://console.ng.bluemix.net/docs/ )

(4) Make an image file (jpg) server for every boot.

cd /etc/xdg/autostart/
sudo nano imageFileServer.desktop

Type the description below or put the ‘imageFileServer.desktop’ file into /etc/xdg/autostart/ folder.

[Desktop Entry]
Type=Application
Name=imageFileServer
Comment=Start an image file server
NoDisplay=false
Exec=cd /home/pi
Exec=python -m SimpleHTTPServer 7000

Check the visionImage.jpg on the web browser.

http://169.254.62.80:7000/visionImage.jpg
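
To check the server from another machine (or from a Node-RED node), a small Python 2 sketch like the one below can fetch the image over HTTP; the IP address is the example one above and must be replaced with your Pi's address:

# Sketch: fetch visionImage.jpg from the Pi's SimpleHTTPServer and save a copy.
import urllib2  # Python 2, matching the SimpleHTTPServer used above

URL = 'http://169.254.62.80:7000/visionImage.jpg'  # replace with your Pi's IP address
data = urllib2.urlopen(URL, timeout=5).read()
with open('visionImage_copy.jpg', 'wb') as f:
    f.write(data)
print('fetched %d bytes' % len(data))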

Step 9: Configuring GPS Sensor


How to set up the serial configuration for the GPS module:

https://learn.adafruit.com/adafruit-ultimate-gps-on-the-raspberry-pi/using-uart-instead-of-usb

– Reference:

Adafruit Ultimate GPS & Download PDF file.

Tip: You should test the GPS sensor outdoors, because it does not get a fix indoors at home and you will see an error signal. So, I made an extra node for indoor GPS testing.

(1) Edit /boot/cmdline.txt

Next, enter the following command from the command line:

sudo nano /boot/cmdline.txt

And change:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait

to:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait

(e.g., remove console=ttyAMA0,115200 and, if present, kgdboc=ttyAMA0,115200)

Note you might see console=serial0,115200 or console=ttyS0,115200 and should remove those parts of the line if present.

(2) Edit /etc/inittab

(Raspbian Wheezy only)

From the command prompt enter the following command:

sudo nano /etc/inittab

And change:

#Spawn a getty on Raspberry Pi serial line

T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

to:

#Spawn a getty on Raspberry Pi serial line

#T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

That is, add a # to the beginning of the line!

(3) Only Raspbian Jessie

For the Raspberry Pi 1 or 2 (but NOT the 3!), run the following two commands to stop and disable the tty service:

sudo systemctl stop serial-getty@ttyAMA0.service

sudo systemctl disable serial-getty@ttyAMA0.service

However for the Raspberry Pi 3 you need to use the /dev/ttyS0 port since that is what is normally connected to the GPIO serial port pins. Use these two commands instead:

sudo systemctl stop serial-getty@ttyS0.service

sudo systemctl disable serial-getty@ttyS0.service

(4) Raspberry Pi 3 Only

For the Raspberry Pi 3, you need to explicitly enable the serial port on the GPIO pins. The reason for this is a change with the Pi 3 to use the hardware serial port for Bluetooth and instead use a slightly different software serial port for the GPIO pins. A side effect of this change is that the serial port will actually change speed as the Pi CPU clock throttles up and down; this will unfortunately cause problems for most serial devices like GPS receivers!

Luckily there’s an easy fix detailed in this excellent blog post to force the Pi CPU into a fixed frequency, which prevents speed changes on the serial port. The Pi might not perform as well, but it will have a stable serial port speed.

To make this change edit the /boot/config.txt file by running:

sudo nano /boot/config.txt

At the very bottom of the file add this on a new line:

enable_uart=1

Save the file (press Ctrl-O, then Enter) and exit (press Ctrl-X). You’re all set!

(5) Reboot your Pi

sudo reboot

(6) Restart GPSD with HW UART

Restart gpsd and redirect it to use the HW UART instead of the USB port we pointed it to earlier, by simply entering the following two commands.

For the Raspberry Pi 1 or 2 (but NOT the 3!) run these commands:

sudo killall gpsd
sudo gpsd /dev/ttyAMA0 -F /var/run/gpsd.sock

And for the Raspberry Pi 3 run these commands to use the different serial port:

sudo killall gpsd
sudo gpsd /dev/ttyS0 -F /var/run/gpsd.sock

As with the USB example, you can test the output with:

cgps -s
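
Once gpsd is running, a small Python 2 sketch like this one (assuming the python-gps client package, e.g. installed with 'sudo apt-get install python-gps') can read fixes; a Node-RED exec node or an MQTT publish could consume the same values:

# Sketch: read position fixes from the running gpsd and print them.
from gps import gps, WATCH_ENABLE

session = gps(mode=WATCH_ENABLE)    # connects to gpsd on localhost:2947
while True:
    report = session.next()         # blocks until gpsd sends a report
    if report['class'] == 'TPV' and hasattr(report, 'lat'):
        print('lat=%.6f lon=%.6f' % (report.lat, report.lon))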

Step 10: Using a dashboard for the robot


The dashboard provides visual UI widgets such as gauges and charts. There is a basic tutorial on building a Node-RED dashboard using ‘node-red-dashboard’:

http://developers.sensetecnic.com/article/a-node-red-dashboard-using-node-red-contrib-ui/

Step 11: Tuning PID controller


http://www.instructables.com/id/PID-Control-for-CPU-Temperature-of-Raspberry-Pi/

My instructable above is really helpful for tuning the PID gains for your system.

Adjusting the PID gains is a big job. Use my Node-RED source from the Download List.
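
As a rough reminder of what the three gains do (a generic sketch, not the internals of node-red-node-pidcontrol), the controller output is Kp*e + Ki*sum(e*dt) + Kd*de/dt:

# Generic discrete PID sketch, for intuition about the Kp/Ki/Kd gains being tuned.
class PID(object):
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: aim for a 50 C setpoint, sampled once per second.
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=50.0)
print(pid.update(measurement=47.0, dt=1.0))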

Step 12: (Optional) Programming a Pi-Scratch Robot


This part is optional and intended for kids' education; I developed it for my students in Sydney.

Let’s have fun with kids!!

Step 15: Version Note

————————————————————————–

Version rules

  • VerX.Y
    • X: Changed
    • Y: Added
    • (Ex 01) file__Ver0.2 : added something
    • (Ex 02) file__Ver1.0 : changed something

————————————————————————–

  • 06_Voice_Part_Ver0.2.txt : added a Watson Conversation (17 Jan 2017)

A smart JPEG camera for home security


First Prize IoT Builders Contest 2016 (IBM Watson IoT)


Introduction

This instructable will cover the basic steps that you need to follow to get started with open-source components such as the Watson nodes (Visual Recognition V3, Text To Speech) for IBM Bluemix, Node-RED, OpenCV, and MQTT v3.1. MQTT (Message Queueing Telemetry Transport) is a Machine-To-Machine (M2M) or Internet of Things (IoT) connectivity protocol that was designed to be extremely lightweight and useful when low power consumption and low network bandwidth are at a premium. It was invented in 1999 by Dr. Andy Stanford-Clark and Arlen Nipper and is now an OASIS standard.

I’ve already published an instructable on the Smart Gas Valve for Safety. In addition, the smart JPEG camera and the smart gas valve will communicate machine-to-machine (M2M) over MQTT. Specifically, this instructable will cover how to program Node-RED on the Raspberry Pi2 as an MQTT client that connects to your home wireless network and sends sensor data.

Step 1: Table of Contents

  • Step 0: Introduction
  • Step 1: Table of Contents
  • Step 2: Bill of Materials
  • Step 3: Setting up the Camera & PIR Sensor with Raspberry Pi
  • Step 4: Programming NodeRED on Raspberry Pi2
  • Step 5: Setting up MQTT v3.1 on Raspberry Pi2
  • Step 6: Checking your NodeRED codes with MQTT on Raspberry Pi2
  • Step 7: Programming Python JPEG Camera
  • Step 8: Adding IBM Watson, IBM NoSQL DB, Play-Audio, and Twilio
  • Step 9: Adding autostart files for every boot
  • Step 10: Testing M2M Communication
  • Step 11: (Optional) Using OpenCV
  • Step 12: Download list
  • Step 13: List of references

Step 2: Bill of Materials

  • Wifi dongle X 1ea
  • PIR motion sensor X 1ea
  • Android smartphone’s portable battery X 2ea
  • Node-RED software X 1ea
    • Free open source
    • Use the version pre-installed in Raspbian Jessie image since November 2015
    • Installation guide
  • MQTT v3.1 software X 1ea
    • Free open source
    • Installation guide includes at Step 5
  • NodeRED’s IBM Watson Nodes for Bluemix
    • Text to speech node X 1ea
    • Visual Recognition X 1ea
  • Speaker X 1ea
  • Minion X 1ea
    • You can easily buy it from eBay.

Step 3: Setting up the Camera & PIR Sensor with Raspberry Pi


Assembly steps for Smart JPEG Camera

(1) Connect the Raspberry Pi2 with a PIR motion sensor as shown above in the circuit diagram.

(2) Connect the PIR motion sensor to the Raspberry Pi2.

  • Raspberry Pi2 ——— PIR motion sensor
    • 5V —————- VCC
    • GND ————- GND
    • GPIO 18 ——– OUT

(4) Carefully attach the Pi camera to the Raspberry Pi2.

(5) Connect a portable battery to the Raspberry Pi2. (Use any portable battery with the same size connector cable for the Raspberry Pi2.)

Assembly steps for the Smart Gas Valve: here

Step 4: Programming NodeRED on Raspberry Pi2


How to start Node-RED in a web browser.

(1) Enter the command shown below in a terminal window.

node-red-start

(2) You will see an address like ‘Once Node-RED has started, point a browser at http://169.254.170.40:1880’ (it depends on your IP address).

(3) Open your web browser.

(4) Copy the address and paste it into the web browser.

(5) The Node-RED visual editor will be displayed in the web browser.

(6) You can start coding with the visual editor in the web browser.

(7) Try dragging & dropping any node from the left-hand side to the right-hand side. It’s really easy to code. (You can conveniently use the visual editor offline as well as online.)

To import the flow, download the ‘SmartGasValve_NodeRED.txt’ file, then:

(1) Click the number (1) at the right-hand corner of Node-RED in the web browser.

(2) Click the Import button in the drop-down menu.

(3) Open the Clipboard shown in the 1st picture above.

(4) Lastly, paste the given JSON text of ‘SmartJPGCameraNoCredits_NodeRED_ver0.1.txt’ into the Import nodes editor.

Step 5: Setting up MQTT v3.1 on Raspberry Pi2


This message broker (Mosquitto) supports MQTT v3.1; it is easily installed on the Raspberry Pi but somewhat less easy to configure. Next, we step through installing and configuring the Mosquitto broker. We are going to install and test the MQTT broker “mosquitto” in the terminal window.

curl -O http://repo.mosquitto.org/debian/mosquitto-repo.gpg.key
sudo apt-key add mosquitto-repo.gpg.key
rm mosquitto-repo.gpg.key
cd /etc/apt/sources.list.d/
sudo curl -O http://repo.mosquitto.org/debian/mosquitto-jessie.list
sudo apt-get update

Next install the broker and command line clients:

  • mosquitto – the MQTT broker (or in other words, a server)
  • mosquitto-clients – command line clients, very useful in debugging
  • python-mosquitto – the Python language bindings
sudo apt-get install mosquitto mosquitto-clients python-mosquitto

As is the case with most packages from Debian, the broker is immediately started. Since we have to configure it first, stop it.

sudo /etc/init.d/mosquitto stop

Now that the MQTT broker is installed on the Pi we will add some basic security.
Create a config file:

cd /etc/mosquitto/conf.d/

sudo nano mosquitto.conf

Let’s stop anonymous clients connecting to our broker by adding a few lines to your config file. To control client access to the broker we also need to define valid client names and passwords. Add the lines:

allow_anonymous false

password_file /etc/mosquitto/conf.d/passwd

require_certificate false

Save and exit your editor (nano in this case).
From the current /conf.d directory, create an empty password file:

sudo touch passwd

We will use the mosquitto_passwd tool to create a password hash for user pi:

sudo mosquitto_passwd -c /etc/mosquitto/conf.d/passwd pi

You will be asked to enter your password twice. Enter the password you wish to use for the user you defined.

Testing Mosquitto on Raspberry Pi

Now that Mosquitto is installed we can perform a local test to see if it is working:
Open three terminal windows. In one, make sure the Mosquitto broker is running:

mosquitto

In the next terminal, run the command line subscriber:

mosquitto_sub -v -t 'topic/test'

You should see the first terminal window echo that a new client is connected.
In the next terminal, run the command line publisher:

mosquitto_pub -t 'topic/test' -m 'helloWorld'

You should see another message in the first terminal window saying another client is connected. You should also see this message in the subscriber terminal:

topic/test helloWorld

We have shown that Mosquitto is configured correctly and we can both publish and subscribe to a topic.
When you have finished all the testing, start the broker again:

sudo /etc/init.d/mosquitto start
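
A Python client can also talk to this secured broker; here is a minimal sketch (assuming 'pip install paho-mqtt' and the 'pi' user created above) that publishes to the same topic/test topic used in the terminal test:

# Sketch: publish one authenticated message to the local Mosquitto broker.
import paho.mqtt.publish as publish

publish.single('topic/test', 'helloWorld from Python',
               hostname='localhost', port=1883,
               auth={'username': 'pi', 'password': 'your_password'})  # the password set with mosquitto_passwd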

Step 6: Checking your NodeRED codes with MQTT on Raspberry Pi2


When you import the JSON from the ‘SmartGasValve_NodeRED.txt’ file into Node-RED, each node is automatically set up with its data. I have already set up the data in each node.

(1) Click each node.

(2) Check information inside each node has been prefilled.

(3) Please don’t change the set data.

(The above can be customized for more advanced users.)

Step 7: Programming Python JPEG Camera


First of all, you should test the camera module in the terminal window.

raspistill -o test.jpg

You should see the test.jpg in ‘/home/pi’

cd /home/pi
mkdir pythonPir
cd pythonPir
sudo nano pircameraNodeRED.py

Type the code below, or put the enclosed ‘pircameraNodeRED.py’ file into the ‘/home/pi/pythonPir’ folder.

import RPi.GPIO as GPIO 
import time
import picamera
import datetime 

timeFormat = 0

GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN)  # M2M signal from the gas valve
GPIO.setup(18, GPIO.IN)  # PIR motion sensor output
camera = picamera.PiCamera()

while True:
        input17 = GPIO.input(17)  # Read the gas valve signal (GPIO 17)
        input18 = GPIO.input(18)  # Read the PIR motion sensor (GPIO 18)
        now = datetime.datetime.now()
        timeFormat = now.strftime("%Y%m%d_%H%M_%S.%s") # Date and time for the image file name

        if input17 == True or input18 == True:  # If either input is high, take a picture
                print('Motion_Detected_%s' %timeFormat)
                camera.capture('image_%s.jpg' %timeFormat) # Take a picture

                time.sleep(1) # Wait 1 second between captures

When you finish typing, you should press the keys ‘Control‘ + ‘x‘ and press ‘y‘ to save this file.

Making an image file server

cd /home/pi
mkdir camserver
sudo nano requirements.txt

Type the contents below, or put the enclosed ‘requirements.txt’ file into the ‘/home/pi/camserver’ folder.

numpy==1.10.1
websocket-client==0.35.0
websocket-server==0.4
ibmiotf==0.2.3

Then install the requirements:

pip install --user -r requirements.txt

Run an image file server from /home/pi as below.

cd /home/pi
python -m SimpleHTTPServer 7000

Step 8: Adding IBM Watson, IBM NoSQL DB, Play-Audio, and Twilio


Searching the Nodes

Node-RED comes with a core set of useful nodes, but there are a growing number of additional nodes available for installing from both the Node-RED project as well as the wider community. You can search for available nodes in the Node-RED library or on the npm repository.

  • For example, we are going to search for Twilio on the npm website. Click here.
  • Then, we are going to install Twilio on the Raspberry Pi.

Installing an npm-packaged node

To add additional nodes you must first install the npm tool, as it is not included in the default installation. The following commands install npm and then upgrade it to the latest 2.x version.

sudo apt-get update
sudo apt-get install npm
sudo npm install -g npm@2.x
hash -r
cd /home/pi/.node-red
  • For example, ‘npm install node-red-{example node name}’
  • Copy ‘npm install node-red-node-twilio’ from the npm website and paste it into a terminal window.
  • Ex: node-red-node-watson, node-red-contrib-play-audio, node-red-dashboard, and node-red-node-pidcontrol.
npm install node-red-node-twilio
  • You will need to restart Node-RED for it to pick up the new nodes.
node-red-stop

node-red-start
  • Close your web browser and reopen the web browser.

Step 9: Adding autostart files for every boot.


How to make files start automatically at every boot.

  • Mosquitto
cd /etc/xdg/autostart/
sudo nano flyMosquitto.desktop

Type the description below, or put the enclosed ‘flyMosquitto.desktop’ file into the autostart folder.

[Desktop Entry] 
Type=Application
Name=flyMosquitto
Comment=Fly my mosquitto
Exec=cd /etc/mosquitto/conf.d/
Exec=mosquitto
  • Node-RED
sudo systemctl enable nodered.service
  • Python JPEG Camera
cd /etc/xdg/autostart/
sudo nano pircameraNodeRED.desktop

Type the description below or put the ‘pircameraNodeRED.desktop’ file into /etc/xdg/autostart/ folder.

[Desktop Entry]
Type=Application
Name=pircameraNodeRED.py
Comment=Start my security camera
NoDisplay=false
Exec=python /home/pi/pythonPir/pircameraNodeRED.py
NotShowIn=GNOME;KDE;XFCE;
Name[en_US]=pircamera.py
  • Image file Server
cd /etc/xdg/autostart/
sudo nano imageFileServer.desktop

Type the description below or put the ‘imageFileServer.desktop’ file into /etc/xdg/autostart/ folder.

[Desktop Entry]
Type=Application 
Name=imageFileServer 
Comment=Start an image file server 
NoDisplay=false 
Exec=cd /home/pi 
Exec=python -m SimpleHTTPServer 7000

Step 10: Testing M2M Communication.


Import the enclosed files into the Node-RED instance on each device.

(1) Using a smart JPEG camera

Import the ‘M2M_SmartJPGCamera.txt‘ into the NodeRED of the smart JPEG camera.

(2) Using a smart gas valve

Import the ‘M2M_SmartGasValve.txt‘ into the NodeRED of the smart gas valve.

(3) Check the IP address of the smart gas valve's Raspberry Pi2.

Type ‘ifconfig’ on a terminal window as shown below.

ifconfig

When you see the IP address, copy it from the terminal window.

(4) Put the IP address into the MQTT node on the other Raspberry Pi2.

  1. Click the MQTT node.
  2. Put the IP address into Server.
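
Conceptually, the MQTT node on this side does something like the Python sketch below (the IP address and topic are placeholders): it connects to the broker running on the smart gas valve Pi and waits for a message.

# Sketch: subscribe to the gas-valve Pi's broker and wait for one message.
import paho.mqtt.subscribe as subscribe

GAS_VALVE_PI_IP = '192.168.0.10'   # replace with the IP address found via ifconfig
msg = subscribe.simple('gasValve/#', hostname=GAS_VALVE_PI_IP)
print('%s %s' % (msg.topic, msg.payload))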

Step 11: (Optional) Using OpenCV


Installing & Using OpenCV on Raspberry Pi2

We have already used IBM Watson Visual Recognition. Watson Visual Recognition works very well, but we cannot use it without a Wi-Fi connection. OpenCV works without an internet connection, but it is not very easy for a beginner to install and program. So, I’m going to install OpenCV.

  • Download ‘opencv-3.1.0.zip’ from opencv.org
  • Install dependencies
sudo apt-get update
sudo apt-get install build-essential
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev python-dev python-numpy libjpeg-dev libpng-dev libtiff-dev libjasper-dev
  • (Optional) Install OpenCV 2
sudo apt-get install python-opencv
  • Install OpenCV 3
unzip ~/Downloads/opencv-3.1.0.zip
cd opencv-3.1.0/
mkdir build
cd build/
cmake -DCMAKE_BUILD_TYPE=Debug -DBUILD_TESTS=NO -DBUILD_PERF_TESTS=NO ..
make -j3
sudo make install
sudo ldconfig
  • Check which version of OpenCV you have in Python
python
import cv2
cv2.__version__
  • Run the simple face-detect sample and look at its code to see how it works (a minimal version is sketched after these commands):
  • Before running it, you should connect a USB camera to the Raspberry Pi2
cd /home/pi/opencv-3.1.0/samples/python
python ./facedetect.py
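
Below is a minimal sketch of the same idea as facedetect.py (the cascade path is an assumption; adjust it to wherever the OpenCV source or package put haarcascade_frontalface_default.xml): grab frames from the USB camera and draw boxes around detected faces.

# Minimal Haar-cascade face detection from the USB camera; press q to quit.
import cv2

CASCADE = '/home/pi/opencv-3.1.0/data/haarcascades/haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(CASCADE)

cap = cv2.VideoCapture(0)               # first USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()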

Coding Jarvis in Python in 2016

Gurwinder Gulati's Blog

It’s tough for an erstwhile Iron Man to work on creating their personal AI assistant on the weekends. Like any other time-pressured inventor without a PhD in computer science and linguistics, I decided to use a library for speech recognition and synthesis. Fortunately, Python offers several choices. Unfortunately, many of them simply don’t work any more. I will discuss the ones that are still functional and can be used with Python 2.7 and Python 3 (up to Python 3.5 at the time of writing).

My AI assistant is actually a little humbler – I call it Samwise


Most Cited Deep Learning Papers by Terry T. Um

[ Most Cited Deep Learning Papers By Terry T. Um  ]

Awesome

A curated list of the most cited deep learning papers (since 2010)

I believe that there exist classic deep learning papers which are worth reading regardless of their applications. Rather than providing overwhelming amount of papers, I would like to provide a curated list of the classic deep learning papers which can be considered as must-reads in some area.

Awesome list criteria

  • 2016 : Based on discussions
  • 2015 : +100 citations (✨ +200)
  • 2014 : +200 citations (✨ +400)
  • 2013 : +300 citations (✨ +600)
  • 2012 : +400 citations (✨ +800)
  • 2011 : +500 citations (✨ +1000)
  • 2010 : +600 citations (✨ +1200)

I need your contributions!

Table of Contents

Survey / Review

  • Deep learning (2016), Goodfellow et al. (Bengio) [html]
  • Deep learning (2015), Y. LeCun, Y. Bengio and G. Hinton [pdf]
  • Deep learning in neural networks: An overview (2015), J. Schmidhuber [pdf]
  • Representation learning: A review and new perspectives (2013), Y. Bengio et al. [pdf]

Theory / Future

  • Distilling the knowledge in a neural network (2015), G. Hinton et al. [pdf]
  • Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (2015), A. Nguyen et al.[pdf]
  • How transferable are features in deep neural networks? (2014), J. Yosinski et al. (Bengio) [pdf]
  • Why does unsupervised pre-training help deep learning (2010), E. Erhan et al. (Bengio) [pdf]
  • Understanding the difficulty of training deep feedforward neural networks (2010), X. Glorot and Y. Bengio [pdf]

Optimization / Regularization

  • Taking the human out of the loop: A review of bayesian optimization (2016), B. Shahriari et al. [pdf]
  • Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (2015), S. Ioffe and C. Szegedy [pdf]
  • Delving deep into rectifiers: Surpassing human-level performance on imagenet classification (2015), K. He et al. [pdf]
  • Dropout: A simple way to prevent neural networks from overfitting (2014), N. Srivastava et al. (Hinton) [pdf]
  • Adam: A method for stochastic optimization (2014), D. Kingma and J. Ba [pdf]
  • Regularization of neural networks using dropconnect (2013), L. Wan et al. (LeCun) [pdf]
  • Improving neural networks by preventing co-adaptation of feature detectors (2012), G. Hinton et al. [pdf]
  • Spatial pyramid pooling in deep convolutional networks for visual recognition (2014), K. He et al. [pdf]
  • Random search for hyper-parameter optimization (2012) J. Bergstra and Y. Bengio [pdf]

Network Models

  • Deep residual learning for image recognition (2016), K. He et al. (Microsoft) [pdf]
  • Going deeper with convolutions (2015), C. Szegedy et al. (Google) [pdf]
  • Fast R-CNN (2015), R. Girshick [pdf]
  • Very deep convolutional networks for large-scale image recognition (2014), K. Simonyan and A. Zisserman [pdf]
  • Fully convolutional networks for semantic segmentation (2015), J. Long et al. [pdf]
  • OverFeat: Integrated recognition, localization and detection using convolutional networks (2014), P. Sermanet et al.(LeCun) [pdf]
  • Visualizing and understanding convolutional networks (2014), M. Zeiler and R. Fergus [pdf]
  • Maxout networks (2013), I. Goodfellow et al. (Bengio) [pdf]
  • ImageNet classification with deep convolutional neural networks (2012), A. Krizhevsky et al. (Hinton) [pdf]
  • Large scale distributed deep networks (2012), J. Dean et al. [pdf]
  • Deep sparse rectifier neural networks (2011), X. Glorot et al. (Bengio) [pdf]

Image

  • Imagenet large scale visual recognition challenge (2015), O. Russakovsky et al. [pdf]
  • Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks (2015), S. Ren et al. [pdf]
  • DRAW: A recurrent neural network for image generation (2015), K. Gregor et al. [pdf]
  • Rich feature hierarchies for accurate object detection and semantic segmentation (2014), R. Girshick et al. [pdf]
  • Learning and transferring mid-Level image representations using convolutional neural networks (2014), M. Oquab et al.[pdf]
  • DeepFace: Closing the Gap to Human-Level Performance in Face Verification (2014), Y. Taigman et al. (Facebook) [pdf]
  • Decaf: A deep convolutional activation feature for generic visual recognition (2013), J. Donahue et al. [pdf]
  • Learning Hierarchical Features for Scene Labeling (2013), C. Farabet et al. (LeCun) [pdf]
  • Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis (2011), Q. Le et al. [pdf]
  • Learning mid-level features for recognition (2010), Y. Boureau (LeCun) [pdf]

Caption

  • Show, attend and tell: Neural image caption generation with visual attention (2015), K. Xu et al. (Bengio) [pdf]
  • Show and tell: A neural image caption generator (2015), O. Vinyals et al. [pdf]
  • Long-term recurrent convolutional networks for visual recognition and description (2015), J. Donahue et al. [pdf]
  • Deep visual-semantic alignments for generating image descriptions (2015), A. Karpathy and L. Fei-Fei [pdf]

Video / Human Activity

  • Large-scale video classification with convolutional neural networks (2014), A. Karpathy et al. (FeiFei) [pdf]
  • A survey on human activity recognition using wearable sensors (2013), O. Lara and M. Labrador [pdf]
  • 3D convolutional neural networks for human action recognition (2013), S. Ji et al. [pdf]
  • Deeppose: Human pose estimation via deep neural networks (2014), A. Toshev and C. Szegedy [pdf]
  • Action recognition with improved trajectories (2013), H. Wang and C. Schmid [pdf]

Word Embedding

  • Glove: Global vectors for word representation (2014), J. Pennington et al. [pdf]
  • Sequence to sequence learning with neural networks (2014), I. Sutskever et al. [pdf]
  • Distributed representations of sentences and documents (2014), Q. Le and T. Mikolov [pdf] (Google)
  • Distributed representations of words and phrases and their compositionality (2013), T. Mikolov et al. (Google) [pdf]
  • Efficient estimation of word representations in vector space (2013), T. Mikolov et al. (Google) [pdf]
  • Word representations: a simple and general method for semi-supervised learning (2010), J. Turian (Bengio) [pdf]

Machine Translation / QnA

  • Towards ai-complete question answering: A set of prerequisite toy tasks (2015), J. Weston et al. [pdf]
  • Neural machine translation by jointly learning to align and translate (2014), D. Bahdanau et al. (Bengio) [pdf]
  • Learning phrase representations using RNN encoder-decoder for statistical machine translation (2014), K. Cho et al.(Bengio) [pdf]
  • A convolutional neural network for modelling sentences (2014), N. Kalchbrenner et al. [pdf]
  • Convolutional neural networks for sentence classification (2014), Y. Kim [pdf]
  • The stanford coreNLP natural language processing toolkit (2014), C. Manning et al. [pdf]
  • Recursive deep models for semantic compositionality over a sentiment treebank (2013), R. Socher et al. [pdf]
  • Natural language processing (almost) from scratch (2011), R. Collobert et al. [pdf]
  • Recurrent neural network based language model (2010), T. Mikolov et al. [pdf]

Speech / Etc.

  • Speech recognition with deep recurrent neural networks (2013), A. Graves (Hinton) [pdf]
  • Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups (2012), G. Hinton et al. [pdf]
  • Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition (2012) G. Dahl et al. [pdf]

RL / Robotics

  • Mastering the game of Go with deep neural networks and tree search (2016), D. Silver et al. (DeepMind) [pdf]
  • Human-level control through deep reinforcement learning (2015), V. Mnih et al. (DeepMind) [pdf]
  • Deep learning for detecting robotic grasps (2015), I. Lenz et al. [pdf]
  • Playing atari with deep reinforcement learning (2013), V. Mnih et al. (DeepMind) [pdf]

Unsupervised

  • Building high-level features using large scale unsupervised learning (2013), Q. Le et al. [pdf]
  • Contractive auto-encoders: Explicit invariance during feature extraction (2011), S. Rifai et al. (Bengio) [pdf]
  • An analysis of single-layer networks in unsupervised feature learning (2011), A. Coates et al. [pdf]
  • Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion (2010), P. Vincent et al. (Bengio) [pdf]
  • A practical guide to training restricted boltzmann machines (2010), G. Hinton [pdf]

Hardware / Software

  • TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems (2016), M. Abadi et al. (Google) [pdf]
  • MatConvNet: Convolutional neural networks for matlab (2015), A. Vedaldi and K. Lenc [pdf]
  • Caffe: Convolutional architecture for fast feature embedding (2014), Y. Jia et al. [pdf]
  • Theano: new features and speed improvements (2012), F. Bastien et al. (Bengio) [pdf]

License

CC0

To the extent possible under law, Terry T. Um has waived all copyright and related or neighboring rights to this work.

[PDF] Official Deep Learning Book By Yoshua Bengio

[PDF: Official Deep Learning Book By Yoshua Bengio ]


http://www.deeplearningbook.org/