YOLO-Powered_Robot_Vision


Introduction

This is a Raspberry Pi-based robot that implements visual recognition with YOLO. The YOLO-powered vision system can recognize many kinds of objects, such as people, cars, buses, and fruit.

  • Hardware: Raspberry Pi 2, Sony PS3 Eye camera

    (A Logitech C270 USB camera also works with the Raspberry Pi.)

  • Software: YOLO (v2), Jupyter Notebook

(Figure: Structure.png)

My motivation

I was very interested in the performance of image recognition with YOLOv2 on the Raspberry Pi. In addition, a Jupyter notebook is really convenient for instantly coding a quick prototype. From the paper, I learned that YOLO is a fast, accurate visual detector, making it ideal for computer vision systems; the authors connect YOLO to a webcam and verify that it maintains real-time performance. The Raspberry Pi's processing speed, however, is very slow compared to my laptop.

(Figure: Picasso Dataset precision-recall curves, from the paper: Perfomance_Picaso.png)

(Figure: the network architecture, from the paper: Architecture_CNN.png)

Requirements and Installation

Quick Start

This post will guide you through detecting objects with the YOLO system using a pre-trained model. If you don't already have Darknet installed, first install OpenCV on your Raspberry Pi.

  • Install dependencies for OpenCV2
sudo apt-get update

sudo apt-get install build-essential
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev python-dev python-numpy libjpeg-dev libpng-dev libtiff-dev libjasper-dev

sudo apt-get install python-opencv
  • Check which version of OpenCV you have in Python
python
import cv2
cv2.__version__
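You should see something like '2.4.9' printed (the exact version depends on your Raspbian release; any OpenCV 2.x should be fine here).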
  • Install Darknet for YOLO
git clone https://github.com/pjreddie/darknet
cd darknet
make

Easy!

You already have the config file for YOLO in the cfg/ subdirectory. You will have to download the pre-trained weights file here (258 MB), or just run this:

wget http://pjreddie.com/media/files/yolo.weights

Then run the detector to test it:

./darknet detect cfg/yolo.cfg yolo.weights data/dog.jpg

You will see some output like this (note: the prediction time shown below is from a fast machine; on a Raspberry Pi 2 a single image takes much longer, as mentioned above):

layer     filters    size              input                output
    0 conv     32  3 x 3 / 1   416 x 416 x   3   ->   416 x 416 x  32
    1 max          2 x 2 / 2   416 x 416 x  32   ->   208 x 208 x  32
    .......
   29 conv    425  1 x 1 / 1    13 x  13 x1024   ->    13 x  13 x 425
   30 detection
Loading weights from yolo.weights...Done!
data/dog.jpg: Predicted in 0.016287 seconds.
car: 54%
bicycle: 51%
dog: 56%

Output

  • Rename the 'darknet' directory to 'YOLO-Powered_Robot_Vision'.
mv /home/pi/Documents/darknet /home/pi/Documents/YOLO-Powered_Robot_Vision
cd /home/pi/Documents/YOLO-Powered_Robot_Vision
  • Download 'YOLO-Powered_Robot_Vision.ipynb' into /home/pi/Documents/YOLO-Powered_Robot_Vision
wget https://github.com/leehaesung/YOLO-Powered_Robot_Vision/raw/master/YOLO-Powered_Robot_Vision.ipynb

Then start Jupyter Notebook:

jupyter-notebook
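As a rough sketch of how a notebook cell can drive Darknet (this is my own illustration, not the notebook's exact code; the file names and camera index are assumptions), the following Python snippet grabs a frame from the USB camera with OpenCV and shells out to the Darknet binary built above:

import subprocess
import cv2

# Grab one frame from the PS3 Eye / C270 on /dev/video0.
camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()
if not ok:
    raise RuntimeError("Could not read a frame from the camera")

# Darknet reads images from disk, so save the frame first.
cv2.imwrite("capture.jpg", frame)

# Run the same command as in the Quick Start on the captured frame.
output = subprocess.check_output(
    ["./darknet", "detect", "cfg/yolo.cfg", "yolo.weights", "capture.jpg"])
print(output.decode())  # "label: confidence" lines, as in the sample output above

Darknet also writes an annotated predictions image next to the binary; that is the kind of picture shown in the results below.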

Source Codes

Result of Object Recognition

(Caution: I have used these image sources for educational purposes only. Please do not use copyrighted pictures; I am not responsible for any images you use.)

(Image: predictions06.png)

  • More examples:

(Image: predictions07.png)

(Image: predictions08.png)

(Image: predictions09.png)

(Image: predictions11.png)


Videos

  1. Deep Learning and Neural Networks with Kevin Duh: course page
  2. NYU Course by Yann LeCun: 2014 version, 2015 version
  3. NIPS 2015 Deep Learning Tutorial by Yann LeCun and Yoshua Bengio (slides) (mp4, wmv)
  4. ICML 2013 Deep Learning Tutorial by Yann LeCun (slides)
  5. Geoffrey Hinton's Coursera course on Neural Networks for Machine Learning
  6. Stanford 231n Class: Convolutional Neural Networks for Visual Recognition (videos, github, syllabus, subreddit, project final reports, twitter)
  7. Large Scale Visual Recognition Challenge 2014, arxiv paper
  8. GTC Deep Learning 2015
  9. Hugo Larochelle Neural Networks class, slides
  10. My youtube playlist
  11. Yaser Abu-Mostafa’s Learning from Data course (youtube playlist)
  12. Stanford CS224d: Deep Learning for Natural Language Processing: syllabus, youtube playlist, reddit, longer playlist
  13. Neural Networks for Machine Perception: vimeo
  14. Deep Learning for NLP (without magic): page, better page, video1, video2, youtube playlist
  15. Introduction to Deep Learning with Python: video, slides, code
  16. Machine Learning course with emphasis on Deep Learning by Nando de Freitas (youtube playlist), course page, torch practicals
  17. NIPS 2013 Deep Learning for Computer Vision Tutorial – Rob Fergus: video, slides
  18. Tensorflow Udacity mooc
  19. Oxford Deep NLP Course 2017 (github)

Links

  1. Deeplearning.net
  2. NVidia’s Deep Learning portal
  3. My flipboard page

AMIs, Docker images & Install Howtos

  1. Stanford 231n AWS AMI: image is cs231n_caffe_torch7_keras_lasagne_v2, AMI ID: ami-125b2c72. Caffe, Torch7, Theano, Keras and Lasagne are pre-installed. Python bindings of caffe are available. It has CUDA 7.5 and CuDNN v3.
  2. AMI for AWS EC2 (g2.2xlarge): ubuntu14.04-mkl-cuda-dl (ami-03e67874) in Ireland Region: page. Installed software: Intel MKL, CUDA 7.0, cuDNN v2, theano, pylearn2, CXXNET, Caffe, cuda-convnet2, OverFeat, nnForge, Graphlab Create (GPU), etc.
  3. Chef cookbook for installing the Caffe deep learning framework
  4. Public EC2 AMI with Torch and Caffe deep learning toolkits (ami-027a4e6a): page
  5. Install Theano on AWS (ami-b141a2f5 with CUDA 7): page
  6. Running Caffe on AWS Instance via Docker: page, docs, image
  7. CVPR 2015 ITorch Tutorial (ami-b36981d8): page, github, cheatsheet
  8. Torch/iTorch/Ubuntu 14.04 Docker image: docker pull kaixhin/torch
  9. Torch/iTorch/CUDA 7/Ubuntu 14.04 Docker image: docker pull kaixhin/cuda-torch
  10. AMI containing Caffe, Python, Cuda 7, CuDNN, and all dependencies. Its id is ami-763a311e (disk min 8G, system is 4.6G), howto
  11. My Dockerfiles at GitHub

Examples and Tutorials

  1. IPython Caffe Classification
  2. IPython Detection, arxiv paper, rcnn github, selective search
  3. Machine Learning with Torch 7
  4. Deep Learning Tutorials with Theano/Python, CNN, github
  5. Torch tutorials, tutorial & demos from Clement Farabet
  6. Brewing Imagenet with Caffe
  7. Training an Object Classifier in Torch-7 on multiple GPUs over ImageNet
  8. Stanford Deep Learning Matlab based Tutorial (github, data)
  9. DIY Deep Learning for Vision: A Hands on tutorial with Caffe (google doc)
  10. Tutorial on Deep Learning for Vision CVPR 2014: page
  11. Pylearn2 tutorials: convolutional network, getthedata
  12. Pylearn2 quickstart, docs
  13. So you wanna try deep learning? post from SnippyHollow
  14. Object Detection ipython nb from SnippyHollow
  15. Filter Visualization ipython nb from SnippyHollow
  16. Specifics on CNN and DBN, and more
  17. CVPR 2015 Caffe Tutorial
  18. Deep Learning on Amazon EC2 GPU with Python and nolearn
  19. How to build and run your first deep learning network (video, behind paywall)
  20. Tensorflow examples
  21. Illia Polosukhin’s Getting Started with Tensorflow – Part 1, Part 2, Part 3
  22. CNTK Tutorial at NIPS 2015
  23. CNTK: FFN, CNN, LSTM, RNN
  24. CNTK Introduction and Book

People

  1. Geoffrey Hinton: Homepage, Reddit AMA (11/10/2014)
  2. Yann LeCun: Homepage, NYU Research Page, Reddit AMA (5/15/2014)
  3. Yoshua Bengio: Homepage, Reddit AMA (2/27/2014)
  4. Clement Farabet: Scene Parsing (paper), github, code page
  5. Andrej Karpathy: Homepage, twitter, github, blog
  6. Michael I Jordan: Homepage, Reddit AMA (9/10/2014)
  7. Andrew Ng: Homepage, Reddit AMA (4/15/2015)
  8. Jürgen Schmidhuber: Homepage, Reddit AMA (3/4/2015)
  9. Nando de Freitas: Homepage, YouTube, Reddit AMA (12/26/2015)

Datasets

  1. ImageNet
  2. MNIST (Wikipedia), database
  3. Kaggle datasets
  4. Kitti Vision Benchmark Suite
  5. Ford Campus Vision and Lidar Dataset
  6. PCL Lidar Datasets
  7. Pylearn2 list

Frameworks and Libraries

  1. Caffe: homepage, github, google group
  2. Torch: homepage, cheatsheet, github, google group
  3. Theano: homepage, google group
  4. Tensorflow: homepage, github, google group, skflow
  5. CNTK: homepage, github, wiki
  6. CuDNN: homepage
  7. PaddlePaddle: homepage, github, docs, quick start
  8. fbcunn: github
  9. pylearn2: github, docs
  10. cuda-convnet2: homepage, cuda-convnet, matlab
  11. nnForge: homepage
  12. Deep Learning software links
  13. Torch vs. Theano post
  14. Overfeat: page, github, paper, slides, google group
  15. Keras: github, docs, google group
  16. Deeplearning4j: page, github
  17. Lasagne: docs, github

Topics

  1. Scene Understanding (CVPR 2013, Lecun) (slides), Scene Parsing (paper)
  2. Overfeat: Integrated Recognition, Localization and Detection using Convolutional Networks (arxiv)
  3. Parsing Natural Scenes and Natural Language with Recursive Neural Networks: page, ICML 2011 paper

Reddit

  1. Machine Learning Reddit page
  2. Computer Vision Reddit page
  3. Reddit: Neural Networks: new, relevant
  4. Reddit: Deep Learning: new, relevant

Books

  1. Learning Deep Architectures for AI, Bengio (pdf)
  2. Neural Nets and Deep Learning (html, github)
  3. Deep Learning, Bengio, Goodfellow, Courville (html)
  4. Neural Networks and Learning Machines, Haykin, 2008 (amazon)

Papers

  1. ImageNet Classification with Deep Convolutional Neural Networks, Alex Krizhevsky, Ilya Sutskever, Geoffrey E Hinton, NIPS 2012 (paper)
  2. Why does unsupervised pre-training help deep learning? (paper)
  3. Hinton06 – Autoencoders (paper)
  4. Deep Learning using Linear Support Vector machines (paper)

Companies

  1. Kaggle: homepage
  2. Microsoft Deep Learning Technology Center

Conferences

  1. ICML
  2. PAMITC Sponsored Conferences
  3. NIPS: 2015

Installing & Testing Google TensorFlow on Raspberry Pi2


Let’s install TensorFlow.

sudo apt-get update
# For Python 3.3+
sudo apt-get install python3-pip python3-dev

# For Python 3.3+
wget https://github.com/samjabrahams/tensorflow-on-raspberry-pi/releases/download/v1.0.1/tensorflow-1.0.1-cp34-cp34m-linux_armv7l.whl
sudo pip3 install tensorflow-1.0.1-cp34-cp34m-linux_armv7l.whl

# For Python 3.3+
sudo pip3 uninstall mock
sudo pip3 install mock
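To verify the installation, check the version from Python (it should report 1.0.1):

python3
import tensorflow as tf
tf.__version__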

Then let's run the code below.

Ref.: https://www.tensorflow.org/get_started/

 

pi@raspberrypi:~ $ python3
Python 3.4.2 (default, Oct 19 2014, 13:31:11)
[GCC 4.9.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
import tensorflow as tf
import numpy as np

# Create 100 phony x, y data points in NumPy, y = x * 0.1 + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# Try to find values for W and b that compute y = W * x_data + b
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b

# Minimize the mean squared error with gradient descent.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# Initialize the variables and launch the graph.
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

# Fit the line; W should converge to 0.1 and b to 0.3.
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))


0 [ 0.2893075] [ 0.26960531]
20 [ 0.14367677] [ 0.27712572]
40 [ 0.11184501] [ 0.29379657]
60 [ 0.10321232] [ 0.29831767]
80 [ 0.10087116] [ 0.29954377]
100 [ 0.10023624] [ 0.29987627]
120 [ 0.10006408] [ 0.29996645]
140 [ 0.10001738] [ 0.29999092]
160 [ 0.1000047] [ 0.29999754]
180 [ 0.10000128] [ 0.29999936]
200 [ 0.10000037] [ 0.29999983]


Deep Learning Libraries

Software links

  1. Theano – CPU/GPU symbolic expression compiler in python (from MILA lab at University of Montreal)
  2. Torch – provides a Matlab-like environment for state-of-the-art machine learning algorithms in lua (from Ronan Collobert, Clement Farabet and Koray Kavukcuoglu)
  3. Pylearn2 – Pylearn2 is a library designed to make machine learning research easy.
  4. Blocks – A Theano framework for training neural networks
  5. Tensorflow – TensorFlow™ is an open source software library for numerical computation using data flow graphs.
  6. MXNet – MXNet is a deep learning framework designed for both efficiency and flexibility.
  7. Caffe – Caffe is a deep learning framework made with expression, speed, and modularity in mind.
  8. Lasagne – Lasagne is a lightweight library to build and train neural networks in Theano.
  9. Keras – A Theano-based deep learning library.
  10. Deep Learning Tutorials – examples of how to do Deep Learning with Theano (from LISA lab at University of Montreal)
  11. Chainer – A GPU based Neural Network Framework
  12. Matlab Deep Learning – Matlab Deep Learning Tools
  13. CNTK – Computational Network Toolkit – is a unified deep-learning toolkit by Microsoft Research.
  14. MatConvNet – A MATLAB toolbox implementing Convolutional Neural Networks (CNNs) for computer vision applications. It is simple, efficient, and can run and learn state-of-the-art CNNs.
  15. DeepLearnToolbox – A Matlab toolbox for Deep Learning (from Rasmus Berg Palm)
  16. Cuda-Convnet – A fast C++/CUDA implementation of convolutional (or more generally, feed-forward) neural networks. It can model arbitrary layer connectivity and network depth. Any directed acyclic graph of layers will do. Training is done using the back-propagation algorithm.
  17. Deep Belief Networks. Matlab code for learning Deep Belief Networks (from Ruslan Salakhutdinov).
  18. RNNLM – Tomas Mikolov's Recurrent Neural Network based Language models Toolkit.
  19. RNNLIB – RNNLIB is a recurrent neural network library for sequence learning problems. Applicable to most types of spatiotemporal data, it has proven particularly effective for speech and handwriting recognition.
  20. matrbm – Simplified version of Ruslan Salakhutdinov's code, by Andrej Karpathy (Matlab).
  21. deeplearning4j – Deeplearning4J is an Apache 2.0-licensed, open-source, distributed neural net library written in Java and Scala.
  22. Estimating Partition Functions of RBM’s. Matlab code for estimating partition functions of Restricted Boltzmann Machines using Annealed Importance Sampling (from Ruslan Salakhutdinov).
  23. Learning Deep Boltzmann Machines Matlab code for training and fine-tuning Deep Boltzmann Machines (from Ruslan Salakhutdinov).
  24. The LUSH programming language and development environment, which is used @ NYU for deep convolutional networks
  25. Eblearn.lsh is a LUSH-based machine learning library for doing Energy-Based Learning. It includes code for “Predictive Sparse Decomposition” and other sparse auto-encoder methods for unsupervised learning. Koray Kavukcuoglu provides Eblearn code for several deep learning papers on this page.
  26. deepmat – Deepmat, Matlab based deep learning algorithms.
  27. MShadow – MShadow is a lightweight CPU/GPU Matrix/Tensor Template Library in C++/CUDA. The goal of mshadow is to support efficient, device invariant and simple tensor library for machine learning project that aims for both simplicity and performance. Supports CPU/GPU/Multi-GPU and distributed system.
  28. CXXNET – CXXNET is fast, concise, distributed deep learning framework based on MShadow. It is a lightweight and easy extensible C++/CUDA neural network toolkit with friendly Python/Matlab interface for training and prediction.
  29. Nengo – Nengo is a graphical and scripting based software package for simulating large-scale neural systems.
  30. Eblearn is a C++ machine learning library with a BSD license for energy-based learning, convolutional networks, vision/recognition applications, etc. EBLearn is primarily maintained by Pierre Sermanet at NYU.
  31. cudamat is a GPU-based matrix library for Python. Example code for training Neural Networks and Restricted Boltzmann Machines is included.
  32. Gnumpy is a Python module that interfaces in a way almost identical to numpy, but does its computations on your computer’s GPU. It runs on top of cudamat.
  33. The CUV Library (github link) is a C++ framework with python bindings for easy use of Nvidia CUDA functions on matrices. It contains an RBM implementation, as well as annealed importance sampling code and code to calculate the partition function exactly (from AIS lab at University of Bonn).
  34. 3-way factored RBM and mcRBM is python code calling CUDAMat to train models of natural images (from Marc’Aurelio Ranzato).
  35. Matlab code for training conditional RBMs/DBNs and factored conditional RBMs (from Graham Taylor).
  36. mPoT is python code using CUDAMat and gnumpy to train models of natural images (from Marc’Aurelio Ranzato).
  37. neuralnetworks is a java based gpu library for deep learning algorithms.
  38. ConvNet is a matlab based convolutional neural network toolbox.
  39. Elektronn is a deep learning toolkit that makes powerful neural networks accessible to scientists outside the machine learning community.
  40. OpenNN is an open source class library written in C++ programming language which implements neural networks, a main area of deep learning research.
  41. NeuralDesigner is an innovative deep learning tool for predictive analytics.
  42. Theano Generalized Hebbian Learning.
  43. Apache Singa is an open source deep learning library that provides a flexible architecture for scalable distributed training. It is extensible to run over a wide range of hardware, and has a focus on health-care applications.
  44. Lightnet is a lightweight, versatile and purely Matlab-based deep learning framework. The aim of the design is to provide an easy-to-understand, easy-to-use and efficient computational platform for deep learning research.


 

NodeRED BlockChain

Build your own block chain in 15 minutes on Node-RED using Node.js, JavaScript, Cloudant/CouchDB on a free IBM Cloud account… Note: To do the tutorial you need a free Bluemix (IBM PaaS Cloud) account. You can obtain one here and the raw file (JSON) for this NodeRED flow is here. Tutorial Objective In this exercise […]

via Node-RED Blockchain — romeokienzler

Installing Cylon.js for the Raspberry Pi


Repository | Issues

The Raspberry Pi is an inexpensive and popular ARM based single board computer with digital & PWM GPIO, and i2c interfaces built in.

The Raspberry Pi is a credit-card-sized single-board computer developed in the UK by the Raspberry Pi Foundation with the intention of promoting the teaching of basic computer science in schools.

For more info about the Raspberry Pi platform, click here.

How to Install

Installing Cylon.js for the Raspberry Pi is easy, but must be done on the Raspi itself, or on another Linux computer. Due to I2C device support, the module cannot be installed on OS X or Windows.

Install the module with:

$ npm install cylon cylon-raspi

How to Use

This small program causes an LED to blink.

var Cylon = require("cylon");

Cylon.robot({
  connections: {
    raspi: { adaptor: 'raspi' }      // talk to the Pi's GPIO through the raspi adaptor
  },

  devices: {
    led: { driver: 'led', pin: 11 }  // LED on physical pin 11 (GPIO 17)
  },

  work: function(my) {
    every((1).second(), my.led.toggle);  // toggle the LED once per second
  }
}).start();
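Save the program as, say, blink.js (the file name is arbitrary) and run it with node blink.js; Cylon connects to the GPIO through the raspi adaptor and runs the work loop until you stop it.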

How to Connect

Install the latest Raspbian OS

You can get it from here: http://www.raspberrypi.org/downloads/

Setting the Raspberry Pi keyboard

Having trouble with your Raspberry Pi keyboard layout? Use the following command:

sudo dpkg-reconfigure keyboard-configuration

Update your Raspbian and install Node.js

These commands need to be run after SSHing into the Raspi:

sudo apt-get update
sudo apt-get upgrade
wget http://nodejs.org/dist/v0.10.28/node-v0.10.28-linux-arm-pi.tar.gz
tar -xvzf node-v0.10.28-linux-arm-pi.tar.gz
node-v0.10.28-linux-arm-pi/bin/node --version

You should see the node version you just installed.

$ node --version
v0.10.28

Once you have installed Node.js, you need to add the following to your ~/.bash_profile file. Create this file if it does not already exist, and add this to it:

NODE_JS_HOME=/home/pi/node-v0.10.28-linux-arm-pi
PATH=$PATH:$NODE_JS_HOME/bin

This will set up the path for you every time you log in. Run the source ~/.bash_profile command to load it right now without having to log in again.

Thanks @joshmarinacci for the blog post at http://joshondesign.com/2013/10/23/noderpi, from which these modified instructions were taken.

Connecting to Raspberry Pi GPIO

This module only works on a real Raspberry Pi. Do not bother trying on any other kind of computer; it will not work. Also note you will need to connect actual circuits to the Raspberry Pi's GPIO pins.

In order to access the GPIO pins without using sudo, you will need to both add the pi user to the gpio group:

sudo usermod -G gpio pi

And also add the following udev rules file to /etc/udev/rules.d/91-gpio.rules:

SUBSYSTEM=="gpio", KERNEL=="gpiochip*", ACTION=="add", PROGRAM="/bin/sh -c 'chown root:gpio /sys/class/gpio/export /sys/class/gpio/unexport ; chmod 220 /sys/class/gpio/export /sys/class/gpio/unexport'"
SUBSYSTEM=="gpio", KERNEL=="gpio*", ACTION=="add", PROGRAM="/bin/sh -c 'chown root:gpio /sys%p/active_low /sys%p/direction /sys%p/edge /sys%p/value ; chmod 660 /sys%p/active_low /sys%p/direction /sys%p/edge /sys%p/value'"

Thanks to “MikeDK” for the above solution: https://www.raspberrypi.org/forums/viewtopic.php?p=198148#p198148

Enabling the Raspberry Pi i2c on Raspbian

You must add these two entries to your /etc/modules:

i2c-bcm2708
i2c-dev

You must also ensure that these entries are commented out in your /etc/modprobe.d/raspi-blacklist.conf:

#blacklist spi-bcm2708
#blacklist i2c-bcm2708

You will also need to update the /boot/config.txt file. Edit it and add the following text:

dtparam=i2c1=on
dtparam=i2c_arm=on

Finally, you need to give the pi user permission to access the i2c interface by running this command:

sudo usermod -G i2c pi

Now restart your Raspberry Pi.
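After the reboot, you can confirm that the bus is visible with i2cdetect -y 1 (from the i2c-tools package, installable with sudo apt-get install i2c-tools); connected devices show up as addresses in the printed grid.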

Enabling PWM output on GPIO pins.

You need to install and run pi-blaster on the Raspberry Pi; you can follow the installation instructions in the pi-blaster repo here:

https://github.com/sarfata/pi-blaster

Available PINS

The following object lists the available pins for all revisions of the Raspberry Pi. The key is the number of the physical pin header on the board; the value is the GPIO pin number assigned by the OS. For pins that changed between board revisions, the value contains the GPIO pin number for each revision (e.g. rev1, rev2, rev3).

You only need to be concerned with the key (the number of the physical pin header on the board); Cylon.js takes care of the board revision and GPIO pin numbers for you, so this full list is for reference only. For example, physical pin 11 maps to GPIO 17 on every revision, while physical pin 13 is GPIO 21 on rev1 boards and GPIO 27 on rev2 and rev3.

PINS = {
  3: {
    rev1: 0,
    rev2: 2,
    rev3: 2
  },
  5: {
    rev1: 1,
    rev2: 3,
    rev3: 3
  },
  7: 4,
  8: 14,
  10: 15,
  11: 17,
  12: 18,
  13: {
    rev1: 21,
    rev2: 27,
    rev3: 27
  },
  15: 22,
  16: 23,
  18: 24,
  19: 10,
  21: 9,
  22: 25,
  23: 11,
  24: 8,
  29: {
    rev3: 5
  },
  31: {
    rev3: 6
  },
  32: {
    rev3: 12
  },
  33: {
    rev3: 13
  },
  35: {
    rev3: 19
  },
  36: {
    rev3: 16
  },
  37: {
    rev3: 26
  },
  38: {
    rev3: 20
  },
  40: {
    rev3: 21
  }
};

The website http://pi.gadgetoid.com/pinout has a great visual representation of this information.

Drivers

All Cylon.js digital and PWM GPIO drivers, as well as its I2C drivers, should work with the Raspberry Pi.


IBM Watson Cloud Robot

 


 

Motivation

I work as a robotics teacher in Sydney, and I want to introduce my AI robot to my students in class next month. In addition, I'm joining the NASA Open Innovation Initiative (also known as the NASA Space Apps Challenge) with my AI robot, to measure the space environment: temperature, humidity, and pressure. So, I'm very excited!

Introduction

The IBM Watson Cloud Robot can recognize human faces, voices, and text. The robot clearly recognized a celebrity (Elon Musk) and knew who he was. It also recognized my voice and arbitrary text. (YouTube)

This instructable will cover the basic steps you need to follow to get started with open-source components such as the Watson nodes (Visual Recognition V3, Speech To Text, Text To Speech) for IBM Bluemix, Node-RED, and MQTT v3.1. MQTT (Message Queuing Telemetry Transport) is a machine-to-machine (M2M) / Internet of Things (IoT) connectivity protocol designed to be extremely lightweight and useful where battery power and network bandwidth are at a premium. It was invented in 1999 by Dr. Andy Stanford-Clark and Arlen Nipper and is now an OASIS standard.

– How to tune PID gains of Node-RED with MQTT on Raspberry Pi:

http://www.instructables.com/id/PID-Control-for-CPU-Temperature-of-Raspberry-Pi/

– How to use the Bluemix platform (Docs)
https://console.ng.bluemix.net/docs/

– Enclosed is my additional material (Pi-Scratch_Robot_GPIO.sb) for kids' education in the Download List

(Functions: driving motors & taking a picture on the Raspberry Pi)

Step 1: Table of Contents

Step 0: Introduction

Step 1: Table of Contents

Step 2: Bill of Materials

Step 3: Assembly (Wiring & Soldering)

Step 4: Programming NodeRED on Raspberry Pi2

Step 5: Setting up MQTT v3.1 on Raspberry Pi2

Step 6: Checking your NodeRED codes with MQTT on Raspberry Pi2

Step 7: Adding & Setting up PID node, Dashboard on Raspberry Pi2

Step 8: Configuring the PS3 EYE camera with microphone

Step 9: Configuring GPS Sensor

Step 10: Using a dashboard for the robot

Step 11: Tuning PID controller

Step 12: (Optional) Programming a Pi-Scratch Robot

Step 13: Download list

Step 14: List of references

Step 15: Version Note

Step 2: Bill of Materials

Step 3: Assembly (Wiring & Soldering)


Step 4: Programming NodeRED on Raspberry Pi2


How to start Node-RED and open its editor in a web browser:

(1) Enter the command shown below in a terminal window.

node-red-start

(2) You will find an IP address in the output, e.g. 'Once Node-RED has started, point a browser at http://169.254.170.40:1880' (it depends on your IP address).

(3) Open your web browser.

(4) Copy the IP address and paste it into the web browser.

(5) It will display the Node-RED visual editor in the web browser.

(6) You can start coding with the visual editor in the web browser.

(7) Try dragging & dropping any node from the left-hand side to the right-hand side. It's really easy to code. (You can conveniently use the visual editor offline as well as online.)

To import the prepared flows, download all files from the Download List, then:

(1) Click the menu button at the right-hand corner of the Node-RED editor, shown as number (1) in the 1st picture above.

(2) Click the Import button in the drop-down menu.

(3) Open the Clipboard, shown in the 1st picture above.

(4) Lastly, paste the given JSON-format text of '____ver0.1.txt' (Download List) into the Import nodes editor.

Step 5: Setting up MQTT v3.1 on Raspberry Pi2


There are two options: (1) using the Eclipse Paho broker at iot.eclipse.org, or (2) installing a Mosquitto server. You can use option (1) instead of option (2).

(1) Using “iot.eclipse.org”.

Click each MQTT node and type in:

iot.eclipse.org

(2) Setting up MQTT v3.1 on Raspberry Pi2

The Mosquitto message broker supports MQTT v3.1; it is easily installed on the Raspberry Pi, though somewhat less easy to configure. The link below steps through installing, configuring, and testing Mosquitto from the terminal window.

http://www.instructables.com/id/PID-Control-for-CPU-Temperature-of-Raspberry-Pi/
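If you want to sanity-check the broker outside Node-RED, a short Python test with the Eclipse Paho client works. This is a minimal sketch (install the client with pip install paho-mqtt; the topic name is just an example):

import paho.mqtt.client as mqtt

# Print every message that arrives on the subscribed topic.
def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("iot.eclipse.org", 1883, 60)  # or "localhost" for a local Mosquitto
client.subscribe("robot/test")
client.publish("robot/test", "hello robot")
client.loop_forever()  # blocks; press Ctrl-C to stop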

Step 6: Checking your NodeRED codes with MQTT on Raspberry Pi2


When you use the JSON from 'NodeRED_Text_files_ver0.1.txt' (Download List) in Node-RED, each node's data is set up automatically; I have already configured the data in each node.

(1) Click each node.

(2) Check that the information inside each node has been prefilled.

(3) Please don't change the preset data. (More advanced users can customize it.)

Step 7: Adding & Setting up PID node, Dashboard on Raspberry Pi2


Searching the Nodes

Node-RED comes with a core set of useful nodes, but there is a growing number of additional nodes available to install, from both the Node-RED project and the wider community. You can search for available nodes in the Node-RED library or in the npm repository.

  • For example, we are going to search for 'node-red-node-pidcontrol' on the npm website. Click here.
  • Then we are going to install the npm packages node-red-node-pidcontrol and node-red-dashboard on the Raspberry Pi.

To add additional nodes you must first install the npm tool, as it is not included in the default installation. The following commands install npm and then upgrade it to the latest 2.x version.

sudo apt-get update
sudo apt-get install npm
sudo npm install -g npm@2.x
hash -r
cd /home/pi/.node-red
  • The general form is 'npm install node-red-{node name}'.
  • Copy 'npm install node-red-node-pidcontrol' from the npm page and paste it into a terminal window.
  • Ex: node-red-node-watson, node-red-contrib-play-audio, node-red-dashboard, node-red-node-pidcontrol
npm install node-red-node-watson node-red-contrib-play-audio node-red-node-pidcontrol node-red-dashboard

You will need to restart Node-RED for it to pick up the new nodes.

node-red-stop
node-red-start

Close and reopen your web browser.

Step 8: Configuring the PS3 EYE camera with microphone


The Sony PS3 Eye USB camera can achieve up to 187 frames per second and can be found for under $8 on Amazon.com, which makes it quite a bargain for those wishing to experiment with CV projects. The PlayStation Eye camera for the PS3 is similar to a web camera but can also be used for computer vision and gesture recognition tasks. The PlayStation Eye has been supported by the Linux kernel since the late Linux 2.6 days, but a future update (Linux 3.20 or later, given that the 3.19 merge window is closed) will support higher modes.

(1) Install a USB driver on Raspberry Pi.

 sudo apt-get install fswebcam

(2) Take a picture, then check the 'visionImage.jpg' file in /home/pi (see the Python sketch at the end of this step).

(3) Don't forget to put in the Bluemix service credentials for the Watson services: Visual Recognition, Speech to Text, and Text to Speech. (How to use the IBM Bluemix platform: https://console.ng.bluemix.net/docs/)

(4) Make the image file (jpg) server start on every boot.

cd /etc/xdg/autostart/
sudo nano imageFileServer.desktop

Type the description below, or put the 'imageFileServer.desktop' file into the /etc/xdg/autostart/ folder.

[Desktop Entry]
Type=Application
Name=imageFileServer
Comment=Start an image file server
NoDisplay=false
Exec=cd /home/pi
Exec=python -m SimpleHTTPServer 7000

Check visionImage.jpg in the web browser. (Note: SimpleHTTPServer is the Python 2 module; on Python 3 the equivalent is python3 -m http.server 7000.)

http://169.254.62.80:7000/visionImage.jpg
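To script step (2), a minimal sketch like this works; it shells out to fswebcam and saves the snapshot where the image file server expects it (the resolution flag is just a reasonable default):

import subprocess

# Take one 640x480 snapshot without the fswebcam banner.
subprocess.check_call(
    ["fswebcam", "-r", "640x480", "--no-banner", "/home/pi/visionImage.jpg"])
print("Saved /home/pi/visionImage.jpg")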

Step 9: Configuring GPS Sensor


How to set the serial configuration for the GPS module:

https://learn.adafruit.com/adafruit-ultimate-gps-on-the-raspberry-pi/using-uart-instead-of-usb

– Reference:

Adafruit Ultimate GPS & Download PDF file.

Tip: test the GPS sensor outside, because it does not work indoors; you would just see an error signal. That is why I made an extra node for indoor GPS testing.

(1) Edit /boot/cmdline.txt

Next, enter the following command from the command line:

sudo nano /boot/cmdline.txt

And change:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait

to:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait

(e.g., remove console=ttyAMA0,115200 and, if present, kgdboc=ttyAMA0,115200)

Note you might see console=serial0,115200 or console=ttyS0,115200 and should remove those parts of the line if present.

(2) Edit /etc/inittab

(Raspbian Wheezy only)

From the command prompt enter the following command:

sudo nano /etc/inittab

And change:

#Spawn a getty on Raspberry Pi serial line

T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

to:

#Spawn a getty on Raspberry Pi serial line

#T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

That is, add a # to the beginning of the line!

(3) Raspbian Jessie only

For the Raspberry Pi 1 or 2 (but NOT the 3!), run the following two commands to stop and disable the tty service:

sudo systemctl stop serial-getty@ttyAMA0.service

sudo systemctl disable serial-getty@ttyAMA0.service

However for the Raspberry Pi 3 you need to use the /dev/ttyS0 port since that is what is normally connected to the GPIO serial port pins. Use these two commands instead:

sudo systemctl stop serial-getty@ttyS0.service

sudo systemctl disable serial-getty@ttyS0.service

(4) Raspberry Pi 3 Only

For the Raspberry Pi 3, you need to explicitly enable the serial port on the GPIO pins. The reason for this is a change in the Pi 3 to use the hardware serial port for Bluetooth and instead use a slightly different software serial port for the GPIO pins. A side effect of this change is that the serial port will actually change speed as the Pi CPU clock throttles up and down; this will unfortunately cause problems for most serial devices, like GPS receivers!

Luckily there's an easy fix, detailed in this excellent blog post, to force the Pi CPU into a fixed frequency, which prevents speed changes on the serial port. The Pi might not perform as well, but it will have a stable serial port speed.

To make this change edit the /boot/config.txt file by running:

sudo nano /boot/config.txt

At the very bottom of the file add this on a new line:

enable_uart=1

Save the file (press Ctrl-O, then Enter) and exit (press Ctrl-X). You're all set!

(5) Reboot your Pi

sudo reboot

(6) Restart GPSD with HW UART

Restart gpsd and redirect it to use the HW UART instead of the USB port we pointed it to earlier, by simply entering the following two commands.

For the Raspberry Pi 1 or 2 (but NOT the 3!) run these commands:

sudo killall gpsd
sudo gpsd /dev/ttyAMA0 -F /var/run/gpsd.sock

And for the Raspberry Pi 3 run these commands to use the different serial port:

sudo killall gpsd
sudo gpsd /dev/ttyS0 -F /var/run/gpsd.sock

As with the USB example, you can test the output with:

cgps -s
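Once gpsd is running, you can also read fixes from Python through the gpsd client bindings (the python-gps package); this is a sketch of the usual polling pattern, assuming gpsd listens on its default port 2947:

import gps

# Connect to the local gpsd daemon and stream position reports.
session = gps.gps("localhost", "2947")
session.stream(gps.WATCH_ENABLE | gps.WATCH_NEWSTYLE)

while True:
    report = session.next()
    if report['class'] == 'TPV':  # Time-Position-Velocity report
        lat = getattr(report, 'lat', None)
        lon = getattr(report, 'lon', None)
        if lat is not None and lon is not None:
            print("lat=%f lon=%f" % (lat, lon))
            break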

Step 10: Using a dashboard for the robot


The dashboard is a set of visual UI tools such as gauges and charts. There is a basic tutorial on building a Node-RED dashboard with 'node-red-dashboard':

http://developers.sensetecnic.com/article/a-node-red-dashboard-using-node-red-contrib-ui/

Step 11: Tuning PID controller


http://www.instructables.com/id/PID-Control-for-CPU-Temperature-of-Raspberry-Pi/

My instructable above is really helpful for tuning the PID gains for your system.

Adjusting the PID gains is a big job, so use my Node-RED source from the Download List.
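To see what a PID node is computing while you tune it, here is a minimal textbook-style PID loop in Python (an illustration of the general algorithm, not the node-red-node-pidcontrol source; the gains below are placeholders):

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        # Error between the desired setpoint and the measured value.
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Control output: proportional + integral + derivative terms.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=50.0)  # tune these gains
print(pid.update(measurement=45.0, dt=1.0))       # one control step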

Step 12: (Optional) Programming a Pi-Scratch Robot


This part is optional, for kids' educational purposes; I developed it for my students in Sydney.

Let’s have fun with kids!!

Step 15: Version Note

————————————————————————–

Version rules

  • VerX.Y
    • X: Changed
    • Y: Added
    • (Ex 01) file__Ver0.2 : added something
    • (Ex 02) file__Ver1.0 : changed something

————————————————————————–

  • 06_Voice_Part_Ver0.2.txt : added a Watson Conversation (17 Jan 2017)