Posts

Working Demo of Maze Solver

I used OpenCV to stream the webcam to the Pi and fed the image data from the stream into my TensorFlow testing script as input. I compared the input image data against the training data produced by my training script, which I ran several times on my local machine (PC) with different image sets. Below is a short video showing a demo of my Maze Solver:
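The stream-and-predict loop can be sketched roughly like this. The preprocessing (grayscale, 28x28, flattened) and the `predict_fn` hook are my illustration of the idea, not the exact script; `cv2.VideoCapture` is the OpenCV call used to read webcam frames:

```python
import numpy as np

IMG_SIZE = 28  # assumed training image size


def preprocess(frame):
    """Turn a color webcam frame into the flat grayscale vector a model expects."""
    gray = frame.mean(axis=2)  # naive grayscale from the 3 color channels
    h, w = gray.shape
    # crude nearest-neighbour downsample to IMG_SIZE x IMG_SIZE
    small = gray[::max(1, h // IMG_SIZE), ::max(1, w // IMG_SIZE)]
    small = small[:IMG_SIZE, :IMG_SIZE]
    return (small / 255.0).reshape(1, -1)


def stream_and_predict(predict_fn):
    """Feed each webcam frame to the trained model and print its prediction."""
    import cv2  # only needed on the Pi, where the webcam is attached
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            turn = predict_fn(preprocess(frame))  # 0=straight, 1=left, 2=right
            print("predicted turn:", turn)
    finally:
        cap.release()
```

The preprocessing is deliberately numpy-only so the same function can be tested off the robot.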

GoPiGo moving based on tensorflow prediction using image set

I have installed TensorFlow on my Pi and successfully ran a script that predicts the turns of my maze. I gave it a data set of images, and my robot makes turns based on the predictions: straight (0), left (1), and right (2). Some parts of my TensorFlow script: Next steps: using the pictures as soon as they are taken by my webcam.
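The glue between the prediction and the robot is a small mapping from the three labels to GoPiGo moves. A minimal sketch, assuming the Dexter `gopigo` Python module (`fwd`, `left`, `right`, `stop`) is installed on the Pi; the label names are my own shorthand:

```python
# Class labels used by the prediction script: 0=straight, 1=left, 2=right.
LABELS = {0: "straight", 1: "left", 2: "right"}


def action_for(prediction):
    """Map a predicted class index to a named action, stopping on anything odd."""
    return LABELS.get(prediction, "stop")


def drive(prediction):
    """Execute the predicted turn on the robot (runs only on the Pi)."""
    import gopigo  # Dexter Industries GoPiGo library
    moves = {"straight": gopigo.fwd, "left": gopigo.left, "right": gopigo.right}
    moves.get(action_for(prediction), gopigo.stop)()
```

Keeping `action_for` as a pure function makes it easy to check the mapping without the robot attached.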

My GoPiGo solving maze using remote control

I have written a Python script for my Pi that uses the webcam to take a picture every second, for however many seconds I give it, to grow my database. I also wrote a Python script for my GoPiGo to go forward, backward, left, and right based on my commands, and I lowered the speed so the Pi captures sharp pictures while moving. Here is a short video demonstrating all the tasks mentioned above: I installed TensorFlow on my Pi and am working on a Python script that uses the TensorFlow results as input for my GoPiGo remote-control script.
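The remote-control script can be sketched like this. The key bindings are my own choice for illustration; `set_speed`, `fwd`, `bwd`, `left`, `right`, and `stop` are functions from the Dexter `gopigo` module:

```python
# Key-to-action mapping for driving the robot from the keyboard.
COMMANDS = {"w": "fwd", "s": "bwd", "a": "left", "d": "right", "x": "stop"}


def command_for(key):
    """Resolve a pressed key to an action name; unknown keys mean stop."""
    return COMMANDS.get(key.lower(), "stop")


def remote_control():
    """Drive the GoPiGo interactively (runs only on the Pi)."""
    import gopigo
    gopigo.set_speed(80)  # lowered speed so pictures taken while moving stay sharp
    moves = {"fwd": gopigo.fwd, "bwd": gopigo.bwd, "left": gopigo.left,
             "right": gopigo.right, "stop": gopigo.stop}
    while True:
        key = input("command (w/a/s/d, x to stop, q to quit): ")
        if key == "q":
            gopigo.stop()
            break
        moves[command_for(key)]()
```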

Next Steps

Take more pictures for my data set using the Pi camera. Write a small Python script for the Pi to take a picture every half second and save it to a specific folder while I drive the robot manually. Install TensorFlow, download my data set onto the Pi, and run my Python script on it to see the results. Add robot directions to the Pi based on what the program predicts.
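The timed-capture step can be sketched as below. The folder layout and filename scheme are my illustration, not taken from the actual script; the capture itself uses OpenCV's `VideoCapture` and `imwrite`:

```python
import os
import time


def frame_path(folder, index):
    """Filename scheme for the saved training pictures (illustrative convention)."""
    return os.path.join(folder, "img_%05d.jpg" % index)


def capture_every(folder, interval=0.5, count=100):
    """Grab a webcam frame every `interval` seconds and save it into `folder`."""
    import cv2  # only needed on the Pi with the webcam attached
    os.makedirs(folder, exist_ok=True)
    cap = cv2.VideoCapture(0)
    try:
        for i in range(count):
            ok, frame = cap.read()
            if ok:
                cv2.imwrite(frame_path(folder, i), frame)
            time.sleep(interval)
    finally:
        cap.release()
```

Saving into one folder per class (left, right, straight) keeps labelling trivial later.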

Pi setup

I imaged my SD card with the Raspbian for Robots OS, following this tutorial: https://www.dexterindustries.com/howto/install-raspbian-for-robots-image-on-an-sd-card/ I then put the SD card in the Raspberry Pi and installed VNC Viewer on my laptop, downloaded from: https://www.realvnc.com/en/connect/download/viewer/ I powered up my robot and connected my laptop to the robot's network. Then I used VNC Viewer to connect with IP 10.10.10.10:1, boot up my Pi, and log into it with username: pi password: robots1234 This screen should come up after you log in: I followed the tutorial in the PDF below to configure my Pi: https://drive.google.com/file/d/0B1sUsr9DiI5Da0Q5cWlmN2pKZEk/view?usp=sharing

Tensorflow (Image Recognition)

I developed a program that trains my robot to recognize the turns, which helps it solve the maze. I built a model maze and took several images of its left, right, and straight directions (model maze #1 and model maze #2). I then used those images as three data sets and took one picture from each set (a left image, a right image, and a straight image) to test the code. I used the tutorial at the link below to guide me through building my model in TensorFlow: https://www.datacamp.com/community/tutorials/tensorflow-tutorial#gs._ktlR3U
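The training setup from that tutorial, adapted to my three classes, looks roughly like this. The layer sizes and step count are assumptions for illustration; the APIs (`tf.placeholder`, `tf.layers.dense`, `tf.train.AdamOptimizer`) are TensorFlow 1.x graph-style calls, as used in the tutorial:

```python
# Folder names double as class labels: index 0, 1, 2.
LABELS = ["straight", "left", "right"]


def label_index(folder_name):
    """Map a class folder name to its integer label."""
    return LABELS.index(folder_name)


def build_and_train(x_train, y_train):
    """Train a small fully-connected classifier on flattened 28x28 images."""
    import tensorflow as tf  # TensorFlow 1.x, as in the DataCamp tutorial
    x = tf.placeholder(tf.float32, [None, 28 * 28])
    y = tf.placeholder(tf.int32, [None])
    hidden = tf.layers.dense(x, 128, activation=tf.nn.relu)
    logits = tf.layers.dense(hidden, 3)  # one logit per turn class
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
    train_op = tf.train.AdamOptimizer(0.001).minimize(loss)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(200):
            sess.run(train_op, {x: x_train, y: y_train})
```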

Maze Solver

Project plan: I would like to design and train a robot called Maze Solver that can navigate itself through any maze built with black construction paper. Software: I am planning on using TensorFlow to train my robot to recognize the turns. I have created my own database with around 500 pictures of left, right, and straight turns. I am using Python as my main programming language, and I am also planning on using a webcam to stream and take pictures. I have already built the robot with the GoPiGo starter kit, a Raspberry Pi, and a Pi Camera ( click here ). I have also connected the robot to my PC and tinkered with some of the example Python code files. Next steps: program my robot to navigate through the maze with a live feed, and start training it to recognize turns using sample pictures. Future work: In the future I would like my robot to water plants after it recognizes them.