Bringing everything together in ROS

This is a quick description of the steps the ROS system goes through during the painting process. The files will be uploaded to the Documents tab once they are complete. Here we go:

The orientation data from the SLAM system is transferred to the Robot Operating System (http://www.ros.org/). On start-up the orientation is measured once and from that point on serves as the origin. After box 1 has been painted we move the platform. The Kinect continuously measures its orientation and passes the data to ROS, which translates the x, y translation and rotation into the new position of the robot base.
Relative to this origin it then calculates the position of the second box. We then ask it to check whether, from its current position, it can reach the corners of that box. If a corner cannot be reached, it takes the position of the corner relative to the base and forms a vector with the distance it still needs to cover. This is then translated into a rotation plus X and Y components, which we can use to reposition the robot. Once it has confirmed it can reach every corner, it moves to the first line and starts painting. A rough sketch of this repositioning step is given below.
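As a very rough sketch (not the actual ROS code), the repositioning step can be thought of as follows; the reach radius and the reach check are assumptions for illustration:

```python
import math

def reposition_offset(corners, base_pose, reach_radius=0.8):
    """Return the (dx, dy, dtheta) the platform still needs to cover so that
    every corner of the target box falls within the arm's working radius.

    corners      -- list of (x, y) box corners in the origin frame
    base_pose    -- (x, y, theta) of the robot base in the origin frame
    reach_radius -- assumed maximum reach of the arm in metres (placeholder)
    """
    bx, by, btheta = base_pose
    for cx, cy in corners:
        dx, dy = cx - bx, cy - by
        dist = math.hypot(dx, dy)
        if dist > reach_radius:
            # Vector from the base to the unreachable corner, shortened so the
            # corner ends up just inside the working radius after the move.
            overshoot = dist - reach_radius
            move_x = dx / dist * overshoot
            move_y = dy / dist * overshoot
            # Rotation needed to face that corner from the current heading.
            dtheta = math.atan2(dy, dx) - btheta
            return move_x, move_y, dtheta
    return 0.0, 0.0, 0.0  # every corner is reachable, no move needed
```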
The Arduino driver has been integrated into the ROS system to allow a synchronized call that moves the arm and starts painting.
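A minimal sketch of what such a synchronized call could look like from Python, assuming the Arduino spray driver is exposed through rosserial on a hypothetical /spray/enable topic; the topic names, message types and timing are placeholders rather than our final interface:

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Bool
from geometry_msgs.msg import PoseStamped

rospy.init_node('paint_stroke')
spray_pub = rospy.Publisher('/spray/enable', Bool, queue_size=1)
target_pub = rospy.Publisher('/arm/target_pose', PoseStamped, queue_size=1)

def paint_stroke(start_pose, end_pose):
    """Move to the start of a line, open the spray valve, traverse, close it."""
    target_pub.publish(start_pose)
    rospy.sleep(2.0)                  # wait for the arm to settle (placeholder)
    spray_pub.publish(Bool(True))     # start painting
    target_pub.publish(end_pose)
    rospy.sleep(2.0)
    spray_pub.publish(Bool(False))    # stop painting at the end of the line
```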

Building the image

In making the image we faced an interesting challenge. For the robot to understand the image, it has to be given a vector file (SVG). Vector files are optimized for accurate, smooth lines yet are limited in their editing capabilities. First we attempted to divide the image with an automated cropping script in Python. This would load the image multiple times and crop it according to the predetermined grid. However, because of the way SVG is built up, the file still keeps all the lines and only changes the viewBox. As a result all the lines would be parsed by the robot, resulting in an out-of-reach error (for this see the post "Can we paint it?"). Since no automated system currently exists for splitting the lines and deleting those outside a predetermined box, we decided to do this by hand.
The SVG files were afterwards edited to share the same viewBox, so that ROS reads the same size for each one and can build a fitting grid. The sketch below shows why the automated crop falls short.
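For illustration, here is a minimal version of the automated crop we first tried, using only the Python standard library; the file name and grid values are examples. Because only the viewBox attribute changes, every path element is still present in each output file, which is exactly why the robot ended up parsing all the lines:

```python
import xml.etree.ElementTree as ET

GRID = 4            # 4x4 grid of boxes
SIZE = 2000.0       # drawing is 2x2 m, here in millimetres

def crop_to_cell(svg_in, row, col):
    tree = ET.parse(svg_in)
    root = tree.getroot()
    cell = SIZE / GRID
    # Only the visible window changes; the path data below is untouched.
    root.set('viewBox', '%f %f %f %f' % (col * cell, row * cell, cell, cell))
    out = 'box_%d_%d.svg' % (row, col)
    tree.write(out)
    return out

for r in range(GRID):
    for c in range(GRID):
        crop_to_cell('mural.svg', r, c)   # 'mural.svg' is an example file name
```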

Can we paint it?

Another problem we face is telling the robot which lines it can paint and which it cannot. Knowing its own position is all fine and handy, but the reverse vector calculations this requires are slow and messy.

Traditionally every vector has a start point, a function and a length. For the robot to know which lines it can fully paint, it needs to take multiple points on each line and check whether it can reach them. These reverse Cartesian calculations take a lot of processing power and drastically slow down the system. A sketch of this naive check follows.
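As an illustration of why this is slow, a naive per-line check might look like the sketch below, where can_reach is a placeholder for the expensive inverse-kinematics reachability test:

```python
def line_is_paintable(start, end, can_reach, samples=20):
    """Naive check: sample points along the line and run the (expensive)
    inverse-kinematics reachability test on every one of them."""
    x0, y0 = start
    x1, y1 = end
    for i in range(samples + 1):
        t = i / float(samples)
        x = x0 + t * (x1 - x0)
        y = y0 + t * (y1 - y0)
        if not can_reach(x, y):      # one IK solve per sample point
            return False
    return True
```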

Therefore we decided to split the image up into a grid. Instead of calculating whether it can draw each line, the robot now checks whether it can reach the outer corners of the respective box in the grid. The upside is that it can determine which lines it can paint much faster, but the approach also comes with some disadvantages.

First of all, the system has no way of knowing how to stitch different lines together. This means that when two boxes are within its reach, it will not paint a continuous curve of lines passing from one box into the other. Also, since we are using boxes, a large part of a box may be empty white space; yet if the robot cannot reach a corner, it will not start painting at all, because the box counts as "outside of its reach".

Making the boxes smaller reduces the second problem but makes the first one worse. We were hoping to test for an optimum within the system, but due to time pressure we had to settle on a fixed choice: a 2×2 m image with a grid of 4×4 = 16 boxes. The grid check then looks roughly like the sketch below.
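A rough sketch of the resulting grid check, again with can_reach standing in for the reachability test; the robot now solves four corner checks per box instead of many sample points per line:

```python
IMAGE_SIZE = 2.0           # metres
GRID = 4
CELL = IMAGE_SIZE / GRID   # 0.5 m per box

def paintable_boxes(can_reach):
    """Return the (row, col) indices of every box whose four corners
    are all reachable from the robot's current position."""
    boxes = []
    for row in range(GRID):
        for col in range(GRID):
            corners = [(col * CELL,       row * CELL),
                       ((col + 1) * CELL, row * CELL),
                       (col * CELL,       (row + 1) * CELL),
                       ((col + 1) * CELL, (row + 1) * CELL)]
            if all(can_reach(x, y) for x, y in corners):
                boxes.append((row, col))   # every corner reachable: paint it
    return boxes
```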

SLAM system update

We have had some accuracy problems with the SLAM system. We adopted a Kinect 2, which has a higher resolution than the Kinect 1, and combined the RGB information from the camera with the time-of-flight information from the infrared camera and sensors integrated into the Kinect. This reduced the stuttering of the camera and increased accuracy considerably.

The system, however, still does not perform as expected. As it turns out, it is quite sensitive to motion in the background. When many people walk through its field of view, it thinks it is moving relative to the scene and sends incorrect data to the robot. We plan to solve this by pointing the camera lower to the ground and building a low barrier covered with a checkerboard pattern around the painting area. This will give the camera a large number of fixed, high-contrast points to pick up, so that its position estimate remains more stable.

Position tracking: how we will tackle it

After a lot of research we narrowed our positioning technique down to a technology called SLAM (simultaneous localization and mapping). The main criteria for choosing were cost and accuracy. Cheap systems that use magnetic fields or triangulation over sound or electromagnetic waves are still relatively inaccurate (10-20 cm on a 10×10 m field). The SLAM method uses only a stereo camera yet can achieve an accuracy of approximately 3 cm regardless of the size of the area. It does this by tracking high-contrast points in the image: by following their translation, the camera knows how it is moving through space relative to a large number of fixed points. The sketch below illustrates the basic idea.
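To illustrate the basic idea (this is not the SLAM package we actually run), a minimal OpenCV sketch that detects high-contrast corners, follows them between frames and estimates the camera motion from their displacement could look like this:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)      # any camera stream works for this demo
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Follow the previously detected high-contrast points into the new frame.
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good_new = pts[status.flatten() == 1]
    good_old = prev_pts[status.flatten() == 1]
    if len(good_new) >= 3:
        # A 2D rotation + translation that best explains how the points shifted
        # approximates the camera's own motion between the two frames.
        M, _ = cv2.estimateAffinePartial2D(good_old, good_new)
        if M is not None:
            dx, dy = M[0, 2], M[1, 2]
            theta = np.arctan2(M[1, 0], M[0, 0])
            print('moved dx=%.1f px, dy=%.1f px, rotated %.3f rad' % (dx, dy, theta))
    prev_gray = gray
    prev_pts = good_new.reshape(-1, 1, 2)
    if len(prev_pts) < 50:       # re-detect when too many points are lost
        prev_pts = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                           qualityLevel=0.01, minDistance=10)
```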

Currently we are working on running this open-source technology on a Kinect camera and translating the coordinates we get from the software to the system driving the robotic arm. We hope that, by implementing this correctly, we can know the position and orientation of the robot's base at all times with an error margin below 10 cm. A sketch of how the tracked pose could be handed to the arm driver is shown below.
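One way the tracked pose could be handed to the rest of the ROS system is as a tf transform; the frame names and the pose topic below are assumptions for illustration, not our final interface:

```python
#!/usr/bin/env python
import rospy
import tf
from geometry_msgs.msg import PoseStamped

rospy.init_node('slam_to_tf')
br = tf.TransformBroadcaster()

def on_pose(msg):
    p, q = msg.pose.position, msg.pose.orientation
    # Everything downstream (box positions, reachability checks) can then be
    # expressed relative to the 'map' origin recorded at start-up.
    br.sendTransform((p.x, p.y, p.z), (q.x, q.y, q.z, q.w),
                     rospy.Time.now(), 'robot_base', 'map')

rospy.Subscriber('/slam/pose', PoseStamped, on_pose)
rospy.spin()
```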

For some (quite advanced) demos, check out these videos:
car driving while tracking its position
mapping your garden