A final recap

It has been a while, but better late than never: here is our last post on the project, with a recap of what worked and what didn’t. First off, we want to say we are happy with the final result. We want to thank everyone who came to the science fair, talked to us, and showed interest and love for the project. We hope that others will find this blog and improve on our findings. Now, without further ado, the review:

The location tracking system.
As discussed before, we used the RGB information of a Kinect camera to track points and triangulate the position. Later we switched to the Kinect2, which has a higher resolution and allowed us to use depth information via IR dots (check out this link to understand how it works). However, in the showroom during the science fair, the IR light coming through the glass ceiling interfered with the depth readings of the Kinect, so we wanted to switch back to RGB information. Unfortunately, the system bugged out; we tried to get it back up, but it would just not initiate. We hope that in the future a system without this limitation (or without bugs) can be created, to ensure it works under different lighting conditions.
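For a feel of what the depth data gives you: with the standard pinhole camera model, every depth pixel can be turned into a 3D point, which is what makes position estimation so much more direct than with RGB alone. A minimal sketch, with placeholder intrinsics rather than our actual calibration:

```python
# Minimal sketch: deprojecting a Kinect depth pixel into a 3D point with
# the pinhole model. fx, fy, cx, cy are placeholder intrinsics, NOT the
# calibrated values of our camera.
def depth_pixel_to_point(u, v, depth_m, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Return the (x, y, z) camera-frame point seen at pixel (u, v)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: a tracked point at pixel (300, 200), 1.5 m away
print(depth_pixel_to_point(300, 200, 1.5))
```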

The platform and displacement.
To recap on the changed wheels: they worked great! Of course the platform was not the prettiest, but it did the job properly. We also added a marker on the corner, which allowed us to accurately measure the displacement and compare it to the tracking-system data. The center of gravity was nice and low, and the locked wheels created enough friction to keep everything still. A future platform would ideally have a motorized wheel system; that way the displacement measured by the camera can be used as feedback, and you could link it to an external system to make it more user-friendly.

Splitting SVG files.
This worked very well! The lines were accurate and easy to parse into the ROS system. Also, once the robot received the accurate displacement, the lines continued quite seamlessly.
For future work, we would recommend writing a program that splits SVG lines by cutting them at their points of intersection with a grid. For an even more advanced system, it would be awesome if the robot could recognize which squares were within its reach and then connect the lines again to make them more continuous.
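To make that recommendation concrete, here is a minimal sketch of the clipping step such a splitter would need: Liang-Barsky clipping of one straight segment against one grid cell. Curved SVG paths would first have to be flattened into short segments, and this is not code we actually ran:

```python
def clip_segment_to_box(p0, p1, xmin, ymin, xmax, ymax):
    """Liang-Barsky clipping of the segment p0 -> p1 against an
    axis-aligned box. Returns the clipped segment, or None if the
    segment misses the box entirely."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    # One (p, q) pair per box edge: left, right, bottom, top.
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return None              # parallel to this edge and outside
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)          # entering the box
            else:
                t1 = min(t1, t)          # leaving the box
            if t0 > t1:
                return None
    return ((x0 + t0 * dx, y0 + t0 * dy),
            (x0 + t1 * dx, y0 + t1 * dy))

# Example: a segment crossing the unit cell gets cut at its borders
print(clip_segment_to_box((-1.0, 0.5), (3.0, 0.5), 0.0, 0.0, 1.0, 1.0))
```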

Actuation and spraying.
The actuation system could have been improved by using a faster stepper motor. The full revolution required to press the can in took about 0.2 seconds. This, in combination with a spraying mask, would have allowed the lines to look smoother and more precise. Taking the start and stop times into account within the robotic-arm movement would also be a nice integration, e.g. by making the arm wait about half the rise time before moving and initiating the stop command just before hitting the end of the line. Mechanically speaking, two motors rotating at the same time would prevent the can from getting stuck on the rails, which did not happen often but was annoying when it did.
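The timing compensation we have in mind is easy to express. A minimal sketch; the 0.2 s rise time is our measurement, the rest is an assumption about how the controller would schedule things:

```python
RISE_TIME = 0.2  # measured: seconds for a full revolution to press the can in

def spray_schedule(line_duration):
    """Given how long the arm takes to trace a line, return when the arm
    should start moving and when the stop command should be sent, both
    measured from the moment the spray command is issued."""
    start_delay = RISE_TIME / 2                             # wait ~half the rise time
    stop_at = start_delay + line_duration - RISE_TIME / 2   # stop just before the end
    return start_delay, max(start_delay, stop_at)

# Example: a line that takes 1.0 s to trace
print(spray_schedule(1.0))  # -> (0.1, 1.0): arm starts at 0.1 s, stop sent at 1.0 s
```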

So that was most of it. Some pictures of the science fair will follow shortly, and once again we want to thank everyone who helped think this project through and see it through!


Bringing everything together in ROS

This is a quick description of the steps the ROS system goes through during the painting process. The files will be uploaded to the Documents tab later, when complete. Here we go:

The orientation data from the SLAM system is transferred to the Robot Operating System (http://www.ros.org/). At the first start the orientation is measured, and from that point on it is used as the origin. Then, after the painting of box 1 is completed, we move the platform. The Kinect continuously measures its orientation and passes the data to ROS, which translates the x, y translation and rotation into the new position of the robot base.
Relative to its first orientation (the origin), it then calculates the position of the second box. We then ask it to calculate whether, from its current position, it can reach the corners of the box. If a corner cannot be reached, it takes the position of the corner relative to the position of the base, forming a vector with the distance it still needs to cover. This is then translated into a rotation, X and Y component, which we can use to reposition the robot. When it has confirmed it can reach every corner, it moves to the first line and starts painting.
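Stripped of the ROS plumbing, the corner check and the repositioning vector look roughly like this. The reach radius is a placeholder, and the rotation component is left out for brevity:

```python
import math

REACH = 0.85  # placeholder maximum arm radius in metres, not our real value

def unreachable_corners(base_xy, corners):
    """Return the corners of the current box that lie outside the arm's reach."""
    bx, by = base_xy
    return [(x, y) for x, y in corners
            if math.hypot(x - bx, y - by) > REACH]

def reposition_vector(base_xy, corner):
    """X, Y translation that brings an unreachable corner just inside reach."""
    bx, by = base_xy
    dx, dy = corner[0] - bx, corner[1] - by
    dist = math.hypot(dx, dy)
    overshoot = dist - REACH                 # distance still to cover
    return (overshoot * dx / dist, overshoot * dy / dist)

# Example: base at the origin, checking the four corners of one 0.5 m box
corners = [(0.5, 0.5), (1.0, 0.5), (0.5, 1.0), (1.0, 1.0)]
for c in unreachable_corners((0.0, 0.0), corners):
    print(c, "->", reposition_vector((0.0, 0.0), c))
```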
The Arduino driver has been integrated into the ROS system to allow for a synchronized call that moves the arm and starts the painting at the same time.

Building the image

Making the image presented an interesting challenge. For the robot to understand the image, it has to be given a vector file (SVG). Vector files are optimized for accurate, smooth lines, yet they are limited in their editing capabilities. First we attempted to divide the image with an automated cropping system in Python, which would load the image multiple times and crop it according to the predetermined grid. However, because of the way SVG is built up, the file still contains all the lines and only the viewBox changes. As a result, all the lines would be parsed to the robot, resulting in an out-of-reach error (for this, see the post “Can we paint it?”). Since no automated system currently exists for splitting the lines and deleting those outside a predetermined box, we decided to do this by hand.
The SVG files were afterwards edited to share the same viewBox, so that the size read by ROS is equal for every file and it can make a fitting grid.
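For illustration, this is the kind of automated filter we were missing. The sketch below handles only bare <line> elements and only drops lines that fall entirely outside the box; real drawings consist mostly of <path> elements with curves, and lines crossing the border would still need to be split, which is exactly why we ended up editing by hand:

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def keep_lines_in_box(svg_in, svg_out, xmin, ymin, xmax, ymax):
    """Remove <line> elements lying entirely outside the given box.
    Note that changing the viewBox alone would NOT do this: every
    line stays in the file and still gets parsed downstream."""
    tree = ET.parse(svg_in)
    for parent in tree.getroot().iter():
        for line in list(parent.findall(f"{{{SVG_NS}}}line")):
            xs = (float(line.get("x1")), float(line.get("x2")))
            ys = (float(line.get("y1")), float(line.get("y2")))
            if (max(xs) < xmin or min(xs) > xmax or
                    max(ys) < ymin or min(ys) > ymax):
                parent.remove(line)
    tree.write(svg_out)
```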

Can we paint it?

Another problem we face is telling the robot which lines it can paint and which it cannot. Knowing its position is all fine and handy, but the reverse vector calculations needed for this are slow and messy.

Traditionally, every vector has a start point, a function and a length. For the robot to know which lines it can fully paint, it needs to take multiple points on each line and check whether it can reach them. These reverse Cartesian calculations (inverse kinematics) take a lot of processing power and drastically slow down the system.

Therefore we decided to split the image up into a grid. Now, instead of calculating whether it can draw a line, the robot checks whether it can reach the outer corners of the respective box within the grid. This has the upside that it can determine much faster which lines it can paint, yet it also comes with some disadvantages.

First of all, the system has no way of knowing how to stitch different lines together. This means that when two boxes are within its reach, it will not make one continuous curve of lines passing from one box to the other. Also, since we are using boxes, there might be a whole lot of white within the area, but if the robot cannot reach a corner it will not start painting, since the box is “outside of its reach”.

Making the boxes smaller reduces the second problem yet increases the first. We were hoping to test for an optimum within the system, but due to time pressure we were forced to make a fixed choice: a 2×2 m image with a grid of 4×4 = 16 boxes.
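For reference, the grid logic itself is tiny: checking four corners per box replaces sampling many points per line. A minimal sketch with the numbers we chose:

```python
GRID = 4           # 4 x 4 = 16 boxes
IMAGE_SIZE = 2.0   # metres, so each box is 0.5 m

def box_corners(i, j, size=IMAGE_SIZE / GRID):
    """The four corners of grid box (i, j) in image coordinates (metres);
    these are the only points the robot has to test for reachability."""
    x0, y0 = i * size, j * size
    return [(x0, y0), (x0 + size, y0), (x0, y0 + size), (x0 + size, y0 + size)]

# Example: the corners to check before painting box (1, 2)
print(box_corners(1, 2))  # [(0.5, 1.0), (1.0, 1.0), (0.5, 1.5), (1.0, 1.5)]
```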

SLAM system update

So we have had some accuracy problems with the SLAM system. We adopted a Kinect2, which has a higher resolution than the Kinect1, and combined the RGB information from the camera with the time-of-flight information from the infrared camera and sensors integrated into the Kinect. This reduced the stuttering of the camera and increased accuracy a lot.

The system is, however, still not performing as expected. As it turns out, it is quite sensitive to motion in the background: when many people walk through its viewing field, it thinks it is moving relative to the image and sends incorrect data to the robot. We plan on solving this by pointing the camera lower to the ground and building a low barrier covered with a checkerboard pattern around the painting area. This will give the camera a lot of fixed, high-contrast points to pick up, so it remains more stationary.
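As a quick illustration of why the checkerboard should help: a corner detector of the kind SLAM front ends build on finds plenty of stable points on a checkerboard pattern. A small OpenCV sketch on a synthetic board, not part of our actual pipeline:

```python
import cv2
import numpy as np

# Build a synthetic 8x8 checkerboard image, 40 px per square.
board = (np.indices((8, 8)).sum(axis=0) % 2).astype(np.uint8) * 255
board = np.kron(board, np.ones((40, 40), dtype=np.uint8))

# Shi-Tomasi corner detection: lots of fixed, high-contrast points.
corners = cv2.goodFeaturesToTrack(board, maxCorners=200,
                                  qualityLevel=0.1, minDistance=10)
print(f"{len(corners)} stable corners found on the checkerboard")
```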

Science Fair!

Finally, the project is coming to an end: tomorrow is the Science Fair. We decided to present something related to the name of the project, so of course we chose the famous MICHELANGELO painting and the logo of our beloved university. The painting is 2 m × 2 m and is divided into 16 squares of 0.5 m each. The robot will paint the first square and then stop; after it stops, we will move it to the second location to begin painting the second square, and so on.

Modifications to the spray mechanism

Well, our spray mechanism worked well, but we wanted it to work flawlessly, so we decided to add plastic bearings to the mechanism so it would slide smoothly and with less friction. But then a problem occurred: we had used threaded rods, and the rods kept catching on the bearings, so the mechanism did not move smoothly. We changed the threaded rods to smooth rods, as you can see in the picture below, but then we faced another problem: the rods were slightly too big, so they still did not slide smoothly. Our solution was to turn the 10 mm rods down to 9.5 mm.


The next modification was to keep the spray cap in place so it could not rotate. Our solution was to drill a hole in the top of the mechanism and put a pin through the hole and through the cap.


Our visit to MX3D


Last week we visited MX3D, and it was a great experience. It was nice seeing how they ‘3D print’ steel: we were able to watch the robots at work live, and they were actually working on the bridge project. That project is to print a fully functional bridge and place it over one of Amsterdam’s canals.


Next we saw the famous couch. Well, we have to say the couch sits comfortably.


We also got to see the robots at work; in total there were 5 robots running. It was pretty cool to see the robots working, but you can’t say they are the fastest :).


And finally we could see and hold the ‘bike’. This bike has a completely 3D-printed frame, as you can see below with our team member Guus. The bike actually looked really good, and it wasn’t that heavy, nearly the same weight as an ordinary bike. The only difference was that the bike doesn’t have brakes. Overall we were very satisfied with the visit, and it was great meeting the MX3D team.



Modifications on the platform

So after some testing with moving the platform, we concluded that the wheels were too big for it. We weren’t able to move the platform easily, and turning was a nightmare, so we went searching for a new set of wheels, which resulted in the ones in the picture below.

These so-called ‘braking and locking casters’ are ideal for our platform for a number of reasons. First, they are much smaller, so the platform is much easier to guide. Second, these wheels can be locked: not only the backward and forward movement, but also the rotation. And last, these wheels look much cooler on the platform :).