It has been a while, but better late than never: here is our last post on the project and a recap of what worked and what didn’t. First off, we want to say we are happy with the final result. We want to thank everyone who came to the science fair, talked to us, and showed interest and love for the project. We hope that others will find this blog and build on our findings. Now, without further ado, the review:
The location tracking system.
As discussed before, we used the RGB information from a Kinect camera to track points and triangulate the position. Later we switched to the Kinect 2, which offered higher resolution and allowed us to use depth information via IR dots (check out this link to understand how it works). However, in the showroom during the science fair, the IR light coming through the glass ceiling interfered with the Kinect’s depth readings. We therefore wanted to switch back to RGB information, but unfortunately the system bugged out: we tried to get it back up, but it would not initiate. We hope that in the future a system without this limitation (or without bugs) can be created that works under different lighting conditions.
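For readers curious what the depth-based tracking boils down to, here is a minimal sketch (not our actual code) of turning a tracked marker pixel plus a Kinect depth reading into a 3D position using the standard pinhole camera model. The intrinsics `fx, fy, cx, cy` below are illustrative placeholders, not calibrated values for any specific Kinect.

```python
def pixel_to_point(u, v, depth_m, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Back-project a pixel (u, v) with a depth reading (metres) to a
    camera-frame (x, y, z) point via the pinhole camera model.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    These defaults are made-up example values, not real calibration data.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: a marker seen at pixel (300, 250), 2.0 m away
marker_xyz = pixel_to_point(300, 250, 2.0)
```

With two or more such points on the platform, its position and heading in the camera frame follow from simple geometry.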
The platform and displacement.
To recap on the changed wheels: they worked great! Of course the platform was not the prettiest, but it did the job properly. We also added a marker on the corner, which allowed us to accurately measure the displacement and compare it to the tracking system data. The center of gravity was nice and low, and the locked wheels created enough friction and inertia to keep everything still. A future platform would ideally have a motorized drive system. That way there could be feedback on the displacement measured by the camera, and you could link it to an external system to make it more user-friendly.
Splitting SVG files.
This worked very well! The lines were accurate and easy to parse into the ROS system. Also, once the robot received the accurate displacement, the lines continued quite seamlessly.
For future work, we would recommend writing a program that splits SVG lines by cutting them at the points where they intersect a grid. For an even more advanced system, it would be awesome if the robot could recognize which squares were within its reach and then reconnect the lines to make them more continuous.
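The grid-splitting step we are suggesting could look something like this: a hypothetical sketch that cuts a straight line segment wherever it crosses a vertical or horizontal grid line, so each sub-segment stays inside one cell. The cell size and coordinates are made-up example values.

```python
import math

def split_segment_on_grid(p0, p1, cell=0.5):
    """Cut the segment p0 -> p1 at every crossing with a grid of the
    given cell size, returning a list of (start, end) sub-segments."""
    (x0, y0), (x1, y1) = p0, p1
    ts = {0.0, 1.0}  # parameter values where we cut (endpoints included)
    for a0, a1 in ((x0, x1), (y0, y1)):  # vertical, then horizontal grid lines
        if a1 != a0:
            lo, hi = min(a0, a1), max(a0, a1)
            g = (math.floor(lo / cell) + 1) * cell  # first grid line past lo
            while g < hi:
                ts.add((g - a0) / (a1 - a0))
                g += cell
    ts = sorted(ts)
    pts = [(x0 + t * (x1 - x0), y0 + t * (y1 - y0)) for t in ts]
    return list(zip(pts[:-1], pts[1:]))

# A 1.1 m horizontal line on a 0.5 m grid gets cut into three pieces
segments = split_segment_on_grid((0.2, 0.2), (1.3, 0.2))
```

Grouping the resulting sub-segments per cell (and chaining ones that share endpoints) would then give the more continuous per-square paths mentioned above.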
Actuation and spraying.
The actuation system could have been improved by using a faster stepper motor. The full revolution required to press the can down took about 0.2 seconds. A faster press, combined with a spraying mask, would have made the lines look smoother and more precise. Accounting for the start and stop times within the robotic arm movement would also be a nice addition, e.g. by making the arm wait about half the rise time before moving and issuing the stop command just before it reaches the end of the line. Mechanically speaking, two motors rotating at the same time would prevent the can from getting stuck on the rails, which did not happen often but was annoying when it did.
So that was most of it. Some pictures of the science fair will follow shortly, and once again we want to thank everyone who helped think this project through and see it through to the end!
Finally, the project is coming to an end: tomorrow is the Science Fair, so we decided to present something related to the name of the project. Of course we chose the famous MICHELANGELO painting and the logo of our beloved university. The painting is 2 m × 2 m and is divided into 16 squares of 0.5 m × 0.5 m each. The robot will paint the first square and then stop; after it stops, we will move it to the second location to begin painting the second square, and so on.
Last week we visited MX3D, which was a great experience. It was nice to see how they ‘3D print’ steel, and we were able to watch the robots live while they were actually working on the bridge project. The goal of that project is to print a fully functional bridge and place it over one of Amsterdam’s canals.
Next we saw the famous couch. Well, we have to say the couch sits comfortably.
We also got to see the robots at work; in total there were five robots running. It was pretty cool to watch them, but you can’t say they are the fastest :).
And finally we could see and hold the ‘bike’. This bike has a completely 3D-printed frame, as you can see below with our team member Guus. The bike actually looked really good, and it wasn’t that heavy: nearly the same weight as an ordinary bike. The only difference was that the bike doesn’t have brakes. Overall we were very satisfied with the visit, and it was great meeting the MX3D team.
Hey guys, here is a quick summary of the meeting of 29-09, including target points for the next meeting.
For those who are not yet familiar with the project, the goal is to paint on a large scale with a robotic arm.
The main challenge lies in the accuracy and determining the position of the robot to properly continue the painting after being moved.
In the meeting we made a list of all the problems involved and then subdivided them into three categories:
- Accuracy and precision of the painted line. Possibly solvable by implementing a new nozzle or using painting masks.
An optimum painting distance and speed also has to be determined for the best result.
- Position tracking. The challenge here is to find a way to accurately determine the position of the robot in relation to the drawing.
Mechanical motion tracking and local visual confirmation might result in cumulative error; we will therefore focus on “global” visual confirmation and wireless position triangulation.
- Construction and interaction. We are aiming for a construction that can be operated by two people.
It also has to be able to stabilize itself (or be highly stable by itself) in order to operate the arm accurately.
During the upcoming week we will research these categories and discuss possible solutions as well as possible test setups.
Next Thursday we will have another meeting and also give an update on the progress of the project.