Spray Mechanism – First mechanical design

We started designing the spray mechanism by dividing the problem into smaller sub-problems: holding the can, fixing the spray direction, pressing the cap and mounting the mechanism on the robotic arm. We decided to fix the nozzle and push the can to get the paint out; this way the distance to the nozzle always stays the same. The first concept also involves a bench-vice mechanism to hold the can and a pin to hold the cap. Furthermore, an air piston is used to actuate the can.

First sketch of spray mechanism

After reviewing the first design we made a few simplifications to make the mechanism easier to manufacture and more convenient to use. First of all, we decided that using air pressure solely for the spray mechanism would be unnecessary, so we changed this to an electronic system. We also decided to fix the can by fitting it into a slot, and to leave out the pin. Instead of a (complicated) bench-vice system we decided to create a cup. By making the top part of the mechanism removable the can is able to slide in, fixing it in all directions.

After reviewing and redesigning, we started manufacturing the spray mechanism from aluminium.

Render

Representation of the first spray mechanism design. Note that the empty space below the can is reserved for the piston.

Position tracking – how we will tackle it

After a lot of research we narrowed our positioning technique down to a technology called SLAM (simultaneous localization and mapping). The main criteria for choosing were cost and accuracy. Cheap systems that use magnetic fields or triangulation of sound or radio waves are as of yet relatively inaccurate (10–20 cm on a 10×10 m field). The SLAM method uses only a stereo camera yet can achieve an accuracy of approximately 3 cm regardless of the size of the area. This is done by tracking high-contrast points in the image. By tracking their translation, the camera knows how it is moving through space relative to a large number of points.
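As a rough illustration of that last step (this is not the actual SLAM implementation, just the core geometric idea): given a set of tracked points before and after a camera move, a rigid motion can be recovered with the Kabsch algorithm. The sketch below does this in 2D with NumPy and perfect correspondences; a real system works in 3D with noisy stereo data.

```python
import numpy as np

def estimate_motion(points_before, points_after):
    """Estimate the rigid motion (rotation R, translation t) that maps the
    tracked points from one frame to the next (Kabsch algorithm)."""
    p = np.asarray(points_before, dtype=float)
    q = np.asarray(points_after, dtype=float)
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    # Cross-covariance of the centred point clouds
    H = (p - cp).T @ (q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Tracked points shifted by a known amount; the estimator should recover it.
before = [(0, 0), (1, 0), (0, 1), (2, 2)]
after = [(x + 0.5, y - 0.25) for x, y in before]
R, t = estimate_motion(before, after)
```

Here the recovered rotation is (numerically) the identity and the translation is the applied shift (0.5, −0.25).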

Currently we are working on using this open-source technology with a Kinect camera and translating the coordinates that we get from the software to the system driving the robotic arm. We hope that by implementing this correctly we can know the position and orientation of the robot's base at all times with an error margin below 10 cm.
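We don't have the camera-to-arm calibration worked out yet, but the coordinate translation itself boils down to a single homogeneous transform. A small sketch with placeholder calibration numbers (the rotation and offset below are made up for illustration, not measured values):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_to_base(point_cam, T_base_cam):
    """Express a point measured in the camera frame in the robot-base frame."""
    p = np.append(point_cam, 1.0)   # homogeneous coordinates
    return (T_base_cam @ p)[:3]

# Hypothetical calibration: camera mounted 0.2 m above the base, rotated
# 90 degrees about the vertical axis (placeholder numbers).
Rz = np.array([[0, -1, 0],
               [1,  0, 0],
               [0,  0, 1]], dtype=float)
T_base_cam = make_transform(Rz, [0.0, 0.0, 0.2])
p_base = camera_to_base(np.array([1.0, 0.0, 0.0]), T_base_cam)
```

Once the real calibration is known, only the rotation and translation in `T_base_cam` need to be replaced.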

For some (quite advanced) demos, check out these videos:
car driving while tracking its position
mapping your garden

First Meeting

Hey guys, here is a quick summary of the meeting of 29-09, including target points for the next meeting.
For those who are not yet familiar with the project, the goal is to paint on a large scale with a robotic arm.
The main challenge lies in the accuracy, and in determining the position of the robot so it can properly continue the painting after being moved.

In the meeting we made a list of all the involved problems and then subdivided them into three categories:

  1. Accuracy and precision of the painted line. Possibly solvable by implementing a new nozzle or by using painting masks.
    An optimum for painting distance and speed also has to be determined for the best result.
  2. Position tracking. The challenge here is to find a way to accurately determine the position of the robot in relation to the drawing.
    Mechanical motion tracking and local visual confirmation might result in cumulative error, so we will focus on "global" visual confirmation and wireless position triangulation.
  3. Construction and interaction. We are aiming for a construction that can be operated by two people.
    It also has to be able to stabilize itself (or be inherently stable) in order to accurately operate the arm.

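To make the cumulative-error concern from point 2 concrete: if every incremental measurement carries even a small systematic error, a dead-reckoned position drifts without bound, while a direct ("global") measurement keeps the error constant. A deterministic back-of-the-envelope sketch (the 0.5 mm bias per measurement is a hypothetical figure):

```python
steps = 1000
true_step = 0.01     # robot advances 1 cm per step
bias = 0.0005        # each incremental measurement is 0.5 mm off

true_position = true_step * steps
dead_reckoning = (true_step + bias) * steps  # per-step errors add up
global_fix = true_position + bias            # one direct measurement, one error

drift = dead_reckoning - true_position       # 0.5 m after 1000 steps
global_error = global_fix - true_position    # still only 0.5 mm
```

With random rather than systematic errors the drift grows more slowly, but it still grows with the number of steps, which is why we prefer a global position reference.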
During the upcoming week we will research these categories and discuss possible solutions as well as possible test setups.
Next Thursday we will have another meeting and also give an update on the progress of the project.