Until robots no longer need a human to control or monitor them, a user interface is needed. In this article I want to explore the functionality a user needs to control and monitor a robot.
The complexity of the UI (User Interface) or GUI (Graphical User Interface) depends on the complexity of the robot, its degrees of freedom and the number of sensors it has available.
Let’s say you have a simple robot platform that is able to drive and steer, and the user can control this robot platform remotely from a computer in another location, connected through WiFi.
There are a lot of things the user needs to operate this robot platform. The first thing the user needs are some buttons to drive and steer the platform: a button to move the robot platform forwards, one to move it backwards, and two buttons to turn it left and right.
To make the robot move more smoothly we probably also want combined controls, for example a button to make the robot drive forward while turning left. We also need something to control the speed. A simple thing like driving around is getting complex already.
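To make this concrete, here is a minimal sketch of how such button input could be mapped to motor commands, assuming a differential-drive platform (two independently driven wheels). The function name and input convention are illustrative, not a real API.

```python
def clamp(v: float) -> float:
    """Keep a wheel speed within the motor limits of -1..1."""
    return max(-1.0, min(1.0, v))

def drive_command(forward: float, turn: float, speed: float) -> tuple[float, float]:
    """Map button input to (left, right) wheel speeds.

    forward: -1 (backwards), 0 (stop) or 1 (forwards)
    turn:    -1 (left), 0 (straight) or 1 (right)
    speed:   0..1, taken from a speed slider in the GUI
    """
    # Turning is achieved by making one wheel spin faster than the other.
    left = speed * (forward + turn)
    right = speed * (forward - turn)
    return clamp(left), clamp(right)

# Forward with a left turn at half speed: the right wheel spins faster,
# so the platform curves to the left.
print(drive_command(1, -1, 0.5))
```

A combined "forward plus turn" press falls naturally out of this mapping, which is why a single dedicated button for it keeps the motion smooth instead of alternating between forward and turn commands.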
The screenshot below shows a good example of the controls just described.
Robot Graphical User Interface from http://www.gadgettastic.com
The first thing you see on the screen above is the streaming video from the camera at the front of the robot. If you control a robot from a remote location, you need sensors to let the user know what is going on. A video image is a good start because you can actually see what is happening. A mobile robot platform can give more information back to the user, for example:
- Distance to an object
- Bumper status (did I hit something?)
- Battery status
- Actual rotation of the wheels
- Current used by the motors
- Connection strength
- Path already driven
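The feedback listed above could be bundled into a single status message that the robot sends back to the GUI over the WiFi link. Below is a minimal sketch of such a message; all field names and units are illustrative assumptions, not part of any real protocol.

```python
from dataclasses import dataclass, field

@dataclass
class RobotStatus:
    """One telemetry update from the robot to the GUI (illustrative)."""
    distance_cm: float        # distance to the nearest object
    bumper_hit: bool          # did I hit something?
    battery_pct: float        # battery status, 0..100
    wheel_rotation_deg: float # actual rotation of the wheels
    motor_current_a: float    # current used by the motors
    signal_pct: float         # connection strength, 0..100
    path: list = field(default_factory=list)  # (x, y) points already driven

# The GUI can use the same message both to update gauges and to warn the user.
status = RobotStatus(
    distance_cm=42.0, bumper_hit=False, battery_pct=87.5,
    wheel_rotation_deg=180.0, motor_current_a=0.6, signal_pct=92.0,
    path=[(0.0, 0.0), (0.5, 0.0)],
)
if status.bumper_hit or status.distance_cm < 10:
    print("Obstacle warning!")
```

Keeping all sensor values in one structure makes it easy to extend the GUI later: a new sensor becomes one new field and one new widget, without touching the transport code.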