Controlling a Robotic Arm with Augmented Reality

When a patient with an upper limb disability is aided by a robotic arm whose control is imperfect, additional assistance is needed, and we provide it through augmented reality. Our system equips the assistant with a head-mounted augmented reality device and an internet connection, ensuring communication between patient and assistant for both remote supervision and control. The assistant can refine the control of the robotic arm while viewing, on the augmented reality glasses, a head-up display based on what the patient sees. Communication is established through PCs or mobile devices connected to the internet. With the patient's view and enhanced control over the robotic arm, the assistant can also interact with nearby smart objects.


Introduction and latest breakthroughs
Most robots are controlled by teleoperation, since many of their uses involve inaccessible, distant, dangerous, or otherwise remote locations [1].
Enhanced control devices are used to manoeuvre telerobots; one such control environment is Augmented Reality (AR), a relative of Virtual Reality (VR) in which the computer maintains a model of the environment and generates an augmented image for the user.
Generally, a robot mechanically imitates a human [2], so that its tasks replace, more or less effectively, human tasks. The means of communicating with a robot are now vast: a robot can exchange data with a computer or control system in many ways, Ethernet and Wi-Fi being the most popular, as they provide camera connections, control-system compatibility, and other advantages.
The control system's software application also has a graphical user interface, in our case based on Augmented Reality. AR combines a set of environments [3]: one is virtual, where all elements are computer-generated and overlaid on the real world. AR applications are nowadays present in many domains, from entertainment to rehabilitation; in our study, we use AR in the rehabilitation of patients with an arm deficiency.

The necessity of our concept
Our concept approaches the challenge differently from existing solutions. Compared with data gloves [4], our system is controlled through Augmented Reality, which allows the user to perform a large number of predefined movements.
Compared with traditional haptics, the AR-based haptics we propose could achieve far higher accuracy, thanks to the assistant's presence and control overrides, until the patient is fully calibrated and paired with the system.
The system supports the patient's natural reactions when reaching for an object, instead of demanding unnecessary ones, while its integrated Artificial Intelligence processes the data and restricts any unsafe actions.

Schematic principle
Block diagram: the real objects are perceived by the stereo camera (a). The object identification software [5] assigns each one an alias and locates it in the patient's workspace.
The captured images are transmitted live to the patient and the assistant for object selection (b, c).
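The aliasing and location step can be sketched as follows. This is a minimal illustration, not the actual software from [5]: the detection format, the alias scheme, and the coordinate values are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # class name reported by the recognizer, e.g. "cup"
    x: float     # position in the patient's workspace (metres, hypothetical)
    y: float
    z: float

def assign_aliases(detections):
    """Give each detected object a unique alias such as 'cup-1', 'cup-2',
    and map it to its location in the patient's workspace."""
    counts = {}
    workspace = {}
    for d in detections:
        counts[d.label] = counts.get(d.label, 0) + 1
        alias = f"{d.label}-{counts[d.label]}"
        workspace[alias] = (d.x, d.y, d.z)
    return workspace

objects = assign_aliases([
    Detection("cup", 0.30, 0.10, 0.05),
    Detection("apple", 0.45, 0.20, 0.05),
    Detection("cup", 0.55, 0.15, 0.05),
])
```

The aliases are what the patient and assistant would see overlaid on the live images when selecting an object.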
The robotic arm's hand can target the selected object in several ways:
-Directly, through the patient's natural arm gestures (a situation where the robotic arm holds the patient's own arm), reaching for the targeted object (f).
-By manoeuvring the robotic arm with a control pad, a joystick, or the application GUI (d).
-By object identification and location [7] in the patient's workspace (e, g). After the objects are identified and the desired one is selected, the robotic arm moves towards it (h). The selected object is gripped and brought close enough for the patient's healthy hand to manipulate it (i).
The robotic arm is controlled as follows: the patient controls it either by moving their own arm towards the target, also gripping and bringing objects closer, or by using an AR control device, which identifies objects, assesses the target's distance, and moves the arm towards the target object.
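Step (h), the arm moving towards the selected object, can be sketched as a bounded step per control cycle. The coordinate convention and the 5 cm step limit are assumptions for illustration, not values from our system.

```python
def step_towards(hand, target, step=0.05):
    """One control-cycle motion of the arm end towards the target:
    each axis moves at most `step` metres (a hypothetical safety limit),
    so the arm converges on the object over several cycles."""
    return tuple(
        h + max(-step, min(step, t - h))  # clamp per-axis displacement
        for h, t in zip(hand, target)
    )
```

Repeating `step_towards` until the hand coincides with the target reproduces the behaviour of mode (h); the same primitive could serve the joystick mode (d), with the target replaced by the operator's commanded direction.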
There is also an assistant, who can likewise control the arm, either with joystick-like control devices or through a separate AR system in which the assistant chooses the target objects and the arm's template movements with respect to the chosen object.

Describing the elements of the concept
System composition: a stereo camera that performs recognition of horizontal planes and identifies objects by shape; a robotic arm [6] attached either to a solid fixed frame (a hospital bed) or to a solid moving frame (a wheelchair); and an augmented reality display unit.
In the first scenario the patient gazes at the real objects in front of them (on the workspace) through an AR device and selects the recognized objects using hand gestures. A selected object is highlighted with an overlay, at which moment the robotic arm's hand targets it. The hand (white-background icon) represents the robotic arm's end, which cannot yet be overlaid, in projection, over the targeted object.

Fig. 2. Scenario 1 -Identified object (with patient)
The hand (alpha-blended icon) represents the robotic arm's end, which will grab the selected object.
In the second scenario the assistant, identified by face recognition, can manoeuvre the robotic arm through gestures performed in the active area of the controller.
Control aliases: the left circle commands the arm end's (hand's) motion in the vertical plane (x, z); the diagonal pointer commands the hand's motion back and forth along the y axis; the display-right cursor controls the grip on the selected object, between two states (tight/loose).
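The mapping from control region to arm action can be sketched as below. The region names, the gesture deltas, and the command tuples are hypothetical; only the three roles (vertical plane, depth axis, two-state grip) come from the description above.

```python
def interpret_gesture(region, dx=0.0, dy=0.0):
    """Translate an assistant's gesture in one on-screen control region
    into an arm command (names and format are illustrative assumptions)."""
    if region == "left-circle":
        # vertical-plane motion: screen dx/dy drive the x and z axes
        return ("move", dx, 0.0, dy)
    if region == "diagonal-pointer":
        # depth motion: the gesture drives the y axis only
        return ("move", 0.0, dy, 0.0)
    if region == "right-cursor":
        # grip toggles between the two states named in the text
        return ("grip", "tight" if dy > 0 else "loose")
    raise ValueError(f"unknown control region: {region}")
```

Restricting each region to one degree of freedom keeps the gestures unambiguous, which matters when the assistant is overriding the patient's control.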
The third scenario is very similar to the first, differing only in that object selection is done by the assistant. In the fourth scenario, the stereo camera picks up workspace data to be analysed for surface feature identification, i.e. where the objects are placed. The surfaces can be vertical (doors, walls, windows) or horizontal (tables, floors, beds).
These surfaces typically hold real objects: vertical ones hold paintings, TV sets, or hung clothes; horizontal ones hold food, electronics, tools, or medicine. The surfaces can carry overlaid messages and a grid to help depth perception. An object's coordinates can be established by counting grid cells left or right of a focal point, and back or forth from the edge of the surface closest to the user.
The apple, for example, has the coordinates 4 left-right and 2 back-forth. Here the hand is not near the selected object and cannot reach it, even with the arm fully extended.
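The grid coordinates above can be computed from workspace positions as sketched below. The 5 cm cell size and the zero-origin focal point are assumptions; only the counting convention (left-right from a focal point, back-forth from the user's nearest edge) is taken from the description.

```python
def grid_coordinates(obj_x, obj_y, focal_x=0.0, near_edge_y=0.0, cell=0.05):
    """Convert a workspace position (metres) into overlay-grid coordinates:
    columns counted left/right of the focal point, rows counted back from
    the surface edge nearest the user. Cell size is a hypothetical 5 cm."""
    col = round((obj_x - focal_x) / cell)   # left-right coordinate
    row = round((obj_y - near_edge_y) / cell)  # back-forth coordinate
    return col, row
```

With these assumptions, an apple 20 cm right of the focal point and 10 cm in from the near edge lands on cell (4, 2), matching the example above.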
To reach the object, auxiliary actions are required:
-If the patient is in a fixed frame (hospital bed), the frame is equipped with a moving table that slides closer to the patient or farther away.
-If the patient is in a moving frame (wheelchair), the frame moves towards the selected object to bring it within reach.
The robotic arm's control application will also control the other parts of the system (the moving table, the wheelchair, the camera setups).
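The decision between the two auxiliary actions can be sketched as follows. The frame labels, command names, and reach value are illustrative assumptions; the logic only mirrors the two cases listed above.

```python
import math

def auxiliary_action(hand, target, reach, frame):
    """If the fully extended arm cannot reach the target, request an
    auxiliary motion from the frame: 'fixed' frames slide the table,
    moving frames drive the wheelchair (names are hypothetical)."""
    distance = math.dist(hand, target)
    if distance <= reach:
        return None                     # the arm can reach on its own
    deficit = distance - reach          # how much closer we must get
    if frame == "fixed":
        return ("move-table", deficit)
    return ("move-wheelchair", deficit)
```

In practice the control application would issue these commands to the table or wheelchair and then re-check reachability before moving the arm.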

Conclusions
Our concept integrates Artificial Intelligence for object recognition, object location, and motion device control.
The entire system can be controlled from any distance. Future research would lead to further development of the control system.
The system can be mounted on fixed frames (a hospital bed), where the table moves, and on moving frames (a wheelchair), where the wheels move.
Future work could introduce more scenarios and more augmented reality user interfaces, compatible with patients of different ages (adults, children, or elderly patients), making the user interface more inviting and thus engaging the patient's enthusiasm for working on their rehabilitation even more.