Controlling an industrial robot model with a hybrid BCI based on EOG and eye tracking

The article describes the design process of a hybrid brain-computer interface based on electrooculography (EOG) and eye-centre tracking. The first part presents theoretical background on electroencephalography (EEG), electrooculography (EOG) and eye tracking, together with an overview of the literature on hybrid BCIs. The interface was built using bioactive sensors mounted on the head. Movements of an industrial robot model were triggered by eye-movement signals obtained from EOG and eye tracking. The built interface was tested in three experiments. In all experiments, three people aged 25-35 were involved, and 30 attempts per scenario were recorded, with a 1-minute break between attempts. The subjects' task was to move a cube from one table to the other.


Introduction
Nowadays, computers can be found everywhere around us. They serve us both for work and entertainment. Non-disabled people have no problems using devices such as a mouse or keyboard to communicate with them. Nevertheless, people are constantly looking for new interfaces for communicating with the computer. The part dealing with the interaction of people with disabilities with a computer seems particularly interesting; one example is systems based on computer vision. An interface was created using the Kinect camera, which is usually used as a gaming device. Thanks to it, people with paresis can issue commands by looking at the screen [1]. Using the same camera, an interface for sign language was created [2]. Another type of interface is the voice recognition system, implemented nowadays in almost every telephone [3,4]. Currently, a very interesting class of human-computer interfaces are those based on data coming straight from the human body. There are two main sources of these signals. The most commonly used are signals from muscles: using electromyography (EMG), hand gestures can be recognised [5]. Other interfaces are based on signals coming from the human brain, which is especially important for bedridden people. Due to a significant drop in the price of devices, the most common method of acquiring signals from the brain is electroencephalography [6].

Brain-Computer Interface (BCI)
Electroencephalography is a non-invasive method of recording the bioelectric activity of the brain directly from a subject's scalp. Electrodes are usually placed in the "10-20" system [7]; thanks to this system, the influence of the skull size of the examined person is eliminated. EEG is the basic method used in constructing brain-computer interfaces. Each BCI method is limited in the number of commands that can be generated. Therefore, hybrid BCIs are used more and more often. The division based on [8] is presented in Fig. 1. The main purpose of using more than one interface is to increase the number of commands and to improve the classification of existing ones.

Fig. 1. Division of hybrid BCIs [8]: a combination of two BCIs, or a BCI combined with another HCI; the components may work in parallel with the same function or serve different functions.

Electrooculography EOG
During an EEG test, artefacts can be observed. Artefacts are all signals that do not come directly from the brain [9]. They can come from the environment, such as changes in or movement of the electrodes, or be related to the bioelectric activity of the skin, as well as originate from the human body: muscle activity, eye movements and facial gestures. Artefacts are usually considered useless and are removed from the recording. The authors used the AF3, AF4, F7 and F8 electrodes to perform a very simple electrooculography test. Electrooculography (EOG) is a non-invasive diagnostic method with electrodes placed near the eyeballs. The eye is an electric dipole: changes in the position of the eyeball result in changes in the electric field, and this change is registered by the electrodes [10].
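The left/right eye movements registered by the frontal electrodes can be turned into commands by simple thresholding of the horizontal EOG deflection. The sketch below illustrates the idea; the threshold value and the sign convention are assumptions for illustration only (in this work the threshold was tuned per subject).

```python
import numpy as np

def classify_eog_window(samples, threshold=40.0):
    """Classify a window of horizontal EOG samples (microvolts,
    baseline-removed) as a left/right eye movement or rest.

    `threshold` is a hypothetical value; in practice it must be
    adapted to the person being examined.
    """
    peak = float(np.max(samples))
    trough = float(np.min(samples))
    if peak > threshold and peak >= -trough:
        return "right"   # positive deflection: gaze toward one electrode
    if trough < -threshold:
        return "left"    # negative deflection: gaze the opposite way
    return "rest"
```

For example, a window with a clear positive peak such as `[0, 10, 55, 30]` would be classified as a rightward movement under these assumptions.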

Eye Tracking
There are many ways to track eye movements. In this article, the authors used an algorithm based on means of gradients [11]. This algorithm has been implemented in the eyeLike library. The library first recognises the face in a picture; for this purpose, the Viola-Jones method based on Haar wavelets [12] was used. On the basis of the contrast between the iris and the sclera, the optimal centre c* of a circular object in an image is found as

c* = arg max_c { (1/N) Σ_{i=1}^{N} w_c (d_i^T g_i)^2 }

where g_i is the gradient vector at position x_i and d_i is the normalised displacement vector pointing from the candidate centre c to x_i. The weight w_c = I*(c_x, c_y), where I*(c_x, c_y) is the grey value of the smoothed and inverted input image at (c_x, c_y), was introduced because dark centres are more likely than bright centres. This is due to the fact that eyelids, eyelashes or wrinkles, which are also visible in the image, would otherwise dominate. Results are shown in Fig. 2.
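The objective above can be evaluated by brute force over every candidate centre. The following sketch is not the eyeLike implementation; it is a minimal NumPy version of the core objective, assuming the eye region has already been cropped (a production version would also smooth the image and discard weak gradients).

```python
import numpy as np

def eye_centre_means_of_gradients(grey):
    """Brute-force means-of-gradients eye-centre localisation.

    `grey` is a small 2-D grey-scale array (a cropped eye region).
    Returns (row, col) of the candidate centre c maximising
        (1/N) * sum_i  w_c * (d_i^T g_i)^2,
    where g_i are normalised image gradients at pixels x_i and d_i
    is the unit vector from c to x_i.
    """
    grey = grey.astype(float)
    gy, gx = np.gradient(grey)
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > 1e-6)       # pixels with a usable gradient
    gxu = gx[ys, xs] / mag[ys, xs]        # normalised gradients g_i
    gyu = gy[ys, xs] / mag[ys, xs]
    weight = grey.max() - grey            # w_c: inverted image, dark = high
    best_score, best_c = -1.0, (0, 0)
    for cy in range(grey.shape[0]):
        for cx in range(grey.shape[1]):
            dy, dx = ys - cy, xs - cx
            norm = np.maximum(np.hypot(dx, dy), 1e-9)
            dot = (dx / norm) * gxu + (dy / norm) * gyu   # d_i^T g_i
            score = weight[cy, cx] * np.mean(dot ** 2)
            if score > best_score:
                best_score, best_c = score, (cy, cx)
    return best_c
```

On a synthetic image containing a single dark disc, the returned position lands at or near the disc centre, mimicking the dark iris against the brighter sclera.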

Mitsubishi RV-12sl robot kinematics
Mitsubishi RV-12sl is a six-axis industrial robot. In order to solve the inverse kinematics, a D-H parameter table should be created. It is presented in Tab. 1. These parameters describe the structure of the robot shown in Fig. 3.
By using the same method and a simple geometric rule, the angle θ3 can also be calculated.
To obtain θ4 and θ6, the rotation matrix R_4^6 is used.
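Each row of a D-H table defines one homogeneous link transform, and chaining the six transforms gives the pose of the end effector. The sketch below shows the standard D-H convention; the two example rows in the usage note are illustrative only and are not the RV-12SL parameters from Tab. 1.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links from one row
    (theta, d, a, alpha) of a Denavit-Hartenberg table."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joints, dh_rows):
    """Compose the link transforms: T_0^n = A_1 * A_2 * ... * A_n."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joints, dh_rows):
        T = T @ dh_transform(theta, d, a, alpha)
    return T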

State of the Art
In article [14], the authors built a hybrid brain-computer interface combining the steady-state visual evoked potential (SSVEP) and the P300 paradigm. Thanks to that, they increased the precision of command detection. The authors of article [15] proposed a hybrid interface which fused electromyographic (EMG) and electroencephalographic (EEG) signals. Using this system facilitates the controlling process, and the system may be used even by people tired after a whole day.
Another hybrid interface was presented by the authors of article [16]. The ERS method was used as a switch to activate the functions of a four-step orthosis, while SSVEP was used to open and close the orthosis.
The authors of article [17] presented a combination of tasks that could inspire BCI systems that are more accurate than conventional BCIs, especially for users who cannot attain accuracy adequate for effective communication.

Emotiv EPOC+ Headset
The Emotiv EPOC+ headset (Fig. 4) is a low-cost neuroheadset. It can record data in the range of 0.16 to 43 Hz. It has 14 EEG electrodes and two additional reference electrodes, arranged according to the 10-20 system. The 16-bit ADC records data with a sampling rate of 256 samples per second. The headset uses saline-based wet electrodes [18].
Fig. 4. Emotiv EPOC+ headset [18].

Software
Two programs in C# were written for the purpose of this paper (Fig. 5). The first, written in Visual Studio, receives the EOG signal from the Emotiv EPOC+ using the Xavier SDK. The same program recognises the eye centre using a camera and the eyeLike library. In this program, the user can change the influence of the EOG and camera signals on the result and change the value of a threshold.
The second program was written in Unity 3D. It implements the kinematics model and gravitation, and all tests were carried out in it. Both programs exchange data via memory-mapped files.
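The memory-mapped file mechanism used between the two C# programs can be sketched in Python as follows. The 9-byte layout (one command byte plus one double for signal strength) and the file name are assumptions for illustration, not the paper's actual format.

```python
import mmap
import os
import struct
import tempfile

# Hypothetical shared layout: 1 command byte + 1 double (9 bytes).
path = os.path.join(tempfile.gettempdir(), "bci_demo.mmap")
with open(path, "wb") as f:
    f.write(b"\x00" * 9)

# Writer side (plays the role of the EOG/eye-tracking program).
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 9)
    mm[0:9] = struct.pack("<Bd", 1, 0.75)  # command 1 = "move right", strength 0.75
    mm.close()

# Reader side (plays the role of the Unity robot simulation).
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 9)
    command, strength = struct.unpack("<Bd", mm[0:9])
    mm.close()
```

In the actual system both sides keep the mapping open and poll it continuously; here the write and read are sequential only to keep the sketch self-contained.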

Tests and results
In all experiments, three people aged 25-35 were involved. Each of them participated in 5 to 10 attempts of each scenario. Between each attempt, the respondent had a 1-minute break. An eye movement to the right or left triggered the movement of the robot. To change the direction of movement, the tested person closed and opened their eyes twice within no more than 2 seconds. In the first scenario, the tested person practised using the system (Fig. 6), and the threshold was adapted to the person being examined. The subject's task was to raise the cube using the industrial robot model; to simulate the gripper, the cube was attached to the end of the manipulator at the moment of contact. In the second scenario, the subjects attempted to move the cube from one virtual table to another in the shortest possible time (Fig. 7). All tests were completed successfully. The fastest attempt lasted 15.6 seconds, the longest 27.7 seconds, and the average duration of the trials was 23.8 seconds. Fig. 8 shows the path of the end point of the robotic arm in all axes of movement for one of the attempts.
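The double-blink switching rule can be sketched as a small state machine. The cyclic X → Y → Z axis loop below is an assumption inferred from the conclusion's remark that changing from the Y axis to X requires a double change; the 2-second window matches the protocol above.

```python
class DirectionSwitch:
    """Advance to the next movement axis when two blinks arrive
    within `window` seconds (2 s in this work). Timestamps are in
    seconds; the axis sequence is a hypothetical simplification."""

    def __init__(self, window=2.0):
        self.window = window
        self.last_blink = None
        self.axes = ["X", "Y", "Z"]
        self.current = 0

    def on_blink(self, t):
        if self.last_blink is not None and t - self.last_blink <= self.window:
            self.current = (self.current + 1) % len(self.axes)  # double blink: next axis
            self.last_blink = None                              # consume the pair
        else:
            self.last_blink = t                                 # wait for a second blink
        return self.axes[self.current]
```

Under this sketch, two blinks 1 s apart advance from X to Y, while an isolated blink (more than 2 s after the previous one) leaves the axis unchanged.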
In the third scenario, the subjects again attempted to move the cube from one virtual table to another in the shortest possible time (Fig. 9), but this time a wall between the tables was added. In this experiment, not all attempts were successful: approximately 85% of the tests were completed successfully. The fastest attempt lasted 19.2 seconds, the longest 38.7 seconds, and the average trial duration was 37.8 seconds.

Conclusion
The article describes the design process of a hybrid brain-computer interface based on electrooculography (EOG) and eye-centre tracking. The authors have shown that satisfactory results in controlling an industrial robot model can be obtained using only the electrodes already placed as standard in the Emotiv EPOC+ headset and a standard web camera. It was demonstrated that moving objects with the robot and performing more complicated tasks, such as avoiding obstacles, is possible. It is also noticeable that the length of the trials decreased with practice. The obtained results confirm that this method of controlling an industrial robot is intuitive. Most errors in the third scenario resulted from human mistakes; there were no incorrect classifications in the trials. The least intuitive element is the way the axes of movement are selected: the selection in the loop is ambiguous, a specific axis cannot be chosen directly, and changing from the Y axis to the X axis requires a double change. This moment was crucial for the times obtained. In future work, the authors want to add an additional interface based on the SSVEP method and to control a real industrial robot.
The work described in this paper was funded from 02/22/DSMK/1458.