Human Arm Motion Capture using Gyroscopic Sensors

Abstract

By using rudimentary microcontroller chips that receive data from sensors and transmit it to a computer system through a virtual serial port, the motion of many objects, bodies and joints can be captured. Capturing motion and reproducing it live is not the only use for the data: recording and studying motion data can save considerable work in a wide range of domains. Using the simplest capture methods also makes the data widely accessible for learning and editing, and allows the development of systems that use very little processing power, granting data access to less capable computers. We propose using two MPU-6050 MEMS sensors and an Arduino UNO microcontroller, connected to a computer for data acquisition, to capture the motion of a human arm and reproduce it in a projected environment. Other experiments, conducted by other researchers and developers, have used a higher number of sensors together with much more complex data acquisition and recording systems, whereas our research reduces the number of sensors to just two. One notable innovation of this system in particular is that we have virtually hooked the end of one sensor to the tip of the other, creating a virtual motion chain.


Introduction
During the last decade, multiple approaches to the techniques and methods of motion capture have guided motion capture processes, regardless of the work domains they have been incorporated in [1]. Multiple sensing technologies [2,3], ranging from accelerometers, magnetometers and gyroscopes to cameras and infrared sensors, have all been put to the test in the race to determine which sensor is most fit for the job [4][5][6].
These sensors each have their own way of perceiving motion or displacement.
Here is a brief description of each category:
1) Accelerometers measure, simply put, the acceleration with which they are moved.
2) Magnetometers, although only useful in magnetic fields, can theoretically detect mechanical torque from a magnetic perspective.
3) Gyroscopes perceive angular velocity; by construction type, they can be built around a) spinning rotors; b) ring lasers; c) a vibrating mass.
4) Cameras, usually stereoscopic, can perceive depth of field, which matters for motion capture in particular.
5) Infrared sensors can also work as distance-measuring sensors, using amplitude response and response time.
The technique we use, however, employs the MPU-6050, a sensor combining a 3-axis accelerometer and a 3-axis gyroscope, capable of communicating over the I2C interface, which requires only two signal lines. Two such devices are connected to an Arduino UNO development board, and the board in turn to a computer's USB port, with the Arduino IDE and the Processing IDE on the software side. We chose the above-mentioned devices and software not only for their very low resource consumption, but also for operator safety, since the voltages never exceed 5 V.
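As an illustration of what the MPU-6050 delivers, its raw signed 16-bit register readings can be converted into physical units using the default full-scale sensitivities from the datasheet (±2 g and ±250 °/s). A minimal host-side C++ sketch; the constants are the datasheet defaults, not values measured in this work:

```cpp
#include <cstdint>

// Default full-scale sensitivities from the MPU-6050 datasheet:
//   accelerometer range ±2 g     -> 16384 LSB per g
//   gyroscope     range ±250 °/s -> 131 LSB per °/s
constexpr double kAccelLsbPerG  = 16384.0;
constexpr double kGyroLsbPerDps = 131.0;

// Convert a raw signed 16-bit accelerometer sample to g.
double rawAccelToG(int16_t raw) { return raw / kAccelLsbPerG; }

// Convert a raw signed 16-bit gyroscope sample to degrees per second.
double rawGyroToDps(int16_t raw) { return raw / kGyroLsbPerDps; }
```

Other full-scale ranges (±4 g, ±500 °/s, etc.) simply change the divisor, as listed in the datasheet.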

State of the art
Wearable devices for motion capture have proliferated remarkably during the last few years [7,8], yet in the coming years they may become obsolete, as non-invasive technologies are on the rise [9]. Gait analysis (and other motion analysis) is well established [10] in the clinical assessment of movement disabilities. Although the causes of these disabilities are numerous, reproducing the correct movements is an achievable goal.

The motion capture systems in use vary depending on multiple forms of demand, as shown in Fig. 1 [11], and, of course, on the body area treated; the distribution chart expresses the demand results. We take this as support for the idea that no motion capture, treatment, or motion-reproduction technology should be researched without demand, implying purpose. The trajectory of research that we have taken is shown in the figure below (Fig. 2).

Our system is a non-optical motion capture system that uses multiple mathematical and computational approaches. Non-optical systems are mostly based on inertial sensors with incorporated accelerometers, gyroscopes and magnetometers, which record movement-associated data in an integrated device that is low cost, precise and wearable. IMU sensors are already part of the medical domain, at the cost of requiring supervised try-outs and drift-avoidance configurations [12].

In the study of exoskeletons and human-computer interaction, motion capture technologies can also employ machine learning and neural networks [13]. That kind of approach is important not only for analysing the recorded datasets, but also for generating more natural and, respectively, more ergonomic motion. In deep learning, a few different architectures can be used for different purposes [14]. Deep Belief Networks are composed of several fully connected layers.
Convolutional Neural Networks are inspired by the hierarchical structure of the human visual cortex. Recurrent Neural Networks are specialized in processing series of values and sequences, having the ability to capture long-distance dependencies.
Virtual reality [15] is also a domain where our work can be applied; moreover, virtual reality can transform the data that our system records and generates into whatever form can be imported, reproduced and manipulated in a virtual environment. In the field of virtual reality, the benefit can be indoor exercising in confined spaces while giving the users (patients) better insight [16] into their movement.

System design
The data trajectory through our system works according to the following figure:

Fig. 3. A concise version of the data trajectory, from the data acquisition to the user's interpretation of the movement.
The data from the two MPU-6050 sensor kits is picked up and sent to the Arduino UNO board through the I2C interface, and the Arduino UNO sends the data to the PC over a serial connection (at 115200 baud in our case) on port COM5 (a virtual serial port). On the PC, the quaternion data is processed in the software (Processing IDE): the data is not only stored, but also turned into the motion of a 3D-rendered arm-shaped object. In this way the data is used live, in a discrete type of processing (the sensor's 8 kHz sample rate), and the motion of the 3D-rendered arm is displayed at a desired frame rate (up to 120 FPS).

We used the data samples with the default error rates published in the MPU-6050 datasheet, taken "as is"; we therefore did not test the sensor data precision, but the captured motion looked as natural as one could expect. Future experiments and research will subject all the errors to exhaustive testing, dedicated to improving the system.

One major advantage regarding the video processing output is that it uses the classic OpenGL rendering interface, which nowadays comes with any stock video card for desktops and laptops, is cross-platform, and runs smoothly even in software rendering on non-OpenGL-capable video cards. There is a catch, however: if the Arduino UNO board is not first tested with the File.ino script, which reads the data from known sensor addresses and known virtual ports, the Processing IDE program will not know how to acquire the data. Neither would the Arduino UNO, on its own, be able to properly separate the read values. Connecting both sensors to the Arduino UNO was very tricky, as the addressing system is hardwired into the sensor boards; a special pin configuration changes the device's I2C address.
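The paper does not specify the exact serial line format, so as a hypothetical example, assume the Arduino prints the two quaternions as eight whitespace-separated numbers per line (w x y z for each sensor). PC-side parsing could then be sketched as follows (the line format and names are illustrative assumptions, not the actual protocol used here):

```cpp
#include <sstream>
#include <string>

struct Quaternion { double w, x, y, z; };

// Parse one serial line of the assumed form
// "w1 x1 y1 z1 w2 x2 y2 z2" into the two sensor quaternions.
// Returns false if the line does not contain eight numbers.
bool parseLine(const std::string& line, Quaternion& upperArm, Quaternion& forearm) {
    std::istringstream in(line);
    return static_cast<bool>(in >> upperArm.w >> upperArm.x >> upperArm.y >> upperArm.z
                                >> forearm.w  >> forearm.x  >> forearm.y  >> forearm.z);
}
```

A fixed one-line-per-sample text format like this is what lets the PC program separate the two sensors' values unambiguously, which is why the board-side script must be verified first.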
In the Processing IDE software, replicating the MPU-6050 movements on the animated arm requires converting the serial data from the two sensors into quaternion values.

Mathematical model
Based on the frames from Fig. 5, applying the Denavit-Hartenberg formalism to the human upper limb, we get the parameters listed in Table 1.

The general form of the Denavit-Hartenberg matrix that describes the relative movement from one reference frame to the next is:

$$
A_i =
\begin{bmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix} \quad (1)
$$

So we get five such matrices which, multiplied together, give us the movement matrix $T_{05}$ of the final element in the base reference frame, that is, the direct kinematic mathematical model:

$$
T_{05} = A_1 A_2 A_3 A_4 A_5 ; \quad (2)
$$
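The direct kinematic model can be evaluated numerically. Below is a small C++ sketch that builds the standard Denavit-Hartenberg transform between consecutive frames and chains the link transforms by matrix multiplication; the parameter values are placeholders, since the actual upper-limb values belong to Table 1:

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Standard Denavit-Hartenberg transform between consecutive frames,
// from joint angle theta, offset d, link length a and twist alpha.
Mat4 dh(double theta, double d, double a, double alpha) {
    const double ct = std::cos(theta), st = std::sin(theta);
    const double ca = std::cos(alpha), sa = std::sin(alpha);
    Mat4 T{};
    T[0] = {ct, -st * ca,  st * sa, a * ct};
    T[1] = {st,  ct * ca, -ct * sa, a * st};
    T[2] = {0.0, sa,       ca,      d};
    T[3] = {0.0, 0.0,      0.0,     1.0};
    return T;
}

// 4x4 matrix product, used to chain the link transforms:
// T05 = A1 * A2 * A3 * A4 * A5.
Mat4 mul(const Mat4& A, const Mat4& B) {
    Mat4 C{};  // zero-initialized accumulator
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}
```

The last column of the chained product gives the position of the final element in the base frame, which is how the model can be cross-checked against the sensor-driven animation.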

Conclusions
The use of robust, small-sized components and present-day technology allows us to build compact devices that advance the state of wearables. Improvements in wearables, regardless of purpose, increasingly integrate wearable technology into everyday life. Multi-purpose sensors can provide a lot of data; therefore, in the future, the recording and management of datasets will become more accessible, and the use of machine learning will be facilitated. The degrees of freedom available to motion capture are easily enlarged by the methods of data transmission, conversion and processing. Regarding affordability, not only is the cost of the components very low, but for skilled users it could always become a "do it yourself" project. It can also be stated that this project can be carried out with free software from one end to the other, making it one of the very few equivalents of a professional exercise solution for rehabilitation, recreational and technical purposes alike.
Future research and development will involve testing the system on multiple users, and the recording of their opinions and experiences while using this motion capture system.