Interfaces Used for Smart Glasses Devices in Augmented Reality Projects

This paper addresses the theme of interfaces for devices usable in virtual reality and augmented reality projects. These are typically implemented with smart glasses. Today's device interfaces are designed for use on flat screens, such as monitors, TVs, or smartphone screens. These screens are located some distance away from the user's eyes, while the lenses of smart glasses are just a few millimeters away. Smart glasses interfaces will follow several patterns: some framed within the user's field of view, others requiring sliding across their surface. Every person's field of view is unique, and this requires a calibration of the device so that the graphics are readable.


Introduction
Interfaces have the role of helping the elements of a system communicate with each other, in our case the human-machine system. This paper deals with the theme of interfaces for devices usable in virtual reality and augmented reality projects. These are usually implemented with smart glasses.
Today's device interfaces are designed for use on flat screens, such as monitors, TVs, or smartphone displays. These displays are located some distance away from the user's eyes, while the lenses of smart glasses are just a few millimeters away. Smart glasses interfaces will follow several patterns: some lie within the user's field of view, while others require exploration of their surface.
Every person's field of view is unique, and this requires calibration of the device for the graphical elements to be readable. The field of view is much larger than the surface of ordinary digital display devices, because the display can be located in space, relatively close to the eye. Conventional displays are located at 2-5 cm (virtual reality headsets), 20-40 cm (smartphones), 30-50 cm (desktop or laptop monitors), or 2-5 m (TVs) [1], while the lenses of smart glasses are at a distance of only 15 mm. The display of a laptop, for instance, covers between 20 and 30% of the field of view, which suggests that more graphical elements can be implemented inside the field of view. This depends on the user's focal length, the interface type, and the need to display those graphical elements.

* Corresponding author: monicaleba@upet.ro
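The difference in coverage can be illustrated with the visual angle subtended by a flat display; a minimal sketch, assuming illustrative display widths (the 34 cm laptop width and 4 cm lens width are assumptions, not values from the paper):

```python
import math

def visual_angle_deg(width_m: float, distance_m: float) -> float:
    """Horizontal visual angle (in degrees) subtended by a flat
    display of the given width viewed from the given distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# A 34 cm-wide laptop display at 50 cm versus a 4 cm-wide
# smart glasses lens surface at the 15 mm distance cited above.
laptop_deg = visual_angle_deg(0.34, 0.50)    # ≈ 37.6°
glasses_deg = visual_angle_deg(0.04, 0.015)  # ≈ 106.3°
```

Against a binocular horizontal field of view of roughly 200°, the laptop figure is broadly consistent with the 20-30% coverage mentioned above, while the near-eye surface can span most of the field of view.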

Field of View
The field of view has a dynamic profile, since it must be adapted to each individual. To generate its perimeter, several values must be determined, given by the user's focal length (the focus point being considered the zero point of the reference system) and the limit points of eye mobility for horizontal, vertical, and oblique downward movements (two calibration points, left and right, at angles of 30 and 60 degrees to the vertical). The surface within the perimeter of the field of view is called the active surface, and what lies outside this perimeter is the inactive surface (Fig. 1.).
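Once the calibration points are collected, deciding whether a gaze point falls on the active or the inactive surface reduces to a point-in-polygon test over the calibrated perimeter; a minimal sketch (the coordinate values in the comments are illustrative assumptions):

```python
def in_active_surface(point, perimeter):
    """Ray-casting test: is the gaze point (x, y) inside the
    calibrated field-of-view perimeter (a list of (x, y) vertices)?"""
    x, y = point
    inside = False
    n = len(perimeter)
    for i in range(n):
        x1, y1 = perimeter[i]
        x2, y2 = perimeter[(i + 1) % n]
        # Count crossings of a horizontal ray cast from the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# e.g. a square calibration perimeter in screen coordinates:
# in_active_surface((5, 5), [(0, 0), (10, 0), (10, 10), (0, 10)])
```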

The interface
Interfaces are the ordinary computer-based communication tools of human-machine or computer-mediated human-human systems [2]; they must not generate additional cognitive load, and the interaction between the two systems must not distract the user from the ongoing work [3]. The interfaces identified here are for augmented reality devices whose projection displays are close to the eye, such as smart glasses. To control the interactivity elements (buttons, text fields, multimedia files), eye tracking devices are used. These interfaces can cover the whole Reality-Virtuality Continuum [4], from Real to Virtual, positioned on two levels (Fig. 2.).

First Level Virtual
The interface is fully displayed on the device's display, in the active area, where all the interactive graphical elements are located; they remain displayed for as long as the user desires [5]. They are accessed with the eyes; head movements are not taken into account. Inside the active surface area, contextual information is displayed regarding environmental elements, smart object interfaces, and personal information (various other useful functions inherited from the current universe of mobile devices) (Fig. 3.).
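Accessing the interactive elements with the eyes reduces, in the simplest case, to hit-testing the gaze point against the element bounds; a sketch (the `Element` record and the coordinates are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Element:
    """An interactive graphical element placed in the active area."""
    name: str
    x: float
    y: float
    w: float
    h: float

def element_under_gaze(gaze, elements):
    """Return the first element whose bounding box contains the
    gaze point, or None if the gaze is on empty active surface."""
    gx, gy = gaze
    for e in elements:
        if e.x <= gx <= e.x + e.w and e.y <= gy <= e.y + e.h:
            return e
    return None
```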

Second Level Virtual
The interface exposes only a part of its surface, as much as fits in the active area; the rest can be accessed by dragging the field of view over the interface surface. This is useful in virtual, mixed, or 360-degree photo-video environments, or in augmented reality using recognition of surrounding objects. Head movements play an important role in exploring the inactive interface area (Fig. 4.).
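Dragging the field of view over a larger interface surface can be sketched as a mapping from head angles to a scroll offset; a minimal illustration (the pixel-per-degree gain and the surface and viewport sizes are assumed values):

```python
def viewport_offset(yaw_deg, pitch_deg, px_per_deg=20.0,
                    interface_size=(3840, 2160), view_size=(1280, 720)):
    """Map head yaw/pitch to a scroll offset over a large interface
    surface; clamp so the viewport never leaves the surface."""
    iw, ih = interface_size
    vw, vh = view_size
    # Centered at zero head rotation; yaw pans right, pitch pans up.
    ox = (iw - vw) / 2 + yaw_deg * px_per_deg
    oy = (ih - vh) / 2 - pitch_deg * px_per_deg
    ox = max(0, min(iw - vw, ox))
    oy = max(0, min(ih - vh, oy))
    return ox, oy
```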

First Level Real
Smart objects (whether or not connected to the Internet) will be present in the environment. These objects emit presence signals so that they can be easily located by AR mobile devices and, upon access, display an interface inside the active area. Each smart object displays an access icon which, after being selected, shows the controls in a custom interface. If a smart object performs an action, i.e. a step in the selected activity algorithm, it is highlighted by a warning symbol (for example, in the form of a triangle). These symbols indicate a certain functionality without providing additional information [3]. After the action is executed, the triangle symbol highlights the next step, if one exists. The icons of smart objects appear in the active desktop perimeter when the user enters the communication area. Communication between the AR device and the smart object is usually accomplished over a Bluetooth wireless connection. After using the smart object, the interface is closed by user action, and the access icon is removed from the active desktop perimeter when the user exits the communication area (Fig. 5.).

Fig. 5. The first level real interface. Most of the included interactivity elements belong to the smart objects around the user. a. the active area; b. smart object icons; c. smart object interface; d. the "circle" symbol indicates an action that takes place throughout the smart object's entire activity period (e.g. device power-up); e. the "triangle" symbol draws the user's attention that this step is in progress; f. the "square" symbol indicates the steps already taken or the next steps.
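The icon lifecycle described above, with icons appearing when the user enters a smart object's communication area and disappearing on exit, can be sketched as follows (the class name, the 10 m range, and the 2D positions are illustrative assumptions; real discovery would rely on the Bluetooth signal range):

```python
class ActiveDesktop:
    """Tracks which smart object icons are shown in the active
    desktop perimeter, based on the user's distance to each object."""

    def __init__(self, range_m=10.0):
        self.range_m = range_m   # assumed communication range
        self.icons = set()

    def update(self, user_pos, objects):
        """objects maps an object name to its (x, y) position.
        Icons are added on entering range and removed on exit."""
        ux, uy = user_pos
        for name, (ox, oy) in objects.items():
            near = ((ux - ox) ** 2 + (uy - oy) ** 2) ** 0.5 <= self.range_m
            if near:
                self.icons.add(name)
            else:
                self.icons.discard(name)
        return sorted(self.icons)
```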

Second Level Real
The environment with its component parts now behaves as an interface, when applications exist to identify these elements. The cameras mounted on the augmented reality device take snapshots of the environment, and these images are analyzed using applications for recognizing surrounding objects. Once these have been identified, contextual information is displayed; or, if those objects are registered in a local database as smart objects, their access interfaces can be downloaded. The interfaces of the identified smart objects will be able to replace most of today's standalone applications.

Interface Browsing
Depending on the type of interface, browsing can be accomplished by controlled eye movements within the active area perimeter or by controlled head movements [3] to explore the inactive area. The eye or head movement tracking systems must be calibrated in advance to determine the reference coordinates: the zero point and the extreme left-right and up-down mobility points. Anatomically, the eyes can rotate from a "zero" center up to a maximum of 60° left-right, 60° upwards, and 75° downwards, while the head can move 60° left-right, 55° up, and 70° down (Fig. 6.).

Fig. 6. Eye and head movement angles.
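The mobility limits above translate directly into clamping ranges for the calibrated tracking coordinates; a minimal sketch using the angles stated in the text (the sign convention is an assumption):

```python
# Rotation limits in degrees, from the angles stated above
# (negative = left or down, positive = right or up).
EYE_LIMITS = {"left": -60, "right": 60, "up": 60, "down": -75}
HEAD_LIMITS = {"left": -60, "right": 60, "up": 55, "down": -70}

def clamp_rotation(h_deg, v_deg, limits):
    """Clamp a tracked horizontal/vertical rotation, relative to
    the calibrated zero point, to the anatomical limits."""
    h = max(limits["left"], min(limits["right"], h_deg))
    v = max(limits["down"], min(limits["up"], v_deg))
    return h, v
```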

The control system
Control of the interactivity and browsing elements must be exercised through consciously controlled movements [6] (smooth pursuit) [1], especially if control is done with the eye movement device. Consciously controlled movements could be eye shifts between fixed calibration points and the central point x. For example, xDx can mean "Move left", xBx "Move right", xAx "Select an interactivity element", and xCx "Deselect an interactivity element". Accessing an element can be accomplished by fixing the gaze on that element [7]. The time required to fix the gaze is the trigger of the interactive element selection command [1], and the selection can be visually highlighted (Fig. 7.).

Fig. 7. Angles of the eye and head movement.
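The dwell-time trigger described above can be sketched as a small state machine (the 0.8 s threshold is an assumed value, not from the paper):

```python
class DwellSelector:
    """Trigger selection after the gaze rests on the same element
    for `dwell_s` seconds; looking away resets the timer."""

    def __init__(self, dwell_s=0.8):
        self.dwell_s = dwell_s
        self.target = None   # element currently under the gaze
        self.since = None    # time the gaze arrived on it

    def update(self, element, t):
        """Feed the element under the gaze at time t (seconds);
        return the element once the dwell threshold is reached."""
        if element != self.target:
            self.target, self.since = element, t
            return None
        if element is not None and t - self.since >= self.dwell_s:
            self.since = t   # re-arm to avoid repeated triggers
            return element
        return None
```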
Interactivity elements are all the multimedia elements that can interact to control the accessed interfaces. Depending on the type of the selected item, additional buttons will appear to control that item. The points in the focal area, those in the direction of sight, will have a better resolution than those farther from the focal point, for a more natural video rendering [8].
The selection of interactivity items can also be done by moving a cursor over the object to be selected [9]. The cursor can be moved:
- with a handheld controller or one integrated into the AR device;

Conclusions
In this article, four types of interfaces for augmented reality were presented, classified by levels, together with their specific exploration and control methods. They are built to use the active area of the field of view, and their interactivity elements can be, in addition to the usual graphical elements (buttons, images, text), the images or interfaces of smart objects in the environment. These interfaces, "attached" to different objects both real and virtual, can be accessed online, without the same service needing to create and install stand-alone applications.