Application of Cause-Effect-Networks for the process planning in laser rod end melting

MATEC Web of Conferences 190, 15005 (2018), ICNFT 2018, https://doi.org/10.1051/matecconf/201819015005
In micro manufacturing, a precise configuration of manufacturing processes constitutes an essential factor for success. The continuing miniaturization of work pieces results in ever decreasing tolerances, whereas machines and processes become more and more specialized. As a result, a precise determination of each process result is important to guarantee the final product quality. Unfortunately, so-called size effects often prevent the direct transfer of knowledge from the area of macro manufacturing. To cope with these effects, finite element simulations provide a suitable tool to simulate forming processes and their results in advance and to perform parameter studies in order to analyze the process and effect interdependencies. Unfortunately, these simulations usually require a rather long computing time, so that only simulations for a small subset of the available parameter range can be performed in a reasonable planning interval. In this context, this article presents an application of the method "Micro Process Planning and Analysis" (µ-ProPlAn) for the configuration of laser rod end melting, which is used to create preforms for further forming processes. This method uses cause-effect networks to combine expert knowledge with methods from artificial intelligence to estimate the result of laser melting processes quickly. For this purpose, the cause-effect networks are trained using a finite element simulation of the laser process with different process parameters and varying rod diameters. Results show a high accuracy for the prediction of the finite element simulation results. This article focuses on the validation of these cause-effect networks in comparison to the real laser rod end melting process and demonstrates how these models can be used to predict the resulting volume, eccentricity, and largest diameter of the solidified preform for different process configurations.

Keywords: Laser Micro Machining, Simulation, Predictive Model


Introduction
During the last decades, the demand for metallic micro parts continuously increased. On the one hand, these components become increasingly smaller while, on the other hand, the complexity of their shapes and their functionality constantly increase [1], [2], [3]. Major factors contributing to this development are a rising number and complexity of applications for micro components as well as an increasing demand for such components in the markets of medical and consumer electronics [3]. Besides the growing demand for Micro-Electro-Mechanical-Systems (MEMS), generally produced using methods from the semi-conductor industry, the demand for metallic micromechanical components grows similarly. Often used as connectors for MEMS, casings, or contacts, these components cannot be manufactured using semi-conductor technology but are manufactured by applying processes from micro forming, micro injection, micro milling etc. [2], [4]. In this context, cold forming processes constitute an option for an economic mass production of metallic micromechanical components, as these processes generally provide high throughput rates at comparably low energy and waste costs [5].
An efficient industrial production of such components usually requires high throughput rates of up to several hundred parts per minute [6], whereby very small tolerances have to be achieved. These tolerances result from the components' small geometrical dimensions, which, by definition, are below one millimeter in at least two dimensions [7]. Additionally, so-called size-effects, defined as "deviations from intensive or proportional extrapolated extensive values of a process, which occur when scaling the geometrical dimensions" [8], can result in increasing uncertainties and unexpected process behaviors when processes, work pieces, and tools originating from the macro domain are miniaturized.
As a result, the planning and configuration of process chains is seen as a major factor of success for an industrial production of metallic micromechanical components [9]. To cope with the occurrence of size-effects, companies require tools and methods for a highly precise planning, not only of the single processes and their configurations but also spanning the complete process chain. Thereby, they have to consider interrelationships between processes, materials, tools and devices. In micro manufacturing, small variations in single parameters or characteristics can have a significant influence along the process chain and can finally impede the compliance with the respective tolerances [10].
Consequently, the planning and configuration of processes requires highly detailed information on the effects and behaviors of the involved processes. As companies often face a lack of long-standing expertise with particular processes, physical or mathematical simulation models are typically used to provide this information so that different process configurations or process chains can be analyzed and compared thoroughly. Unfortunately, these simulations often take a long time to compute and have to be performed individually for varying process parameters and different process configurations. This is especially disadvantageous since the continuous development of new processes and machines leads to a broad range of available process chains for the same product. Each of these alternative process chains may provide different advantages and disadvantages regarding the product's quality as well as the performance of the production system. Hence, it is necessary to design, configure and evaluate several process chains in order to determine the best solution. In order to facilitate the planning and configuration process, this article presents the application of the "Micro Process Planning and Analysis" (µ-ProPlAn) methodology as an additional layer of abstraction between the process planning and the configuration of process chains via Finite-Element Models (FEM). The methodology itself provides tools and methods to model, configure and evaluate process chains in micro forming [11]. While the process design relies on classical approaches, the configuration is performed by the application of so-called cause-effect networks, detailing the interrelationships between relevant process, machine, tool, and work piece parameters. These cause-effect networks can act as surrogates for numerical FEM simulations, thus increasing the planning speed. The aim is to provide approximate results with less computational effort during the planning stages.

Process Planning and Configuration in Micro Manufacturing
There exist only few approaches that support a joint planning of process chains and their technological and logistic configuration. In contrast, during the last years several articles focused on the configuration of specific processes (compare e.g. [12]). Most of these approaches rely on detailed models of the corresponding processes and are usually supported by very detailed physical models in the form of finite element simulations (e.g. [9], [13]). A different kind of approach found in the literature focuses on the use of historical data as templates for the configuration (e.g. [14]). Although both of these approaches allow a precise configuration of single processes, they are not directly applicable to process chains without conducting comprehensive studies of all involved processes. Even then, the interrelationships between these processes cannot be addressed directly. In addition, the construction of finite element simulations as well as the direct application of historical information requires a comprehensive understanding of the processes, their behaviors, and the physical backgrounds, which, in many cases, is unavailable due to size-effects or the novelty of processes.
The planning of process chains is generally conducted using methods like event-driven process chains, Unified Modelling Language, or simple flow charts. As these methods do not allow a configuration of processes, Denkena et al. proposed an approach that indirectly addresses this topic [15]. This approach uses the modelling concept for process chains [16]. Thereby, a process chain consists of different process elements, which in turn consist of operations. These operations connect to each other by so-called technological interfaces, describing sets of pre- or post-conditions for each operation. While these interfaces can be configured manually, Denkena et al. propose the use of physical, numerical or empirical models to estimate an operation's post-conditions based on its pre-conditions [15], [17]. Although this approach bridges the gap between process chain planning and configuration, the creation of these models requires very detailed insight into the processes as stated before. In addition, a single model can be unsuitable to capture all relevant interrelations between process, machine, tool, and work piece parameters, particularly due to the influence of size-effects.
Based on the literature review, it can be concluded that there exists no method enabling a joint planning and configuration of process chains for the micro domain, which provides the necessary level of detail and generality to cope with size-effects as well as with the continuous development of new technologies in this domain. While classic methods from the area of process planning lack the required level of detail to enable process configurations, methods for the configuration of (forming) processes usually come with comparably high demands on computational power and time, rendering them unsuitable for an application in process chain planning tasks.
One approach to achieve the required computational speed as well as the level of detail is the use of surrogate functions. Surrogates usually substitute computationally expensive simulations with an easier to compute function or model to increase the speed [18]. In general, two different approaches to construct a surrogate model can be observed in the literature. On the one hand, surrogates can be acquired by model reduction. On the other hand, surrogates can be created by learning an approximate model based on a small set of evaluations. In the context of finite-element simulations, both approaches can be found in the literature [19].
For complex FEM simulations, so-called low-fidelity models or physical surrogates can be constructed in various ways. Examples include focusing on relevant parts of the model, reducing the model's degree of detail (e.g. selecting coarser meshes), or replacing certain structures with adequate, faster-to-compute structures [19]. In this context, recent work focused on the combination (fusion) (e.g. [20]) or automatic deduction (e.g. [21]) of high- and low-fidelity models to achieve higher computational speeds while maintaining the required precision. While the use of low-fidelity models can achieve substantial reductions in the required computational time, the resulting simulations are still time consuming. For the task of designing and configuring process chains, several such simulations are required, first to determine the most promising sequence of operations or machines and second to evaluate different configurations for those. Consequently, the use of approximate models, so-called response models, is a more suitable approach for this task.
Response models usually try to approximate the simulation results based on selected properties. Therefore, a set of FEM simulations is conducted and the response model is then modelled or learned according to the simulation results. In this context, statistical methods from the Design of Experiments are usually applied to plan and execute the simulations. In the literature, there exist several studies on the relationship between problems and suitable methods to learn the response model (e.g. [22], [23], [24]). According to Goel et al. [25], methods commonly proposed in the literature come from the area of statistics (e.g. Kriging, polynomial Response Surfaces) or use methods from the field of Artificial Intelligence (e.g. Artificial Neural Networks or Radial Basis Functions) to learn the relation between the simulation's input settings and the simulation results (e.g. [26], [27]). While these models lack the descriptive properties of FEM simulations and thus can only be validated using key performance indicators common to regression techniques (see [24] for a summary of common measures), they can be calculated very quickly.
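To make the response-model idea concrete, the following minimal sketch fits a small polynomial response surface to a handful of invented sample points standing in for FEM results; none of the numbers reproduce the article's actual simulations. Once fitted, evaluating the surrogate is essentially instantaneous compared to a full simulation run:

```python
import numpy as np

# Hypothetical FEM samples: inputs are laser power P [W] and deflection
# velocity v [mm/s]; the response is a preform radius [mm]. All values
# are invented for illustration only.
X = np.array([[51.0, 60.0], [51.0, 84.0], [102.0, 60.0],
              [102.0, 84.0], [76.0, 72.0]])
y = np.array([0.17, 0.15, 0.21, 0.19, 0.18])

def features(X):
    # Response surface terms: 1, P, v, and an interaction term P*v
    P, v = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(P), P, v, P * v])

# Ordinary least-squares fit of the response surface coefficients
coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

def surrogate(P, v):
    # Cheap-to-evaluate stand-in for the expensive FEM run
    return float(features(np.array([[P, v]])) @ coef)

print(surrogate(80.0, 70.0))
```

The prediction for an unseen parameter combination is a single matrix-vector product, which is what makes such surrogates attractive for scanning large parameter ranges.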
In the context of process chain planning and configuration, response models can provide a substantial decrease in computational times, which is particularly important to evaluate different process chains and configurations. In order to structure the process of deriving surrogate models and to integrate them into a consistent framework for the planning, configuration and evaluation of process chains in micro manufacturing, the µ-ProPlAn methodology can be used.

Alternative upsetting process: Two-stage cold forming
In order to upset a certain length l0 of a rod with diameter d0 in macro scale, conventionally a multi-stage cold forming process is used. While the achievable upset ratio s = l0/d0 is already limited to approximately 2.1 in macro scale [28], this upset ratio decreases significantly in the micro range due to size effects.
Therefore, an alternative upsetting process based on laser rod end melting and a subsequent forming step has been developed. Here, the shape-balance effect enables the production of a material accumulation. As shown in Fig. 1 and described in [8], a laser beam is deflected laterally to a thin rod and the material melts because of the absorbed laser energy. Due to the shape-balance effect, i.e. surface tension predominates over gravitational force, the melt pool forms a sphere and moves upwards along with the laser beam while staying connected to the rod [29]. The thereby accumulated material retains its spherical shape even after the laser is switched off. After solidification, it is called a preform. Depending on the material mass as well as the accumulated volume, upset ratios of s > 200 can be reached [30]. The completely solidified preform is then formed into the requested final geometry during a subsequent cold forming step. Since the master forming step defines the shape, eccentricity, and volume of the preform, a thorough understanding of the process is mandatory to match the small tolerances requested by the industry. Therefore, a partial differential equation based model of the process coupling the Stefan problem [31] with the Navier-Stokes equations including a free capillary surface, cf. [32], has been established and a corresponding finite element method has been developed. This simulation allows for a numerical computation of suitable parameters and process results and is used, among other things, to analyze the energy balance in the work piece and the dynamics in the melt (e.g. [29], [33], [34]).
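The scale of the material accumulation can be illustrated with a simple volume-conservation estimate: assuming the molten rod section of length l0 solidifies into a perfect sphere and neglecting the density change on melting, the preform diameter follows from equating the cylinder and sphere volumes. The numbers below are illustrative choices, not measured values:

```python
import math

d0 = 0.2   # rod diameter [mm] (illustrative)
l0 = 1.0   # molten rod length [mm] (illustrative)

# Volume of the molten cylindrical section and the diameter of a
# sphere holding the same volume
V = math.pi / 4.0 * d0**2 * l0
D = (6.0 * V / math.pi) ** (1.0 / 3.0)

print(f"V = {V:.4f} mm^3, preform diameter D = {D:.3f} mm")  # D is roughly 0.39 mm
```

For d0 = 0.2 mm this yields a preform diameter of roughly 0.39 mm, about twice the rod diameter, which matches the order of magnitude of the preform diameters reported later in the article.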

µ-ProPlAn
In order to create surrogate models for the FEM simulation, the modelling methodology µ-ProPlAn is applied. In general, it covers all phases from the process and material flow planning to the configuration and evaluation of the processes and process chain models [10]. It focuses on an integrated planning of manufacturing, handling and quality inspection activities at different levels of detail, from process chains down to the level of cause-effect relations between single parameters. To this end, µ-ProPlAn applies so-called cause-effect networks. In contrast to holistic approaches, like the FEM simulations described in the state-of-the-art section or artificial neural networks, each network consists of a set of parameters and a set of cause-effect relationships, forming a directed graph of sub-models. The set of parameters consists of all technical and logistic characteristics that are relevant to describe the influence on the production process. In the case of work pieces, these are e.g. costs per unit, material properties or geometrical characteristics. As for machines and devices, these parameters include velocities, forces or other characteristics that can be selected, calculated or measured. The cause-effect networks are constructed hierarchically. Each material flow object (work pieces, machines, tools, workers, etc.) holds its own cause-effect network, or at least a set of parameters. When combining these single elements to operations, process elements or process chains, higher-level cause-effect networks are created by introducing additional relationships between parameters of the networks or by connecting them through previously specified process interfaces.
The creation of cause-effect networks is divided into two steps: the qualitative modelling and the quantification. The qualitative model is designed by collecting all relevant parameters and denoting their influences on each other. The second step concerns the quantification of the cause-effect networks. The objective is to enable the propagation of different configurations throughout all connected networks, for instance to estimate the characteristics of a product for different materials or machining strategies. Through quantifying the cause-effect relationships, it is possible to estimate the results of parameter changes for all connected parameters along complete process chains. In contrast to holistic approaches like artificial neural networks, cause-effect networks use a different model for each parameter instead of a single, large model for the overall process. As a result, cause-effect networks are highly customizable, as each parameter can be described using different formulas or model types.
In the case of well-known relations, µ-ProPlAn allows the direct input of mathematical equations for cause-effect relations. Thereby, the network calculates the value of a parameter directly, based on its input parameters' values. For example, a continuous manufacturing process generally has a duration according to the total length of the work piece divided by its feed velocity. More complex but well-established cause-effect relations can be included from the literature, e.g. the calculation of the static friction between tools and work pieces based on their surface roughnesses. Nevertheless, in micro manufacturing some parameters can have a significant impact on the process chain, which can be neglected in macro manufacturing. In addition, size effects may induce a different behaviour than expected in macro forming. As a result, it is often impossible to describe all parameters and cause-effect relations directly. In such cases, µ-ProPlAn provides several methods to derive or learn prediction models from experimental or production data for the respective parameters. Therefore, the software prototype offers a variety of regression techniques from the area of Artificial Intelligence (e.g. Support-Vector Machines or Artificial Neural Networks) as well as statistical methods (e.g. least-squares linear or polynomial regressions as well as locally weighted linear regressions). In practice, the application of locally weighted linear regression models has shown a high precision in estimating unknown or hard-to-describe cause-effect relations (see e.g. [10], [35]). After quantification, each parameter holds a model (mathematical or empirical) that allows estimating its value based on the remaining values in the cause-effect network.
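As a minimal sketch of one of these techniques, the following implements a locally weighted linear regression for a single input parameter. The Gaussian kernel, the bandwidth tau, and the sample data are assumptions for illustration and do not mirror µ-ProPlAn's actual implementation:

```python
import numpy as np

def lwlr(x_query, X, y, tau=0.2):
    """Locally weighted linear regression evaluated at a single query point."""
    X1 = np.column_stack([np.ones(len(X)), X])          # design matrix with intercept
    w = np.exp(-(X - x_query) ** 2 / (2.0 * tau ** 2))  # Gaussian weights around query
    W = np.diag(w)
    # Weighted least squares: theta = (X1^T W X1)^-1 X1^T W y
    theta = np.linalg.solve(X1.T @ W @ X1, X1.T @ W @ y)
    return theta[0] + theta[1] * x_query

# Illustrative nonlinear cause-effect relation sampled on a grid
X = np.linspace(0.0, 3.0, 30)
y = np.sin(X)
print(lwlr(1.5, X, y))  # close to sin(1.5), up to a small smoothing bias
```

Because a separate line is fitted around each query point, such models follow nonlinear relations without requiring a global functional form, which matches the hard-to-describe relations mentioned above.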
As a result, these networks enable a fast and, depending on the selected models, precise evaluation of different process configurations (e.g. the use of different materials or different production velocities). They enable an assessment of the impacts of different choices on follow-up processes or the production system in general. As cause-effect networks and material flow elements are closely connected, µ-ProPlAn can directly reflect changes to the configuration within the material flow simulation and evaluate these configurations, e.g. regarding work-in-progress levels, lead times, or the products' estimated qualities.

Cause-Effect Networks as Surrogate for Laser Rod End Melting
While cause-effect networks constitute an option to achieve a suitable process configuration as part of the overall process chain design, their quantification requires a sufficiently extensive set of production or experimental data. Within a running production system, this dataset can be acquired relatively easily by relying on production and measurement data for already existing processes. For new machines, technologies or products, this information may not be available directly. As a result, it often becomes necessary to resort to physical or numerical models (e.g. finite element models) during the process chain design. These models can be used for the construction and during the quantification of cause-effect networks. In such cases, the cause-effect networks can act as surrogates during the process chain design, to achieve a fast configuration of alternative process chains. This section demonstrates the design and quantification of a cause-effect network for a laser based melting process, specifically designed for micro manufacturing.

Characteristics of generated preforms
As described before, the laser rod end melting process is the essential process step of the two-stage forming process. The geometry of the preforms depends primarily on the laser power, the deflection velocity, and the rod diameter. Brüning and Jahn have investigated the influence of the aforementioned parameters on different geometrical features in a series of articles: In [29], the specific molten volume is determined as a function of the thermal upset ratio for different rod diameters. It is shown that a decreasing laser intensity with an increasing thermal upset ratio leads to a constant specific molten volume for all rod diameters with appropriate laser powers.
In [33], the influence of the cold forming stage on the eccentricity is shown. The diameter of the preforms varied between 0.38 mm and 0.43 mm at a constant rod diameter d0 = 0.2 mm, processed with a laser power of 102 W, a deflection velocity of 72 mm/s and a variation in thermal upset length of 0.9 mm to 1. In [34], the influence of the keyhole formation on the eccentricity of the preform is investigated. It is shown that the preforms generated with a laser power of 51 W exhibit a higher eccentricity than the preforms generated with a laser power of 102 W, which leads to a keyhole formation. It is assumed that the heat-affected zone is smaller when using the keyhole configuration instead of Fresnel absorption. Furthermore, the melt front is oriented almost parallel to the keyhole. During Fresnel absorption, the heat is absorbed and reflected at the surface, which leads to an inclination of the melt front and therefore to a high eccentricity.

Experimental setup and characterization
In order to design and quantify a cause-effect network for the laser rod end melting process, a parameter study is performed experimentally and numerically. Using the parameters listed in Table 1, preforms for each set of parameters are generated and simulated. Afterwards, their diameter, volume, and eccentricity are measured.

Maximum radius:
The diameter of the preform is determined experimentally as the largest diameter of the preform orthogonal to the rod, as shown in Fig. 2a. The simulation determines the deviations between the surface area and the center of the preform and, for each quadrant, the maximum deviation lS, as shown in Fig. 2b. To account for possibly skewed preforms, the mean value of all maximum radii is determined and used as an estimate for the overall radius rp and hence the preform's diameter.

Fig. 2. Maximum diameter
Volume: While the volume of the preform can be identified easily within the finite element simulation by adding up the volumes of all elements in the corresponding subdomain, measuring the volume of the experimentally generated preforms is more complicated. For this purpose, an additional line is marked on the rod as shown in Fig. 3. To calculate the final accumulation length lF, equation (1) combines the theoretical accumulation length, the distance between the line and the preform, and the measurements 1..n of the real distance between the additional line and the preform, as shown in Fig. 3. Using this averaged accumulation length, the molten volume constituting the preform can be calculated with the equation for a cylinder volume, as shown in equation (2), using the diameter of the rod d0.

Fig. 3. Volume
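As a small sketch of the cylinder approximation in equation (2), the snippet below averages a few invented accumulation-length readings (standing in for the result of equation (1)) and converts them into a molten volume:

```python
import math

d0 = 0.2                              # rod diameter [mm]
lengths = [0.98, 1.01, 0.99, 1.02]    # invented accumulation-length readings [mm]

lF = sum(lengths) / len(lengths)      # averaged final accumulation length
V = math.pi / 4.0 * d0**2 * lF        # equation (2): cylinder volume of the molten rod

print(f"lF = {lF:.3f} mm, V = {V:.5f} mm^3")
```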
Eccentricity: Regarding the eccentricity, the distance Δx between the center of the rod and the center of the preform is determined for all generated and simulated preforms, as shown in Fig. 4. The center of the preform is determined experimentally by fitting a circle around the spherical shape. Since the mount of the rod is installed permanently, the initial rod position is constant and thus the distance can be measured with a microscope.

Results
Based on the parameters of the process and the FEM simulation, the cause-effect network is constructed using the laser power, the deflection velocity, and the rod diameter (radius) as inputs. The cause-effect network is evaluated in terms of its accuracy with respect to the FEM simulation results (quantification) and the experimental measurements (validation).

Quantification
Using a training set of 56 simulation runs, the cause-effect network learns a prediction model for each output parameter using a linear regression. Thereby, the cause-effect network estimates the resulting radius rp of the preform, the eccentricity e, the volume of the preform V, as well as the accumulation length lF of rod material that is molten during the process. For the estimation of the length, the energy per unit length (see equation 3) as well as the rod's diameter were selected as input parameters.
For the other dependent parameters rp, e, and V, the rod's diameter, the laser power and the deflection velocity were selected as input parameters (see Fig. 5). Linear regression was chosen to construct the prediction models for all parameters, as the resulting models already showed a high accuracy with respect to the training and evaluation datasets. Although more complex models, e.g. locally weighted linear regressions, achieve an even higher accuracy, linear regression models provide several advantages. For example, they can be used to weight the importance of the input parameters, or can be rearranged to calculate specific inputs for a desired result. Table 2 summarizes the evaluation of the prediction models. The first two columns present the Pearson product-moment correlation coefficient (PPMCC) as well as the Spearman rank correlation coefficient (SRCC) to assess linear and non-linear correlations between the prediction model and the training set. Values close to 1 represent a high correlation, i.e. the regression model closely follows the trends in the presented dataset, whereas values close to 0 mean that there is no correlation between the model and the dataset. The next column depicts the mean absolute error (MAE), i.e. the mean of the absolute residuals between the model's predictions and the provided dataset. Here, a low value represents a good approximation of the model in relation to the single values within the dataset. Nevertheless, as prediction models try to generalize and smooth the input data, a certain error is to be expected, depending on the training data's variance. The last column provides each parameter's value range to allow a better assessment of the error values. For example, an error of 1.0 is high if the value range only spans 0 to 1, whereas it is small if the value range lies between 0 and 100. The results of the quantification show very high correlation coefficients and comparably low errors.
Consequently, the cause-effect network provides a good approximation concerning the FEM-simulation results.

Validation
To evaluate the prediction model, the cause-effect network is presented with a dataset containing production and measurement data of 281 preforms generated using the settings described in Table 1.
For each dataset, the cause-effect network's input parameters, i.e. the rod diameter, the laser power and the deflection velocity, are set according to the dataset. Afterwards, the cause-effect network predicts the corresponding results. These predictions are then compared to the measurements noted within the evaluation dataset. Overall, the evaluation of the complete dataset took less than a second on a standard desktop computer (i7-3770k, 8 GB RAM, Java 8). The results of this evaluation in terms of correlation and error are presented in Table 3. Further details are given in the respective subsections. Each of these subsections provides a figure of the predictions and the expected results. In order to generate these figures, the prediction models were reduced in dimensionality. Originally, each model uses two or three parameters as input. To generate the figures, the model's response as well as the measured values are plotted over a single input (the energy per unit length for the radius, volume and length; the laser power for the eccentricity). As a result, the variance of the measurements in these figures is usually higher, as each mark represents several configurations. The same applies to the prediction models: while these usually represent a plane in 3D space or a hyperplane in 4D space, they are depicted only as a single line in these figures, neglecting the models' additional dimensionality. Consequently, the figures are only meant to provide a general impression of how the prediction models conform to the measurements. Their overall performance can be seen in Table 3; see Table 2 for more details.

Maximum radius
With regard to the maximum radius of the preform rp, Table 3 shows a high correlation between the cause-effect network and the evaluation dataset. Nevertheless, Fig. 6 shows that the prediction model increasingly underestimates the measured radius with an increasing energy per unit length. This effect was also observed with regard to the FEM simulation [36] and is caused by assuming the material parameters to be constant for each physical condition. The same is true for all parameters controlling heat dissipation mechanisms.

Eccentricity
Regarding the eccentricity, it turned out that, firstly, the eccentricity varies considerably due to the high dynamics of the melt during the process and, secondly, that finding a reliable and precise measuring method is rather complex, especially for rods with small diameters, because of handling and buckling effects. Therefore, experiments were only conducted for a rod diameter of d0 = 0.3 mm. Fig. 7 presents the comparison of the prediction with 13 measurements. As can be seen in the figure, the standard deviation of the measurements is still comparably large. Despite the low correlation coefficients given in Table 3, a general trend of decreasing eccentricity with increasing laser power can be observed in the experimental data and likewise in the cause-effect network's predictions.

Volume
In general, the results for the volume are comparable to the results concerning the maximum preform radius. Table 3 shows a high correlation between the cause-effect network and the evaluation dataset, whereby Fig. 8 depicts the same underestimation behavior as for the radius. As the melt pool forms an approximately spherical shape due to surface tension, it is expected that the radius of the spherical melt pool is comparable to the maximum radius of the preform. The underestimation is again explained by the constant physical parameters assumed in the simulations.

Accumulation length
In general, Table 3 shows a good but comparably low correlation between the prediction model and the measurements. The cause-effect network was only trained using data for rods with a diameter of d0 = 0.2 mm, whereas the evaluation included rods of all diameters. Here, the difference between the predictions and the measurements increases for larger diameters. In contrast, Fig. 9 depicts only the predictions and measurements for rods with a diameter of d0 = 0.2 mm. The figure shows that the trend of the measurements and the predictions is comparable, whereby the prediction has a slight offset of ~200 µm compared to the majority of measurements. Consequently, it can be assumed that the estimation quality would increase if training data for additional rod diameters were used.

Conclusion
In conclusion, it can be stated that the application of cause-effect networks as surrogates is a suitable way to decrease the computational time for predicting process results. The results demonstrate that a cause-effect network can be trained efficiently using only a small set of simulation data and achieve a very high prediction quality and generalization, sufficient to reflect the simulation. Depending on the complexity of the network and the chosen prediction models, an estimation can be acquired in well under a second, allowing a broad range of parametrizations to be evaluated. Thereby, the results show that the composition of the training data constitutes a major factor for the generalizability of the cause-effect network. In particular, the estimation of the accumulation length demonstrates that the prediction model can be applied comparably well for rods conforming to the training set, but generates insufficient predictions for rods with a larger diameter.
With regard to predicting real-world process data, it is known that the learned models depend strongly on the training data and on small deviations of the experimental results. This explains the effect of an increasing difference between prediction and experimental data for the preforms' final radius and volume when using higher energies. Despite this effect, it has been shown that cause-effect networks are an interesting alternative to provide an approximation of the final preform's characteristics.