Numerical prediction of steady state temperature based on transient measurements

We show how to use numerical analysis of short-time-range experimental data to predict the limit steady-state value of an investigated parameter. In this article the approach is applied to a specific, although typical, thermal problem: determining the average steady-state temperature of a heater in convective and radiative heat exchange with the environment. First, we describe a heat exchange experiment aimed at obtaining experimental temperature data in both the short and the long time range. Then we present a methodology for applying two methods, namely neural networks and least-squares approximations, to obtain predictions of the steady-state temperature values based on short-time experimental data. The aim of the study is to compare the predictions with each other and with the long-time experimental values, in order to determine the applicability range of the two methods.


Introduction
All physical systems evolve with time. However, if a system is in steady state, i.e., the dependence of its parameters on time is negligible, then the description of physical phenomena in the system simplifies considerably. This is why many material properties (in particular thermal ones), such as the heat transfer coefficient, the Péclet number, or the emissivity, are determined experimentally through measurements performed in steady state. In experimental practice, steady state is achieved as a result of prolonged maintenance of the system under constant external conditions. The time needed to reach steady state can be very long: for thermal systems, in order to achieve a steady-state temperature field, heating times of several hours or days may be required. Such long-lasting experiments are expensive and otherwise troublesome. The main motivation for this paper is an attempt to shorten such long-lasting experiments without a substantial loss in the accuracy of the final results. We compare two numerical procedures predicting the limit steady-state temperature values based on measurements performed in a much shorter time. The first method is based on artificial neural networks, while the second uses least-squares approximations.
The paper is organized as follows. We start with a description of the test rig providing the experimental temperature values to be numerically analyzed. Next, a short description of the neural networks used in the paper is given, followed by a brief review of the least-squares approximation method. The numerical analysis of the experimental data is preceded by a test application of the methods to artificial, computer-generated data. Once each method has passed this test, it is applied to the real data. Finally, the steady-state predictions are obtained and compared.

Obtaining experimental data
The test bench described below was designed to obtain several series of experimental temperature data in a time range long enough for the system to be considered as having reached the steady state. Initial portions of the data serve as input for our numerical procedures; their output, the limit predictions, is compared with the limit experimental data.

Test rig
The main part of our test bench is a heater: an aluminum cuboid of 3 x 3 x 4 cm containing a resistance wire. The heater is placed in a closed cubical chamber made of 5 mm thick aluminum plate, with an outside dimension of 15 cm, and is suspended approximately in the center of the chamber. The heater (5) with thermocouples inside the chamber (3) is shown in the right part of Fig. 1. The chamber is placed inside a desiccator (1) connected to a vacuum pump (2). The heater and the interior of the chamber are painted with black paint of known emissivity 0.97. The temperature is measured by 6 thermocouples (6), previously calibrated against each other. Three of them were placed on the surface of the heater, the remaining three on the aluminum chamber surface. The readings of each triple were averaged, which gave two streams of space-averaged values: one for the heater and one for the chamber. The voltage is maintained at the desired level by a voltage stabilizer (4).

Experimental data
The data from the test bench were saved to disk using in-house software coded in Delphi. The measurements were performed in a continuous mode using a Keithley 2000 multimeter. The temperature at the specified places of the station was recorded in the form of strings of voltage values and placed in files that could be read with Excel. Each experiment was performed in two stages. First, the heater was heated for some time at atmospheric pressure. Then the air was removed from the desiccator with the help of the vacuum pump, until the absolute pressure inside the desiccator reached about 300 Pa. The heat stream dissipated by the heater remained the same, but the lack of air inside the desiccator caused an increase in temperature on the tested surfaces, especially on the surface of the heater. The measurements were continued until the temperature stabilized again; the external conditions were therefore maintained at the set level for a long time. Fig. 2 presents a sample temperature distribution, in K, on the heater and on the surface of the closed chamber.
Time, on the abscissa axis, is presented as the number of the consecutive measuring point. A measuring point is the moment at which the meter measured the temperature and saved this information to a file. The average time between two consecutive measuring points is 10.88 s. The moment of switching on the vacuum pump, at about the 600th measuring point, divides the temperature graph into two parts. The first part represents heating under conditions in which free convection at atmospheric pressure dominated. The second part of the graph shows the temperature change during further heating of the heater under an absolute pressure in the desiccator at the level of 300 Pa. Under such conditions, heat exchange through radiation was dominant.

Neural networks

Artificial neural networks have found many applications; the first of them date back to the 1950s. The first networks were used to model very simple objects and to optimize the network itself. A breakthrough in the work on artificial networks was the development of the back-propagation algorithm in the 1980s, which was rediscovered three times. Networks are currently applied in a wide range of areas. For example, they are used in the regulation of technological processes to optimize regulators, including PID regulators, for complex objects such as boilers, heating nodes [1], [2], and gas turbines. Large objects are modeled using neural networks too: in [3], for example, the authors modeled a furnace, in [4] a power block, and in [5] a steam cooler. Neural networks are also used in predicting various parameters characterized by a high randomness of the measured values. In [6], [7] the authors successfully used neural networks to make long-term temperature and air-humidity predictions of relevance in housing.
Works using neural networks often describe the architecture of the network and the way of signal processing only to a very limited extent, and one can very rarely find a description of the procedures for obtaining an appropriate teaching base. Artificial neural networks are modeled on neuronal cells in the brain, in which the signal is delivered by means of several protrusions, the so-called dendrites, processed in the cell, and sent out through exactly one output, the so-called axon. A similar approach is used here. The input data are provided by the so-called input layer to the hidden layer [8]. The calculation results are read off in the so-called output layer. Our program uses the back-propagation algorithm. The accuracy of the returned results is influenced by the structure of the network (weights, architecture) and the number of hidden neurons [9], [10], [11]. The number of neurons is usually determined by trial and error. In this work we used a program written in R version 3.4.2 (x86_64-w64-mingw32/x64), which works with the nnet package containing the Fit Neural Network module [12]. The neural network used for the calculations has one hidden layer, consisting of 6 neurons.
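As an illustration, a single-hidden-layer network of this kind, trained by back-propagation, can be sketched in plain numpy. The layer size matches the 6 hidden neurons used above, but the synthetic heating-like curve, the learning rate, and the iteration count are illustrative assumptions, not the actual R/nnet configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "teaching base": a heating-like curve sampled on [0, 1]
t = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
y = 1.0 - np.exp(-3.0 * t)                # rises toward the limit 1.0

# one hidden layer of 6 tanh neurons, linear output (as in the paper)
W1 = rng.normal(scale=0.5, size=(1, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.5, size=(6, 1)); b2 = np.zeros(1)

lr = 0.3
for _ in range(40000):                    # full-batch back-propagation
    h = np.tanh(t @ W1 + b1)              # hidden activations
    out = h @ W2 + b2                     # network output
    err = out - y
    g_out = 2.0 * err / len(t)            # gradient of mean squared error
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2) # back-propagate through tanh
    gW1 = t.T @ g_h;   gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(t @ W1 + b1) @ W2 + b2 - y) ** 2))
```

After training, evaluating the network beyond the teaching range plays the role of the "testing vector" described below: the saturating tanh units make the output level off, which is what allows a steady-state estimate from transient data.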

The least squares approximation
The method of least squares is a standard approach to data fitting. The best fit in the least-squares sense minimizes the sum of squared residuals, i.e., of the differences between the observed (measured) values and the values provided by a model. The simplest choice of model is a linear combination of fixed, properly chosen functions; the fit is accomplished by finding the coefficients of the combination that satisfy the least-squares paradigm, which leads to a linear system of equations. Our data come from a physical process of heat exchange between a body and its environment, presumably satisfying Newton's law of cooling. If the body is treated as an internal heat source of constant power Q and heat capacity C, then the heat balance

C dT/dt = Q - h (T - T_env)

leads to a solution approaching its limit exponentially, with the rate h/C. As h and C are unknown, we represent the solution in the form

T(t) = A + D exp(-Bt)

and seek the coefficients according to the least-squares principle. The model is linear in A and D but non-linear in B. The coefficients are found by equating to zero the three partial derivatives of the error function (5),

E(A, B, D) = sum_i [A + D exp(-B t_i) - T_i]^2,

whose minimum is sought. This leads to the system of equations (6):

dE/dA = 0, dE/dB = 0, dE/dD = 0.

The first two equations enable one to express A and D explicitly in terms of B. Substitution into the third equation gives a single non-linear equation in B, written with a shorthand notation for the sums over the data points. This highly non-linear equation can be solved numerically and provides the value of B, which in turn lets us find the values of D and A. The long-term prediction of the temperature is thus the limit value A. We also use a simplified approach, in which the approximating function has the form

T(t) = D (1 - exp(-Bt)).

This corresponds to imposing A = -D in the three-parameter form and is applicable if the data have been shifted to satisfy T(0) = 0.

Modelling based on neural network
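The reduction described above (A and D obtained by linear least squares for each fixed B, leaving a one-dimensional problem in B) can be sketched in Python. The grid-search solver for B and the synthetic parameter values below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fit_exponential(t, T, b_grid=None):
    """Fit T(t) ~ A + D*exp(-B*t) by least squares.

    For each trial B the model is linear in A and D, so these two are
    found by linear least squares; the best B is picked from a coarse
    grid and then refined locally. Returns (A, B, D); A is the
    predicted steady-state limit.
    """
    if b_grid is None:
        b_grid = np.logspace(-4, 0, 400)          # trial decay rates
    best = (np.inf, None)
    for B in b_grid:
        X = np.column_stack([np.ones_like(t), np.exp(-B * t)])
        coef, *_ = np.linalg.lstsq(X, T, rcond=None)
        sse = np.sum((X @ coef - T) ** 2)
        if sse < best[0]:
            best = (sse, (coef[0], B, coef[1]))
    _, (A, B, D) = best
    for Bf in np.linspace(0.5 * B, 2.0 * B, 400):  # local refinement
        X = np.column_stack([np.ones_like(t), np.exp(-Bf * t)])
        coef, *_ = np.linalg.lstsq(X, T, rcond=None)
        sse = np.sum((X @ coef - T) ** 2)
        if sse < best[0]:
            best = (sse, (coef[0], Bf, coef[1]))
    return best[1]

# synthetic check with assumed values: A = 391.9 K, D = -40 K, B = 0.004 1/s
t = np.arange(0.0, 6000.0, 10.0)
T = 391.9 - 40.0 * np.exp(-0.004 * t)
A, B, D = fit_exponential(t, T)
```

A grid search over B replaces the numerical root-finding for the single non-linear equation mentioned above; both approaches exploit the same partial linearity of the model.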

Artificial data
In the first step, we checked the operation of the network on data generated from an analytic function y with an explicitly defined form, whose limit value is known and can therefore be treated as a fixed reference value. The method of using the network to predict y in the steady state consisted of two stages. In the first one, the network was taught on data generated from a specific subset of the domain of the function y; this dataset is hereinafter referred to as the teaching base. A teaching base may contain a different amount of data, which is referred to as a different length of the teaching base. In this paper all teaching bases have a common origin, here the point (1, 297). The first teaching base, marked as i = 1, contained data from the first 100 cells. After the teaching process, the program generated the so-called testing vector, consisting of the last 50 predicted y-values, from which the average value y_i was calculated, different for each teaching base. Having y and y_i, three error functions were constructed, where i determines the length of the teaching base. Table 1 contains the values of the error functions for the analytically constructed function y, for the different teaching bases.
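The explicit forms of the three error functions are not restated above; purely as an illustration, a plausible trio (absolute error, relative error, and percentage error of the mean prediction against the known limit) could look as follows. These forms are assumptions, not the paper's definitions.

```python
def error_functions(y_true, y_pred_mean):
    """Three example error measures comparing the known limit y_true
    with the mean prediction y_pred_mean of one teaching base.
    Illustrative forms only; the paper's exact definitions differ or
    are not reproduced here."""
    e_abs = abs(y_true - y_pred_mean)           # absolute error
    e_rel = e_abs / abs(y_true)                 # relative error
    e_pct = 100.0 * e_abs / abs(y_pred_mean)    # percentage vs prediction
    return e_abs, e_rel, e_pct

# e.g. known limit 297.0 K, mean prediction 296.4 K for one teaching base
errs = error_functions(297.0, 296.4)
```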

Radiative data
Neural networks were then used on the real experimental data to determine the average temperature of the heater in the steady state, under conditions of heat exchange dominated by radiation. The evolution of the heater temperature is shown in the right part of Fig. 2. As in the analytical test, the values of one of the error functions were definitely smaller than the values of the other error functions. The data used in the two cases, i.e., the analytical and the experimental one, differed in the amplitude of the oscillatory distortions and in the place of their occurrence. We also tested the methodology using bases of equal length placed at different positions, but the results were very divergent and did not allow us to draw any conclusions regarding the steady-state temperature. Research in this area will be continued.

Mixed convective-radiative data
The artificial neural network was also used to determine the temperature value in the distant future, which we consider as the steady-state value. For this purpose we used the data from the first part of the experiment in Fig. 2, in which convective heat exchange was dominating. As before, the teaching bases were constructed as multiples of data from 100 measuring points with a common beginning. To determine the average temperature on the heater, the equations of the error functions had to be transformed. If the terms G_i are negligible, the average temperature at distant time can be determined as T_g, where T_gi is the average temperature on the heater provided by the network for the different teaching bases, as in Table 3. The value T_g3 returned by the network is much higher than the others. This is probably due to the larger change in the slope of the experimental curve and the specific placement of the end of that teaching base. Therefore, this value was excluded from further considerations. After excluding T_g3 from Table 3, the values of all the error functions were reduced, and the corresponding quantities were recalculated. Note that the linear trend coefficient during the experiment varied from 0.00015 to 0.00010 K/s. The last value corresponds to 0.36 K per hour, and one can expect that a comparable rise of the temperature would have taken place if the experiment had been prolonged. Therefore one can suspect that the prediction was underestimated. In order to verify this suspicion, our prediction method was applied to several data ranges forming initial segments of the full range. The predictions of the limit values of T obtained for initial segments containing multiples of 50 experimental data points are shown in the first two columns of Table 4. For example, the prediction based on the first 200 data points (out of the whole set of 580) was 4.02 K.
This is certainly an under-estimation, since that value of the temperature difference had actually been reached already at the 470th measurement. Taking the final segments instead of the initial ones as the approximation basis, we get the predictions shown in the last two columns of Table 4. The final segment beginning at 1 is the full range and gives the prediction 4.15 K, as before. Shorter final segments give higher predictions, which confirms our previous observation that the temperature grows faster there than would follow from the theoretical expectation based on Newton's law.

Pure radiative data
The second set of experimental data was obtained for purely radiative heat exchange. The experiment lasted longer, and the trend-line coefficient reached 0.00001 K/s, ten times smaller than in the first experiment. This corresponds to a growth of 0.036 K per hour and lets us consider the process to be stabilized. We take the steady-state (limit) value to be 391.88 K, the average of the last 100 experimental values; averaging was applied in order to eliminate oscillations. First we note that the approximation by a function D(1 - exp(-Bt)) over the full range seems to be slightly better than that for convection, as the standard deviation is now 0.016 and the average error 0.013, compared with 0.023 and 0.019 for the previous data set. It is crucial to verify the quality of the approximation based on initial segments of different length. The relevant information is collected in Table 5. The length of the segment is given in the first column, followed by the coefficients B and D of the approximating function. It turns out that the forecasts for long intervals differ from the previous ones only at the level of millikelvins. For shorter intervals the differences are bigger, reaching 0.03 K, always in favor of the second approximation form.
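The simplified two-parameter fit can be sketched in the same way as the full model: for each trial B the optimal D has a closed form, so only B requires a search. The data, parameter values, and grid below are synthetic assumptions for illustration, not the experimental series from Table 5.

```python
import numpy as np

def fit_shifted(t, T, b_grid=np.logspace(-4, -1, 600)):
    """Least-squares fit of T(t) ~ D*(1 - exp(-B*t)) to shifted data
    with T(0) = 0. For each trial B the optimal D is the closed-form
    solution of a one-variable linear least-squares problem."""
    best = (np.inf, 0.0, 0.0)
    for B in b_grid:
        x = 1.0 - np.exp(-B * t)
        D = float(x @ T / (x @ x))        # closed-form optimum for fixed B
        sse = float(np.sum((D * x - T) ** 2))
        if sse < best[0]:
            best = (sse, B, D)
    return best[1], best[2]               # (B, D); D is the predicted limit

# synthetic shifted data: assumed limit 4.15 K, B = 0.003 1/s
t = np.arange(0.0, 6000.0, 10.0)
T = 4.15 * (1.0 - np.exp(-0.003 * t))

# fit initial segments of growing length, as in the segment study above
for n in (100, 300, len(t)):
    B, D = fit_shifted(t[:n], T[:n])
```

On clean data every segment recovers the limit; on real data, comparing D across segment lengths shows how quickly the forecast stabilizes.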

Conclusions
Sample temperature distributions for the mixed convective-radiative and the almost purely radiative heat exchange processes have been obtained experimentally in order to provide the data needed for the analysis performed in the paper. The steady-state temperature, i.e., its value at distant time, for the chosen distribution has been obtained in two ways: one uses an artificial neural network, and the other is based on the least-squares algorithm. Each method used initial data segments of different length and provided an independent forecast of the limit steady-state temperature value. For long initial segments the methods gave temperature forecasts highly compatible with each other, as well as with the experimental steady-state temperature value. As expected, for shorter intervals the forecasts were less compatible. In any case, the application of the approximation makes it possible to obtain a reliable forecast of the steady-state limit temperature with an accuracy of approximately 0.03 K based on an experiment lasting 3-4 hours, while reaching the same limit temperature experimentally required about 8-10 hours. Shortening the experiment time allows one to reduce the costs and other inconveniences connected with prolonged experiments.
The predictions of the steady-state temperature obtained by least squares are less prone to input distortions than those from neural networks. These issues are the subject of further work.