Neural Network Prediction
of Baseline Values for Centrifugal Chiller
Fault Detection and Diagnosis
by
Paul Riemer
June 20, 2000
Semester Project
Introduction to Artificial Neural Networks and Fuzzy Systems
ECE/CS/ME 539
Prof. Y.H. Hu
Introduction
One of the most interesting research areas for neural networks, expert systems, and fuzzy logic is their application to building conditioning systems, often referred to as the heating, ventilating, and air conditioning (HVAC) industry, in terms of both control and fault detection and diagnosis (FDD). My graduate research is in this vein and hence is the area of application for this semester project.
Specifically, I am refining and expanding FDD methodologies for centrifugal chillers as a continuation of Ian McIntosh's Ph.D. thesis project. The goal is to be able to identify faulty behavior to the extent of knowing which components are not performing optimally, using currently monitored quantities. Air conditioning systems are prime candidates for advanced control and FDD techniques for several reasons, including the number of systems in operation, their long operating hours, their high energy demand and usage, the health and comfort of the building occupants at stake, and their actual and potential environmental effects (energy usage and refrigerant issues).
Chillers
As the workhorse of large-scale commercial air conditioning, a chiller cools water to be piped around the building to air handling units (AHUs). In the AHUs, the air temperature and humidity are decreased as the air passes over the coiled water pipes. Chillers can be designed to utilize one of several mechanisms, but the vast majority in the US use a vapor compression cycle. A centrifugal, vapor compression chiller is represented in Figure 1 below:
The dotted line denotes the physical boundary of the chiller. The blue lines denote water and the maroon line, R-22 refrigerant. At state 1, the refrigerant is a low-pressure vapor. As it passes through the compressor, it becomes a high-pressure vapor (2), which then becomes liquid (3) in the condenser as it rejects heat to the cooling tower water loop. Through the expansion device, the liquid pressure drops (4), and the refrigerant then boils back to vapor (1) in the evaporator as it absorbs heat from the AHU water loop. The condenser and evaporator are both large shell-and-tube heat exchangers.
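To make these heat flows concrete, the water-side energy balances can be computed directly from the loop flow rates and temperatures. The following Matlab fragment is only an illustrative sketch with made-up numbers, assuming a constant specific heat for water; it is not part of the EES data reduction code:
% Illustrative water-side energy balances (made-up numbers, constant cp):
cp     = 4.19;                        % kJ/(kg-K), specific heat of water
mdot_e = 60;  mdot_c = 75;            % kg/s, chilled / cooling water flows
Tewr   = 12;  Tews   = 7;             % C, evaporator water return / supply
Tcwr   = 34;  Tcws   = 29;            % C, condenser water return / supply
Qevap  = mdot_e*cp*(Tewr - Tews);     % kW absorbed from the AHU loop
Qcond  = mdot_c*cp*(Tcwr - Tcws);     % kW rejected to the tower loop
Power  = Qcond - Qevap;               % kW, steady-state energy balance
COP    = Qevap/Power                  % coefficient of performance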
Fault Detection and Diagnosis
The existence of faults is determined through the comparison of characteristic quantities (CQ's) that represent the actual operation of the equipment with some baseline quantities that have been deemed to represent acceptable operation. With some additional knowledge (from experiment, a model, or other sources), the pattern of which CQ's vary, and the amount and direction in which they vary, can be interpreted to identify faulty operating conditions and, hopefully, to determine which components are involved.
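As a minimal sketch of this comparison step (the values and tolerances below are hypothetical, not the criteria from the actual FDD methodology):
% Hypothetical comparison of actual CQ's against baseline CQ's:
actual   = [1850  3.9  2.6];      % e.g. QEVAP, APPREVAP, APPRCOND (made up)
baseline = [1900  3.5  2.0];      % values deemed to represent acceptable operation
tol      = [ 100  0.3  0.3];      % per-CQ tolerances, set from experience
residual = actual - baseline;     % amount and direction of variation
faulty   = abs(residual) > tol    % flag pattern -> candidate components
% here both approach temperatures would be flagged while QEVAP would not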
McIntosh utilized 11 CQ's for centrifugal chillers, which are computed from 10 typically monitored quantities (MQ's) using a data reduction program in EES. Chillers are designed and operated to meet the cooling and dehumidification loads of a building. Obviously, these loads change significantly over a day and during the season, and the equipment must respond. The operation of the chiller is also affected by the operation of the water pumps and the cooling tower. Because the operation of the chiller is so dynamic, the acceptable CQ's are not constant but vary with the conditions. The conditions the chiller operates in can be defined by five quantities, referred to from here on as the forcing inputs. These five forcing inputs are a subset of the 10 MQ's. The 11 baseline CQ's are each actually unknown functions of these five forcing inputs. (Technically, some are known, but I did not utilize that knowledge in this project.)
All of these quantities are tabulated below with evaporator and condenser abbreviated as E. and C. respectively:
Table 1: Quantity Summary
Forcing Inputs (5) / Monitored Quantities (10) / Characteristic Quantities (11) / CQ Abbr.
C. Mass Flow Rate / C. Mass Flow Rate / E. Heat Transfer Rate / QEVAP
E. Mass Flow Rate / E. Mass Flow Rate / Chilled Water Temp Difference / DTCHW
E. Water Supply Temp / E. Water Supply Temp / E. Approach / APPREVAP
E. Water Return Temp / E. Water Return Temp / E. Conductance Area Product / UAEVAP
C. Water Supply Temp / C. Water Supply Temp / C. Heat Transfer Rate / QCOND
- / C. Saturation Temp / C. Water Temp Difference / DTCW
- / E. Saturation Temp / C. Approach / APPRCOND
- / Compressor Exit Temp / C. Conductance Area Product / UACOND
- / C. Water Return Temp / Compressor Isentropic Efficiency / NISEN
- / Power / Motor Efficiency / NMOTOR
- / - / Coefficient of Performance / COP
Problem Description
Through my research project, I have obtained an expansive data set on four "identical" centrifugal chillers. The data contains the 10 monitored quantities at one-minute intervals for an entire cooling season. My original intent for this course project was to choose one of the chillers and use a neural network to predict the 11 baseline CQ's from the 5 forcing inputs for a cooling season. A portion of the data at the beginning of the cooling season would be treated as fault-free or acceptable and used as the training and testing data sets. Prediction of the baseline CQ's would be performed on the remainder of the cooling season, which would then be available for comparison to the actual CQ's to perform FDD.
However, as I investigated further, I realized that an alternative approach would utilize a neural network to predict the other 5 MQ's from the 5 forcing inputs. This network configuration (Approach 1) would provide a set of 10 fault-free monitored quantities, which could then be run through the EES data reduction code just like the actual monitored quantities, yielding both actual and baseline CQ's. The prime advantage of this revised plan is that fewer outputs should lead to quicker convergence and higher accuracy of the network. It does add a step in using EES twice rather than once, but the EES code is quite quick and requires minimal effort. The EES steps could be avoided at some later point if the fault criteria can be extrapolated from the CQ patterns to the MQ patterns, but that work is outside this course project.
Running with this revised approach, I began coding. I quickly decided that an interesting comparison could be made between the previously described Approach 1 (using one network with the 5 other MQ's as outputs) and a set of 5 networks, each with one of the other 5 MQ's as its output (Approach 2). This second approach would again utilize the EES data reduction code to determine the baseline CQ's from the predicted fault-free MQ's.
After completing Approaches 1 and 2, I decided that a third approach, with the originally intended single-network configuration predicting the 11 CQ's from the five forcing inputs, would still be interesting for comparative purposes. For summary, Table 2 lists the approaches:
Table 2: Network Configurations Summary
Approach # / M (Network Inputs) / N (Network Outputs) / Run #'s
1 / 5 forcing inputs / 5 other MQ's (all at once) / 34
2 / " / 5 other MQ's (one per network) / 51-55
3 / " / 11 CQ's / 74
Work Completed
The first step was to harvest the data and divide it into training/testing and prediction data sets. Through talking with McIntosh and examining the operating periods of the chiller, I decided to use the April data as the training and testing set. All of the data points collected while the chiller was off were trimmed. Even after this, using the one-minute-interval data would have been overwhelming for a first attempt, so the data files were further reduced to about 750 points per month. Microsoft Excel was used to perform these manipulations, leaving each month as a separate workbook (except September through November, which were combined). The data was then written into 21-column text files in which the first 5 columns are the forcing inputs, followed by the remaining 5 MQ's, and finally the 11 actual CQ's output from the EES code.
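The layout of one of these files can be illustrated with a few lines of Matlab (a sketch only, assuming a .txt extension; this is not an excerpt from the M-files described below):
% Sketch of splitting a 21-column data file into its three blocks:
data    = load('april.txt');      % assumed file name for the April data
forcing = data(:, 1:5);           % 5 forcing inputs
otherMQ = data(:, 6:10);          % remaining 5 monitored quantities
actCQ   = data(:, 11:21);         % 11 actual CQ's from the EES code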
With the data in place, the main coding began in Matlab. I wrote two M-files: trainandtest.m and predict.m. These files utilize the neural network toolbox and some of the coding techniques from the course. The only outside code included is Prof. Hu's randomizing function. Both programs are interactive and rather flexible.
Trainandtest.m begins by prompting the user for an input file. It then prompts the user to establish which columns represent the feature space (5 forcing inputs) and the target space (other 5 MQ's or 11 CQ's). The data set is randomized and split into training and testing sets. Next, the construction of the neural net begins using the "newff" command in the neural network toolbox to create a new feed-forward network. This is simply a multi-layer perceptron configuration, which allows the user to determine the number of hidden nodes, the level of interconnectivity, the activation functions, and much more. The user must input the number of hidden nodes, but the other parameters have default values. The train command, again from the toolbox, is applied next. This command also has several modifiable parameters, including the epoch number, the weight adjustment algorithm, and the convergence criteria, which the user can modify if desired; again, defaults exist. The train command takes only the network and the training feature and target dimensions as inputs. After the training, the code plots the network-predicted values versus the provided target values for both the training and testing data sets. The user may then enter a name for the trained network and some related variables to be saved for later prediction.
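Stripped of the interactive prompts, the core toolbox calls reduce to something like the following sketch (variable names are illustrative; the functions shown are the settings used in all runs, as noted later):
% Sketch of the heart of trainandtest.m (2000-era toolbox syntax):
P = ftrain';  T = ttrain';                  % one column per training sample
net = newff(minmax(P), [8 size(T,1)], ...   % 8 hidden nodes, N output nodes
      {'purelin','purelin'}, 'trainbfg', 'learngdm', 'mse');
net.trainParam.epochs = 100;                % other parameters left at defaults
net = train(net, P, T);                     % BFGS quasi-Newton training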
Predict.m is the M-file designed for performing predictions. This file can be run either immediately after trainandtest.m or at any time a network (named "net") is loaded within the Matlab session. The first portion of the code prompts the user for the data file that prediction should be performed with. The code then arranges this data appropriately for the loaded neural network. The actual prediction is done using "sim", a third command from the toolbox. This command takes the loaded network and feature dimensions as inputs. The user can then specify a file where the predicted values should be written. Finally, for each network output, the predicted values are plotted against the actual values, and then both are plotted against time.
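Stripped down similarly, the prediction step is only a few lines (again a sketch, with file names assuming a .txt extension for illustration):
% Sketch of the prediction step in predict.m:
newdata = load('july.txt');       % assumed file name for the July data
Pnew    = newdata(:, 1:5)';       % forcing inputs, one column per sample
Ypred   = sim(net, Pnew);         % baseline values from the trained network
Yout    = Ypred';                 % back to one row per sample
save july55.txt Yout -ascii       % write for later comparison against actuals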
After these two M-files were written, they were utilized with various configurations of the network and training parameters. However, the April data was always used for trainandtest.m. In all configurations, the activation function utilized for all nodes was "purelin". The training and learning functions were "trainbfg" and "learngdm" respectively, and the performance function was "mse", denoting mean square error. The following Matlab scripts demonstrate runs of both M-files:
» trainandtest
The files in this directory are:
. run17.mat run32.mat run41.mat run49.mat
.. run18.mat run33.mat run41all.mat run49all.mat
april run19.mat run34.mat run42.mat run50.mat
august run20.mat run35.mat run42all.mat run50all.mat
july run21.mat run35all.mat run43.mat run51.mat
june run22.mat run36.mat run44.mat run51all.mat
log.xls run23.mat run36all.mat run44all.mat run52.mat
predict.m run24.mat run37.mat run45.mat run52all.mat
randomize.m run25.mat run37all.mat run45all.mat run53.mat
run11.mat run26.mat run38.mat run46.mat run53all.mat
run12.mat run27.mat run38all.mat run46all.mat run54.mat
run13.mat run28.mat run39.mat run47.mat run54all.mat
run14.mat run29.mat run39all.mat run47all.mat sepnov
run15.mat run30.mat run40.mat run48.mat trainandtest.m
run16.mat run31.mat run40all.mat run48all.mat
Enter the input file for train and testing in single quotes without the file extension: 'april'
There are 864 samples. And the first one is:
1.0e+006 *
Columns 1 through 7
0.0048 0.0000 0.0000 0.0077 0.0001 0.0001 0.0001
Columns 8 through 14
0.0000 0.0001 0.0009 0.0011 2.0700 0.0000 0.0000
Columns 15 through 21
0.0013 2.4800 0.0000 0.0000 0.0000 0.0000 0.0000
Randomizing the input file
This code assumes all feature dimensions are in the first (left most) columns of the matrix.
Feature(input) dimensions + Possible Target (output) dimensions = 21
Please enter the number of feature dimensions : 5
This leaves a maximum of 16 target dimensions.
Type 1 to select the first five.0
Using the column numbers (1-16) as identifiers,
Please enter the first target dimension:5
Please enter the next additional target dimension or 0 if done: 0
To verify, the first 5 samples of your chosen target dimension(s):
761.6000
734.5000
764.3000
701.3000
764.7000
Please enter the percentage of the input file to use as the training data set(1-99): 75
This corresponds to 648 training samples and 216 testing samples.
Utilizing a Feed Forward Neural Network
Please enter the number of nodes for the hidden layer: 8
Type 1 to configure the neural network construction.1
Please enter the first layer activation function in single quotes (Default = purelin):
Please enter the second layer activation function in single quotes (Default = purelin):
Please enter the training function in single quotes (Default = trainbfg):
Please enter the learning function in single quotes (Default = learngdm):
Please enter the performance function in single quotes (Default = mse):
Type 1 to configure the training parameters.1
net.trainParam.epochs (Default=100)=
net.trainParam.alpha (Default=0.001)=
net.trainParam.beta (Default=0.100)=
net.trainParam.delta (Default=0.01)=
net.trainParam.gama (Default=0.1)=
TRAINBFG-srchbac, Epoch 0/100, MSE 1.63106e+008/0, Gradient 5.07807e+008/1e-006
TRAINBFG-srchbac, Epoch 25/100, MSE 5665.79/0, Gradient 5.89374e+006/1e-006
TRAINBFG-srchbac, Epoch 50/100, MSE 1254.24/0, Gradient 6.07927e-006/1e-006
TRAINBFG-srchbac, Epoch 52/100, MSE 1254.24/0, Gradient 4.56241e-007/1e-006
TRAINBFG, Minimum gradient reached, performance goal was not met.
Prediction using training and testing sets completed. Plots generated.
To save this network, type the desired name in single quotes without the file extension: 'run55'
If you would like to perform additional predictions on a specified data set, type "predict".
»
Figure 2: Trainandtest.m Plot 1
Figure 3: Trainandtest.m Plot 2
» predict
The files in this directory are:
. june51.txt run24.mat run40.mat run51.mat
.. june52.txt run25.mat run40all.mat run51all.mat
april june53.txt run26.mat run41.mat run52.mat
april51.txt june54.txt run27.mat run41all.mat run52all.mat
april52.txt june55.txt run28.mat run42.mat run53.mat
april53.txt log.xls run29.mat run42all.mat run53all.mat
april54.txt predict.m run30.mat run43.mat run54.mat
april55.txt randomize.m run31.mat run44.mat run54all.mat
august run11.mat run32.mat run44all.mat run55.mat
august51.txt run12.mat run33.mat run45.mat run55all.mat
august52.txt run13.mat run34.mat run45all.mat sepnov
august53.txt run14.mat run35.mat run46.mat sepnov51.txt
august54.txt run15.mat run35all.mat run46all.mat sepnov52.txt
august55.txt run16.mat run36.mat run47.mat sepnov53.txt
july run17.mat run36all.mat run47all.mat sepnov54.txt
july51.txt run18.mat run37.mat run48.mat sepnov55.txt
july52.txt run19.mat run37all.mat run48all.mat trainandtest.m
july53.txt run20.mat run38.mat run49.mat
july54.txt run21.mat run38all.mat run49all.mat
july55.txt run22.mat run39.mat run50.mat
june run23.mat run39all.mat run50all.mat
Enter the data set for prediction in single quotes without the file extension: 'july'
There are 665 samples.
Using the first 5 columns as the feature dimensions
and of the 16 possible output dimensions, predicting the following:
5
Please input desired output file name in single quotes(i.e. "output.txt"): 'july55.txt'
File written.
»
Figure 4: Predict.m Plot 1
Results
The above scripts correspond to run number 55 of the training and testing configurations. Table 4 summarizes all of the runs attempted. The "Scale" column refers to the use of "aprilscaled" instead of "april". In "aprilscaled", UAEVAP and UACOND were scaled down by a factor of 10^4, and NISEN and NMOTOR were scaled up by a factor of 10^2. This scaling was done after Matlab had matrix manipulation errors due to the large size (~10^6) of the UA terms, and the minimum gradient was prematurely reached due to the small size (<1) of the efficiency terms. The "H" column is the number of nodes in the hidden layer and "N", the number of output nodes. The output nodes chosen are listed in "Outputs" and detailed in the following table:
Table 3: Output Summary
# / Code / Description / Classification
1 / TCWR / C. Water Return Temp / Non-forcing-input MQ
2 / TCOND / C. Saturation Temp / "
3 / TEVAP / E. Saturation Temp / "
4 / T2 / Compressor Exit Temp / "
5 / Power / Electric Power Draw / "
6 / QEVAP / E. Heat Transfer Rate / CQ
7 / UAEVAP / E. Conductance Area Product / "
8 / APPREVAP / E. Approach / "
9 / DTCHW / Chilled Water Temp Difference / "
10 / QCOND / C. Heat Transfer Rate / "
11 / UACOND / C. Conductance Area Product / "
12 / APPRCOND / C. Approach / "
13 / DTCW / C. Water Temp Difference / "
14 / NISEN / Compressor Isentropic Efficiency / "
15 / NMOTOR / Motor Efficiency / "
16 / COP / Coefficient of Performance / "
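Before returning to Table 4, note that "aprilscaled" amounts to a simple column rescaling of "april". A sketch, with file names assumed and column positions following the 21-column layout (targets 7, 11, 14, and 15 sit at file columns 12, 16, 19, and 20):
% Sketch of building 'aprilscaled' from 'april' (assumed file names):
data = load('april.txt');                    % 21 columns as described earlier
data(:, [12 16]) = data(:, [12 16]) * 1e-4;  % UAEVAP, UACOND scaled down
data(:, [19 20]) = data(:, [19 20]) * 1e2;   % NISEN, NMOTOR scaled up
save aprilscaled.txt data -ascii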
Returning our attention to Table 4, the "%" column denotes what percentage of the input data set was used for training, with the remainder used for testing. The "alpha" through "gama" columns are the training parameters for the train command. The "stop" column relates whether the training ceased due to the maximum epoch number being reached (e) or the minimum gradient being reached (g). "Epoch #" refers to the last training epoch completed. "Perf" is the final numeric value of the performance quantity used, in this case the mean square error.
Table 4: Training and Testing Runs
Run / Scale / H / N / Outputs / % / alpha / beta / delta / gama / stop / epoch # / perf
0 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / - / - / -
1 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / e / 100 / 265.42
2 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 209 / 254.706
3 / N / 10 / 5 / 1-5 / 75 / 0.01 / 0.1 / 0.01 / 0.1 / g / 355 / 261.065
4 / N / 15 / 5 / 1-5 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 238 / 273.255
5 / N / 10 / 4 / 1-4 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 209 / 0.522348
6 / N / 10 / 1 / 5 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 50 / 1302.34
7 / N / 15 / 1 / 5 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 72 / 1258.15
8 / N / 10 / 5 / 1-5 / 80 / 0.001 / 0.1 / 0.01 / 0.1 / e / 250 / oops
9 / N / 10 / 5 / 1-5 / 80 / 0.001 / 0.1 / 0.01 / 0.1 / g / 253 / 268.856
10 / N / 5 / 5 / 1-5 / 80 / 0.001 / 0.1 / 0.01 / 0.1 / e / 100 / 255.681
11 / N / 7 / 5 / 1-5 / 80 / 0.001 / 0.1 / 0.01 / 0.1 / g / 146 / 259.373
12 / N / 10 / 1 / 1 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 52 / 0.344589
13 / N / 5 / 1 / 1 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 57 / 0.343731
14 / N / 8 / 1 / 1 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 44 / 0.335705
15 / N / 8 / 1 / 2 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 45 / 0.504918
16 / N / 8 / 1 / 3 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 53 / 0.443702
17 / N / 8 / 1 / 4 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 56 / 0.966219
18 / N / 8 / 1 / 5 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 49 / 1215.88
19 / N / 8 / 5 / 1-5 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 292 / 263.863
20 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 404 / 252.503
21 / N / 12 / 5 / 1-5 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 232 / 266.937
22 / N / 10 / 5 / 1-5 / 75 / 0.0001 / 0.1 / 0.01 / 0.1 / g / 255 / 261.065
23 / N / 10 / 5 / 1-5 / 75 / 0.01 / 0.1 / 0.01 / 0.1 / g / 198 / 273.255
24 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.01 / 0.1 / g / 253 / 243.911
25 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.001 / 0.01 / 0.1 / g / 260 / 263.899
26 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.1 / g / 250 / 238.25
27 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.5 / 0.1 / g / 264 / 263.617
28 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.01 / g / 298 / 249.129
29 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.5 / g / 276 / 243.033
30 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.1 / g / 236 / 249.477
31 / N / 5 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.1 / e / 500 / 247.621
32 / N / 6 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.1 / g / 375 / 250.692
33 / N / 7 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.1 / g / 248 / 271.954
34 / N / 8 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.1 / g / 219 / 239.848
35 / N / 9 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.1 / g / 345 / 268.648
36 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.1 / g / 270 / 266.399
37 / N / 11 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.1 / g / 314 / 245.029
38 / N / 12 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.1 / g / 290 / 257.407
39 / N / 13 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.1 / 0.1 / g / 298 / 248.97
40 / N / 14 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.01 / 0.1 / g / 241 / 254.294
41 / N / 15 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.01 / 0.1 / g / 298 / 249.56
42 / N / 20 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.01 / 0.1 / g / 284 / 255.269
43 / N / 10 / 5 / 1-5 / 75 / 0.001 / 0.01 / 0.01 / 0.1 / g / 238 / 263.863
44 / N / 5 / 1 / 1 / 75 / 0.001 / 0.01 / 0.01 / 0.1 / g / 40 / 0.353381
45 / N / 6 / 1 / 1 / 75 / 0.001 / 0.01 / 0.01 / 0.1 / g / 37 / 0.360037
46 / N / 7 / 1 / 1 / 75 / 0.001 / 0.01 / 0.01 / 0.1 / g / 42 / 0.359606
47 / N / 8 / 1 / 1 / 75 / 0.001 / 0.01 / 0.01 / 0.1 / g / 53 / 0.371072
48 / N / 9 / 1 / 1 / 75 / 0.001 / 0.01 / 0.01 / 0.1 / g / 49 / 0.355618
49 / N / 10 / 1 / 1 / 75 / 0.001 / 0.01 / 0.01 / 0.1 / g / 66 / 0.363597
50 / N / 8 / 1 / 1 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 47 / 0.343251
51 / N / 8 / 1 / 1 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 62 / 0.321294
52 / N / 8 / 1 / 2 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 43 / 0.468783
53 / N / 8 / 1 / 3 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 42 / 0.458375
54 / N / 8 / 1 / 4 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 89 / 1.03235
55 / N / 8 / 1 / 5 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 52 / 1254.24
56 / N / 11 / 11 / 6-16 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / e / 750 / oops
57 / N / 8 / 1 / 6 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 50 / 37.0455
58 / N / 8 / 1 / 7 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / e / 250 / 9.23E+10
59 / N / 8 / 1 / 8 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 40 / 0.450475
60 / N / 8 / 1 / 9 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 39 / 3.07E-24
61 / N / 8 / 1 / 10 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 60 / 78.8327
62 / N / 8 / 1 / 11 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / e / 250 / 6.29E+11
63 / N / 8 / 1 / 12 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 46 / 0.462533
64 / N / 8 / 1 / 13 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 43 / 0.00092
65 / N / 8 / 1 / 14 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 38 / 0.000128
66 / N / 8 / 1 / 15 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 43 / 0.000987
67 / N / 8 / 1 / 16 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 40 / 0.04822
68 / Y / 20 / 11 / 6-16 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / e / 500 / 1349.68
69 / Y / 10 / 1 / 7 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 58 / 947.544
70 / Y / 15 / 1 / 11 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 76 / 12634.8
71 / Y / 10 / 1 / 14 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 67 / 1.35496
72 / Y / 10 / 1 / 15 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 57 / 9.05744
73 / Y / 10 / 1 / 16 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / 51 / 0.045684
74 / Y / 30 / 11 / 6-16 / 75 / 0.001 / 0.1 / 0.01 / 0.1 / g / ~600 / ~425
In the above runs, the network was usually given enough epochs to reach convergence. The number of hidden nodes and the training parameters did not significantly affect the performance of the network, as indicated by the mean square error values. The variation seems to be due simply to the variation in the training points drawn from the April data set.