Interval type-2 fuzzy weight adjustment for backpropagation neural networks with application in time series prediction
Introduction
In this paper we propose an adaptation of the weights in the backpropagation algorithm for neural networks using type-2 and type-1 fuzzy inference systems. The proposed approach differs from others in the literature, where the adaptation acts on the momentum and the learning rate [3], [14], [27], [33], or where the weights are represented as triangular or trapezoidal fuzzy numbers [19], [20]; the main difference with respect to these methods is that the proposed method works with type-2 fuzzy weights.
The proposed approach is applied to time series prediction for the Mackey-Glass series. In this case, the objective is to obtain the minimum prediction error on the time series data.
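For context, the Mackey-Glass series can be generated by numerically integrating its delay differential equation. The sketch below uses a coarse Euler step and the commonly used parameter values (τ = 17, β = 0.2, γ = 0.1, n = 10); these parameters and the initial condition are assumptions here, since the text does not list the authors' exact settings:

```python
def mackey_glass(n_samples, tau=17, beta=0.2, gamma=0.1, n=10, x0=1.2):
    # coarse Euler integration (dt = 1) of
    #   dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)
    history = [x0] * tau          # holds the delayed values x(t - tau)
    x = x0
    series = []
    for _ in range(n_samples):
        x_tau = history.pop(0)
        x = x + beta * x_tau / (1.0 + x_tau ** n) - gamma * x
        history.append(x)
        series.append(x)
    return series

series = mackey_glass(500)
```

With τ = 17 the series is chaotic, which is what makes it a standard benchmark for time series prediction.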
This paper focuses on comparing the traditional monolithic neural network with neural networks that use type-1 and type-2 fuzzy weights. The same architecture and learning algorithm are used for the three neural models. Noise is also added to the real test data to analyze the performance of the models.
This work compares the performance of the neural network with type-1 fuzzy weights and with type-2 fuzzy weights against the traditional approach of using real numbers for the weights. This comparison is important because the weights affect the performance of the learning process of the neural network, and a better weight representation can therefore yield better results.
This is supported by previous works on neural networks of this type, which have shown that training a neural network on the same problem with different weight initializations, or with the weights adjusted in different ways, can still reach similar results in the end.
The contribution of the paper is the proposed method for type-2 fuzzy weight adjustment in backpropagation learning of neural networks for providing them the ability to manage uncertainty in real data. The main idea of the proposed method is that enhancing the backpropagation method with type-2 fuzzy logic enables better management of uncertainty in the learning process and with this improved results can be achieved.
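As a rough illustration of this idea, a weight update can be produced by an interval type-2 fuzzy inference over the backpropagation error. The sketch below is not the authors' exact scheme: the rule base, the triangular membership functions, the FOU width, and the midpoint type reduction are all assumptions made for illustration:

```python
def tri(x, a, b, c):
    # triangular type-1 membership function over [a, c] with peak at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_tri(x, a, b, c, fou=0.2):
    # interval type-2 triangular MF: the footprint of uncertainty (FOU)
    # blurs the type-1 grade into a [lower, upper] interval
    mu = tri(x, a, b, c)
    return max(0.0, mu - fou), min(1.0, mu + fou)

def update_weight(w, error, step=0.1):
    # three rules on the current error -> increase / keep / decrease the weight
    rules = [(it2_tri(error, -2.0, -1.0, 0.0), w + step),   # negative error
             (it2_tri(error, -1.0,  0.0, 1.0), w),          # near-zero error
             (it2_tri(error,  0.0,  1.0, 2.0), w - step)]   # positive error

    # simplified type reduction: average the lower- and upper-bound
    # weighted means of the rule outputs
    def wmean(idx):
        num = sum(f[idx] * out for f, out in rules)
        den = sum(f[idx] for f, out in rules)
        return num / den if den else w

    return 0.5 * (wmean(0) + wmean(1))
```

A zero error leaves the weight unchanged, while a positive error pulls it down, mimicking a gradient-style correction driven by fuzzy rules rather than a fixed learning rate.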
Simulation results show that neural networks with the proposed type-2 fuzzy approach can outperform their type-1 and type-0 (crisp-weight) counterparts.
The proposed method also differs from the adaptive neuro-fuzzy inference system (ANFIS): ANFIS uses a neural network architecture to obtain the parameters of a fuzzy system and performs its operations through the calculations of that fuzzy system, whereas the proposed method uses type-2 fuzzy systems to update the weights of the neural network during the training (learning) process.
The next section presents background on different weight-management strategies and modifications of the backpropagation algorithm in neural networks. Section 3 explains the proposed method and the problem description. Section 4 describes the monolithic neural network architecture and the neural network architecture with type-1 fuzzy weights. Section 5 describes the neural network with type-2 fuzzy weights proposed in this paper. Section 6 presents the simulation results for the proposed methods. Finally, in Section 7, conclusions are presented.
Overview of related works
The most widely used basic training method in the area of neural networks is the backpropagation algorithm and its variations [9], [16], [24], [32]. When these methods are applied to practical problems, the training time of the basic backpropagation algorithm can be very high [11], [35]. In the literature, several methods have been proposed to accelerate the convergence of the backpropagation algorithm [3], [14], [27], [33]. There exist many works on adjusting or managing weights, but only
Proposed method and problem description
The proposed approach in this paper has the goal of generalizing the backpropagation algorithm using type-1 fuzzy sets and type-2 fuzzy sets to allow the neural network to handle data with uncertainty. For the type-2 fuzzy sets, it is necessary to vary the footprint of uncertainty (FOU) of the membership functions, either automatically with an optimization method or manually for the corresponding application [31], [37], [40], [41].
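For instance, an interval type-2 Gaussian membership function can expose the FOU as a single tunable parameter that an optimization method (or a designer) could vary. The parameterization below, with an uncertain standard deviation scaled by a `fou` factor, is an assumption for illustration:

```python
import math

def it2_gaussian(x, mean, sigma, fou=0.1):
    # interval type-2 Gaussian MF with an uncertain standard deviation:
    # fou in [0, 1) widens/narrows sigma to get the upper/lower grades
    upper = math.exp(-0.5 * ((x - mean) / (sigma * (1.0 + fou))) ** 2)
    lower = math.exp(-0.5 * ((x - mean) / (sigma * (1.0 - fou))) ** 2)
    return lower, upper
```

Setting `fou=0` collapses the interval to an ordinary type-1 Gaussian, so a search over `fou` directly controls how much uncertainty the membership function represents.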
The process of obtaining the weights in the
The monolithic neural network architecture and neural network architecture with type-1 fuzzy weights
The proposed monolithic neural network architecture (see Fig. 4) is described as follows: we used a monolithic neural network with the Mackey-Glass time series test data as inputs to the input layer, 30 neurons in the hidden layer, and 1 neuron in the output layer.
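The monolithic (crisp-weight) baseline can be sketched as a plain MLP trained by backpropagation. Only the 30 hidden neurons and 1 output neuron come from the text; the 3-sample input window, tanh activation, learning rate, and initialization range below are assumptions, and the toy target (the mean of the input window) merely stands in for the time series data:

```python
import math
import random

random.seed(0)
N_IN, N_HID = 3, 30   # 30 hidden neurons from the text; N_IN is assumed

w1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
b1 = [0.0] * N_HID
w2 = [random.uniform(-0.5, 0.5) for _ in range(N_HID)]
b2 = 0.0

def forward(x):
    # tanh hidden layer, linear output neuron
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return h, sum(w * hi for w, hi in zip(w2, h)) + b2

def backprop_step(x, target, lr=0.05):
    global b2
    h, y = forward(x)
    err = y - target                       # dE/dy for E = (y - t)**2 / 2
    # hidden deltas must use the pre-update output weights
    deltas = [err * w2[j] * (1.0 - h[j] ** 2) for j in range(N_HID)]
    for j in range(N_HID):                 # output-layer update
        w2[j] -= lr * err * h[j]
    b2 -= lr * err
    for j in range(N_HID):                 # hidden-layer update
        for i in range(N_IN):
            w1[j][i] -= lr * deltas[j] * x[i]
        b1[j] -= lr * deltas[j]
    return 0.5 * err ** 2

# toy run: learn to predict the mean of the input window
samples = []
for _ in range(200):
    x = [random.uniform(-1.0, 1.0) for _ in range(N_IN)]
    samples.append((x, sum(x) / N_IN))

first = sum(backprop_step(x, t) for x, t in samples)
for _ in range(20):
    last = sum(backprop_step(x, t) for x, t in samples)
```

The fuzzy-weight variants in the following sections keep this same architecture and replace only how the weights are represented and updated.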
The proposed neural network architecture with type-1 fuzzy weights (see Fig. 5) is described as follows:
Layer 0: inputs.
Layer 1: type-1 fuzzy weights for the hidden layer.
Layer 2: hidden neuron with
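One way to realize a type-1 fuzzy weight in the forward pass is to keep each weight as a triangular fuzzy number and defuzzify it to a crisp value before the usual weighted sum. This is a sketch only; the triangular representation and centroid defuzzification are assumptions, since the snippet truncates before the details:

```python
import math

def centroid_tri(a, b, c):
    # centroid defuzzification of a triangular fuzzy number (a <= b <= c)
    return (a + b + c) / 3.0

def type1_fuzzy_neuron(x, fuzzy_weights, bias=0.0):
    # each weight is a triangle (a, b, c); defuzzify it to a crisp value,
    # then apply the usual weighted sum and activation
    crisp = [centroid_tri(a, b, c) for a, b, c in fuzzy_weights]
    return math.tanh(sum(w * xi for w, xi in zip(crisp, x)) + bias)
```

A degenerate triangle (a = b = c) recovers the crisp-weight neuron, so the monolithic network is a special case of this representation.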
The neural network architecture with type-2 fuzzy weights
The proposed neural network architecture with type-2 fuzzy weights (see Fig. 12) is described as follows:
Layer 0: inputs.
Layer 1: interval type-2 fuzzy weights for the hidden layer, computed with the switch points L and R of the type reduction [7].
Layer 2: hidden neurons with interval type-2 fuzzy weights.
Layer 3: output neurons with interval type-2 fuzzy weights.
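A minimal sketch of a neuron with interval type-2 fuzzy weights is shown below. Each weight is an interval [w_lower, w_upper]; the net input becomes an interval, which is then reduced to a single value. The midpoint reduction used here is a simplified stand-in for the full switch-point (Karnik-Mendel) procedure, and the interval arithmetic and activation are assumptions for illustration:

```python
import math

def it2_neuron(x, w_lower, w_upper, bias=0.0):
    # interval net input: pick the bound that minimizes/maximizes each term
    lo = sum((wl if xi >= 0 else wu) * xi
             for xi, wl, wu in zip(x, w_lower, w_upper)) + bias
    up = sum((wu if xi >= 0 else wl) * xi
             for xi, wl, wu in zip(x, w_lower, w_upper)) + bias
    # simplified type reduction: midpoint of [lo, up] (the full method
    # searches for the switch points L and R instead)
    return math.tanh(0.5 * (lo + up))
```

When every interval is symmetric about a crisp weight, the midpoint reduction reproduces the crisp neuron's output, so the interval width acts purely as an uncertainty budget.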
Simulation results
The obtained results for the experiments with the monolithic neural network are shown in Table 1 and Fig. 19; all parameters of the neural network were established empirically. The best prediction error is 0.055, and the average error is 0.077.
We present 10 experiments in Table 1, but the average error was calculated over 40 experiments with the same parameters and conditions.
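Error figures like these come from a per-run metric averaged over repetitions. The mean absolute error below is an assumption, since the text does not name the exact metric:

```python
def prediction_error(predicted, actual):
    # mean absolute prediction error for one experiment (assumed metric)
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def average_error(runs):
    # runs: list of (predicted, actual) pairs, e.g. 40 repetitions
    return sum(prediction_error(p, a) for p, a in runs) / len(runs)
```

Averaging over 40 runs, as in the tables, smooths out the variance introduced by the random weight initialization of each experiment.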
The obtained results for the experiments with the neural network with type-1 fuzzy weights are
Conclusions
In the experiments, we observe that the neural network with type-2 fuzzy weights obtains better results than the monolithic neural network and the neural network with type-1 fuzzy weights for the Mackey-Glass time series. This conclusion is based on the best prediction errors of 0.039, 0.053 and 0.055, respectively, and on the average errors (over 40 experiments) of 0.061, 0.077 and 0.094, respectively.
The neural network with type-2 fuzzy weights shows better behavior at different levels of uncertainty than the
References (41)
- et al., Locally recurrent neural networks for wind speed prediction using spatial correlation, Inf. Sci. (2007)
- et al., On the robustness of type-1 and interval type-2 fuzzy logic systems in modeling, Inf. Sci. (2011)
- et al., A classifier neural net with complex-valued weights and square-law nonlinearities, Neural Networks (1995)
- et al., A review on the design and optimization of interval type-2 fuzzy controllers, Appl. Soft Comput. (2012)
- et al., A hybrid learning algorithm for a class of interval type-2 fuzzy neural networks, Inf. Sci. (2009)
- et al., Fuzzy decision making systems based on interval type-2 fuzzy sets, Inf. Sci. (2013)
- On the capabilities of neural networks using limited precision weights, Neural Networks (2002)
- Additive neural networks and periodic patterns, Neural Networks (1999)
- et al., An optimization method for designing type-2 fuzzy inference systems based on the footprint of uncertainty using genetic algorithms, Expert Syst. Appl. (2012)
- et al., Accelerating neural network training using weight extrapolations, Neural Networks (1999)