Information Sciences

Volume 260, 1 March 2014, Pages 1-14

Interval type-2 fuzzy weight adjustment for backpropagation neural networks with application in time series prediction

https://doi.org/10.1016/j.ins.2013.11.006

Abstract

In this paper a new backpropagation learning method enhanced with type-2 fuzzy logic is presented. Simulation results and a comparative study among monolithic neural networks, neural networks with type-1 fuzzy weights, and neural networks with type-2 fuzzy weights are presented to illustrate the advantages of the proposed method. In this work, type-2 fuzzy inference systems are used to obtain the type-2 fuzzy weights by applying different sizes of the footprint of uncertainty (FOU). The proposed approach is based on recent methods that handle the adaptation of weights using type-1 and type-2 fuzzy logic. The proposed approach is applied to a case of prediction for the Mackey-Glass time series (for τ = 17). Noise was applied at different levels to the test data of the Mackey-Glass time series to show that the type-2 fuzzy backpropagation approach achieves better behavior and greater tolerance to noise than the other methods.

Introduction

In this paper we propose an adaptation of the weights in the backpropagation algorithm for neural networks using type-2 and type-1 fuzzy inference systems. The proposed approach differs from others in the literature, where the adaptation acts on the momentum and adaptive learning rate [3], [14], [27], [33], or uses triangular or trapezoidal fuzzy numbers for the weights [19], [20]; the main difference with respect to those methods is that the proposed method works with type-2 fuzzy weights.

The proposed approach is applied to time series prediction for the Mackey-Glass series. In this case, the objective is to minimize the prediction error for the time series data.
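To make the benchmark concrete, the following is a minimal sketch of how the Mackey-Glass series can be generated. The paper fixes only τ = 17; the standard parameter values β = 0.2, γ = 0.1, n = 10 and the simple Euler integration step are assumptions here, not taken from the paper.

```python
import numpy as np

def mackey_glass(n_points=1000, tau=17, beta=0.2, gamma=0.1, n=10,
                 x0=1.2, dt=1.0):
    # dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t),
    # integrated with a plain Euler step (an assumption, not from the paper).
    delay = int(tau / dt)
    x = np.full(n_points + delay, x0)
    for t in range(delay, n_points + delay - 1):
        x_tau = x[t - delay]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
    return x[delay:]

series = mackey_glass()  # chaotic regime for tau = 17
```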

This paper is focused on the comparison of the traditional monolithic neural network with the neural networks with type-1 and type-2 fuzzy weights. The same architecture and learning algorithm are used for the three neural models. Noise is also applied to the real test data to analyze the performance of the models.

This work is based on comparing the performance of the neural networks with type-1 and type-2 fuzzy weights against the traditional approach of using real numbers for the weights. This comparison is important because the weights affect the performance of the learning process of the neural network, and therefore the quality of the results obtained.

This is supported by previous work with neural networks of this type, which has shown that networks trained on the same problem, whether initialized with different weights or with the weights adjusted in different ways, can in the end reach similar results.

The contribution of the paper is the proposed method for type-2 fuzzy weight adjustment in backpropagation learning of neural networks, which provides them with the ability to manage uncertainty in real data. The main idea of the proposed method is that enhancing the backpropagation method with type-2 fuzzy logic enables better management of uncertainty in the learning process, and with this, improved results can be achieved.

Simulation results show that neural networks with the proposed type-2 fuzzy approach can outperform their type-1 and type-0 (crisp-weight) counterparts.

The proposed method is different from the adaptive neuro-fuzzy inference system (ANFIS) method, because ANFIS uses the neural network architecture to obtain the parameters of its fuzzy systems and performs its operations through the calculations of those fuzzy systems; in contrast, the proposed method uses type-2 fuzzy systems to update the weights used by the neural network during the training (learning) process.

The next section presents background on different weight management strategies and modifications of the backpropagation algorithm in neural networks. Section 3 explains the proposed method and the problem description. Section 4 describes the monolithic neural network architecture and the neural network architecture with type-1 fuzzy weights. Section 5 describes the neural network with type-2 fuzzy weights proposed in this paper. Section 6 presents the simulation results for the proposed methods. Finally, in Section 7, conclusions are presented.

Overview of related works

The most widely used basic training method in the area of neural networks is the backpropagation algorithm and its variations [9], [16], [24], [32]. When these methods are applied to practical problems, the training time of the basic backpropagation algorithm can be very long [11], [35]. Several methods have been proposed in the literature to accelerate the convergence of the backpropagation algorithm [3], [14], [27], [33]. There exist many works on the adjustment or management of weights, but only

Proposed method and problem description

The proposed approach in this paper has the goal of generalizing the backpropagation algorithm using type-1 and type-2 fuzzy sets to allow the neural network to handle data with uncertainty. For the type-2 fuzzy sets, it is necessary to vary the footprint of uncertainty (FOU) of the membership functions, either automatically by means of an optimization method or manually for the corresponding applications [31], [37], [40], [41].
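The snippet does not show the paper's membership functions, so the following is only a minimal sketch of one common way to expose the FOU as a tunable parameter: an interval type-2 Gaussian membership function with uncertain standard deviation, where `fou` widens or narrows the gap between the upper and lower membership functions. The construction and names are ours, not the paper's.

```python
import numpy as np

def it2_gaussian(x, mean=0.0, sigma=1.0, fou=0.2):
    # Interval type-2 Gaussian MF with uncertain standard deviation.
    # A larger `fou` gives a wider footprint of uncertainty (hypothetical
    # construction; the paper's actual MFs are not shown in this snippet).
    upper = np.exp(-0.5 * ((x - mean) / (sigma + fou)) ** 2)
    lower = np.exp(-0.5 * ((x - mean) / max(sigma - fou, 1e-6)) ** 2)
    return lower, upper

lo, hi = it2_gaussian(np.linspace(-3.0, 3.0, 7))
```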

The process of obtaining the weights in the

The monolithic neural network architecture and neural network architecture with type-1 fuzzy weights

The proposed monolithic neural network architecture (see Fig. 4) is described as follows: we used a monolithic neural network with the test data of the Mackey-Glass time series as the input layer, 30 neurons in the hidden layer, and 1 neuron in the output layer.
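As a baseline reference, the following is a minimal numpy sketch of this monolithic (crisp-weight) network with one plain backpropagation step. Only the 30 hidden neurons and the single output neuron come from the text; the number of lagged inputs, the activation functions, and the learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 30, 1   # 4 lagged inputs is an assumption
W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

def forward(x):
    h = np.tanh(x @ W1)            # hidden layer (tanh assumed)
    return h @ W2, h               # linear output neuron (assumed)

def backprop_step(x, y, lr=0.01):
    # One gradient-descent step on squared error: the crisp-weight baseline
    # that the type-1 and type-2 fuzzy-weight networks are compared against.
    global W1, W2
    y_hat, h = forward(x)
    err = y_hat - y
    grad_W2 = np.outer(h, err)
    grad_W1 = np.outer(x, (W2 @ err) * (1.0 - h ** 2))
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    return float(0.5 * err @ err)
```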

The proposed neural network architecture with type-1 fuzzy weights (see Fig. 5) is described as follows:

Layer 0: inputs: $x = [x_1, x_2, \ldots, x_n]$.

Layer 1: type-1 fuzzy weights for the hidden layer: $w = \dfrac{\sum_{i=1}^{n} f_i\, w_i}{\sum_{i=1}^{n} f_i}$ (a code sketch of this weighted average is given after the layer list).

Layer 2: hidden neuron with
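As flagged in Layer 1 above, here is a minimal sketch of that weighted average. The function and variable names are ours; in the paper the firing strengths $f_i$ would come from the type-1 fuzzy inference system.

```python
import numpy as np

def type1_fuzzy_weight(firing, consequents):
    # Layer 1 weighted average: w = sum(f_i * w_i) / sum(f_i).
    # `firing` are the rule firing strengths f_i (from the type-1 FIS),
    # `consequents` are the candidate weight values w_i.
    firing = np.asarray(firing, dtype=float)
    consequents = np.asarray(consequents, dtype=float)
    return float(np.dot(firing, consequents) / firing.sum())

# Hypothetical usage: two rules proposing weight values 0.4 and 0.7.
w = type1_fuzzy_weight([0.8, 0.3], [0.4, 0.7])
```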

The neural network architecture with type-2 fuzzy weights

The proposed neural network architecture with type-2 fuzzy weights (see Fig. 12) is described as follows:

Layer 0: inputs: $x = [x_1, x_2, \ldots, x_n]$.

Layer 1: interval type-2 fuzzy weights for the hidden layer [7]: $\tilde{w} = [\underline{w}, \overline{w}]$, where

$\underline{w} = \dfrac{\sum_{k=1}^{L} \overline{f}^{\,k} w_l^k + \sum_{k=L+1}^{M} \underline{f}^{\,k} w_l^k}{\sum_{k=1}^{L} \overline{f}^{\,k} + \sum_{k=L+1}^{M} \underline{f}^{\,k}}, \qquad \overline{w} = \dfrac{\sum_{k=1}^{R} \underline{f}^{\,k} w_r^k + \sum_{k=R+1}^{M} \overline{f}^{\,k} w_r^k}{\sum_{k=1}^{R} \underline{f}^{\,k} + \sum_{k=R+1}^{M} \overline{f}^{\,k}},$

and where L and R are the switch points [7] (these are the standard Karnik-Mendel type-reduction endpoints; a code sketch of this computation is given after the layer list).

Layer 2: hidden neurons with interval type-2 fuzzy weights: $Net = \sum_{i=1}^{n} x_i \tilde{w}_i$.

Layer 3: output neurons with interval type-2 fuzzy weights: $Out = \sum_{i=1}^{n} y$
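As flagged in Layer 1 above, the interval $[\underline{w}, \overline{w}]$ with switch points L and R is computed by the Karnik-Mendel iterative procedure. The following is a minimal sketch of that procedure; the function names, the initialization, and the stopping test are ours, not the paper's.

```python
import numpy as np

def km_interval_weight(f_lower, f_upper, w_left, w_right):
    # Karnik-Mendel type reduction: returns the interval type-2 fuzzy
    # weight [w_lo, w_hi] of Layer 1. f_lower/f_upper are the lower and
    # upper firing strengths; w_left/w_right are the rule consequents.
    f_lower, f_upper = np.asarray(f_lower, float), np.asarray(f_upper, float)
    w_lo = _km_endpoint(np.asarray(w_left, float), f_lower, f_upper, left=True)
    w_hi = _km_endpoint(np.asarray(w_right, float), f_lower, f_upper, left=False)
    return w_lo, w_hi

def _km_endpoint(w, f_lower, f_upper, left):
    order = np.argsort(w)                 # KM requires sorted consequents
    w, f_lower, f_upper = w[order], f_lower[order], f_upper[order]
    f = (f_lower + f_upper) / 2.0         # standard initialization
    y = np.dot(f, w) / f.sum()
    while True:
        k = np.searchsorted(w, y)         # candidate switch point (L or R)
        below = np.arange(len(w)) < k
        # Left endpoint: upper firing strengths before the switch point;
        # right endpoint: the opposite assignment.
        if left:
            f = np.where(below, f_upper, f_lower)
        else:
            f = np.where(below, f_lower, f_upper)
        y_new = np.dot(f, w) / f.sum()
        if np.isclose(y_new, y):
            return float(y_new)
        y = y_new
```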

Simulation results

The results obtained in the experiments with the monolithic neural network are shown in Table 1 and Fig. 19; all parameters of the neural network were established empirically. The best prediction error is 0.055, and the average error is 0.077.

Table 1 presents 10 experiments, but the average error was calculated over 40 experiments with the same parameters and conditions.

The obtained results for the experiments with the neural network with type-1 fuzzy weights are

Conclusions

In the experiments, we observe that the neural network with type-2 fuzzy weights obtains better results than the monolithic neural network and the neural network with type-1 fuzzy weights for the Mackey-Glass time series. This conclusion is based on the prediction errors of 0.039, 0.053 and 0.055, respectively, and the average errors (over 40 experiments) of 0.061, 0.077 and 0.094, respectively.

The neural network with type-2 fuzzy weights shows better behavior at different levels of uncertainty than the

References (41)
