The influence of the sigmoid function parameters on the speed of backpropagation learning

  • Computational Models of Neurons and Neural Nets
  • Conference paper
From Natural to Artificial Neural Computation (IWANN 1995)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 930)

Abstract

The sigmoid function is the most widely used activation function in feedforward neural networks because of its nonlinearity and the computational simplicity of its derivative. In this paper we discuss a variant sigmoid function with three parameters that denote the dynamic range, the symmetry, and the slope of the function, respectively. We illustrate how these parameters influence the speed of backpropagation learning and introduce a hybrid sigmoidal network with different parameter configurations in different layers. By regulating and modifying the sigmoid parameter configuration of each layer, the error-signal problem, the oscillation problem, and the asymmetrical-input problem can be reduced. To compare the learning capability and learning rate of hybrid sigmoidal networks with those of conventional networks, we test both on the two-spirals benchmark, which is known to be a very difficult task for backpropagation and its relatives.
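The abstract does not spell out the variant function itself. A common three-parameter generalization of the logistic sigmoid that matches the description is f(x) = λ/(1 + e^(−σx)) − μ, where λ sets the dynamic range, μ shifts the symmetry of the output interval, and σ controls the slope at the origin; note that this exact parameterization, and the names lam, mu, and sigma below, are assumptions for illustration, not taken verbatim from the paper. A minimal sketch under that assumption:

```python
import numpy as np

def parametric_sigmoid(x, lam=2.0, mu=1.0, sigma=1.0):
    """Three-parameter sigmoid (assumed form, not quoted from the paper).

    lam   -- dynamic range: the output spans an interval of width lam
    mu    -- symmetry: mu = lam/2 centres the output interval on zero
    sigma -- slope: steepness of the transition at x = 0
    """
    return lam / (1.0 + np.exp(-sigma * x)) - mu

def parametric_sigmoid_deriv(x, lam=2.0, mu=1.0, sigma=1.0):
    """Derivative written in terms of the function value, which is what
    keeps the backpropagation weight update computationally cheap."""
    y = parametric_sigmoid(x, lam, mu, sigma) + mu  # logistic part, in (0, lam)
    return (sigma / lam) * y * (lam - y)

# lam=2, mu=1, sigma=1 gives the zero-symmetric sigmoid tanh(x/2) on (-1, 1);
# lam=1, mu=0, sigma=1 recovers the standard logistic function on (0, 1).
```

A hybrid sigmoidal network in the sense of the abstract would then assign a different (lam, mu, sigma) triple to each layer, for instance a zero-symmetric configuration in the hidden layers and the plain logistic at the output.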

Editor information

José Mira, Francisco Sandoval

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Han, J., Moraga, C. (1995). The influence of the sigmoid function parameters on the speed of backpropagation learning. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_175

  • DOI: https://doi.org/10.1007/3-540-59497-3_175

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59497-0

  • Online ISBN: 978-3-540-49288-7

  • eBook Packages: Springer Book Archive
