
An updated set of basic linear algebra subprograms (BLAS)

Published: 01 June 2002

References

  1. HP BLAS. http://www.compaq.com/math/documentation/cxml/dxml.3dxml.html.
  2. IBM BLAS. http://www-1.ibm.com/servers/eserver/pseries/library/sp_books/essl.html.
  3. Intel BLAS. http://developer.intel.com/software/products/mkl/mkl52/index.htm.
  4. SGI BLAS. http://www.sgi.com/software/scsl.html.
  5. SUN BLAS. http://docs.sun.com/htmlcoll/coll.118.3/iso-8859-1/PERFLIBUG/plug_bookTOC.html.
  6. Anderson, E., Bai, Z., Bischof, C., Blackford, S., Demmel, J., Dongarra, J., Du Croz, J., Greenbaum, A., Hammarling, S., McKenney, A., and Sorensen, D. 1999. LAPACK Users' Guide, third edition. SIAM, Philadelphia, PA. (Also available in Japanese, published by Maruzen, Tokyo, translated by Dr. Oguni.)
  7. ANSI/IEEE Std 754--1985. IEEE Standard for Binary Floating Point Arithmetic.
  8. Blackford, S., Corliss, G., Demmel, J., Dongarra, J., Duff, I., Hammarling, S., Henry, G., Heroux, M., Hu, C., Kahan, W., Kaufmann, L., Kearfott, B., Krogh, F., Li, X., Maany, Z., Petitet, A., Pozo, R., Remington, K., Walster, W., Whaley, C., Wolff von Gudenberg, J., and Lumsdaine, A. 2002. Basic Linear Algebra Subprograms Technical (BLAST) Forum Standard. Int. J. High Perform. Comput. 16, 1--2. (Also available at www.netlib.org/blas/blast-forum.)
  9. Demmel, J. 1997. Applied Numerical Linear Algebra. SIAM, Philadelphia, PA.
  10. Dodson, D. S. 1983. Corrigendum: Remark on "Algorithm 539: Basic linear algebra subroutines for FORTRAN usage". ACM Trans. Math. Softw. 9, 140. (See also Lawson et al. {1979} and Dodson and Grimes {1982}.)
  11. Dodson, D. S. and Grimes, R. G. 1982. Remark on algorithm 539: Basic linear algebra subprograms for Fortran usage. ACM Trans. Math. Softw. 8, 403--404. (See also Lawson et al. {1979} and Dodson {1983}.)
  12. Dodson, D. S., Grimes, R. G., and Lewis, J. G. 1991. Sparse extensions to the FORTRAN basic linear algebra subprograms. ACM Trans. Math. Softw. 17, 253--272. (Algorithm 692.)
  13. Dongarra, J. J., Bunch, J. R., Moler, C. B., and Stewart, G. W. 1979. LINPACK Users' Guide. SIAM, Philadelphia, PA.
  14. Dongarra, J. J., Du Croz, J., Duff, I. S., and Hammarling, S. 1990. A set of Level 3 basic linear algebra subprograms. ACM Trans. Math. Softw. 16, 1--28. (Algorithm 679.)
  15. Dongarra, J. J., Du Croz, J., Hammarling, S., and Hanson, R. J. 1988. An extended set of FORTRAN basic linear algebra subprograms. ACM Trans. Math. Softw. 14, 1--32, 399. (Algorithm 656.)
  16. Duff, I. S., Heroux, M. A., and Pozo, R. 2002. An overview of the sparse basic linear algebra subprograms: The new standard from the BLAS Technical Forum. ACM Trans. Math. Softw. 28, 2 (June), 000--000.
  17. Duff, I. S., Marrone, M., Radicati, G., and Vittoli, C. 1997. Level 3 basic linear algebra subprograms for sparse matrices: A user-level interface. ACM Trans. Math. Softw. 23, 379--401.
  18. Golub, G. and van Loan, C. 1996. Matrix Computations, 3rd ed. Johns Hopkins, Baltimore, MD.
  19. Higham, N. J. 1996. Accuracy and Stability of Numerical Algorithms. SIAM, Philadelphia, PA.
  20. Kågström, B., Ling, P., and Van Loan, C. 1998a. GEMM-based level 3 BLAS: High-performance model implementations and performance evaluation benchmark. ACM Trans. Math. Softw. 24, 3, 268--302.
  21. Kågström, B., Ling, P., and Van Loan, C. 1998b. Algorithm 784: GEMM-based level 3 BLAS: Portability and optimization issues. ACM Trans. Math. Softw. 24, 3, 303--316.
  22. Lawson, C. L., Hanson, R. J., Kincaid, D., and Krogh, F. T. 1979. Basic linear algebra subprograms for FORTRAN usage. ACM Trans. Math. Softw. 5, 308--323. (Algorithm 539. See also Dodson and Grimes {1982} and Dodson {1983}.)
  23. Li, X. S., Demmel, J. W., Bailey, D. H., Henry, G., Hida, Y., Iskandar, J., Kahan, W., Kang, S. Y., Kapur, A., Martin, M. C., Thompson, B. J., Tung, T., and Yoo, D. J. 2002. Design, implementation and testing of extended and mixed precision BLAS. ACM Trans. Math. Softw. 28, 2 (June), 000--000.
  24. Robert III, H. M., Evans, W. J., Honemann, D. H., and Balch, T. J. 2000. Robert's Rules of Order, 10th ed. Perseus Book Group.

        Reviews

        Jesse Louis Barlow

        The basic linear algebra subprograms (BLAS) have become an essential part of the development of numerical software in Fortran, and in the other languages that provide versions of them. Their beauty has always been that computer manufacturers are encouraged to implement the BLAS as efficiently as possible. The BLAS began with the simple observation that a number of vector operations (dot product, Euclidean norm, vector scaling and addition) are commonly implemented in numerical software, and so a set of Fortran subroutines for them, with standardized names, was proposed [1]. The linear algebraic equation package (LINPACK) project [2] was built around these routines for the sake of modularity and portability. The vector BLAS are now called the level-1 BLAS. The level-2 BLAS (matrix-vector operations) [3] and level-3 BLAS (matrix-matrix operations) [4] were developed in tandem with the linear algebra package (LAPACK) project [5]. Conventional wisdom is that numerical algorithm design should take advantage of level-3 BLAS operations to the greatest extent possible, since doing so reduces the ratio of (costly) memory references to (cheap) arithmetic operations.

        This paper updates the ongoing BLAS effort and summarizes what types of operations are now available. We now have flavors of BLAS for dense, banded, and sparse vector and matrix operations, and some BLAS are also supported in extended and mixed precision. That effort is carefully chronicled here, by some of the many contributors to the BLAS.
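        The three BLAS levels described above can be sketched with one representative routine each. The following is an illustrative Python transcription (the names daxpy, dgemv, and dgemm follow the standard BLAS naming, but these are plain reference loops, not the optimized Fortran library):

        ```python
        # Reference sketches of one routine from each BLAS level. Real BLAS
        # implementations are heavily tuned for the memory hierarchy; these
        # loops only illustrate the semantics and the flop counts.

        def daxpy(alpha, x, y):
            """Level 1: y <- alpha*x + y. O(n) flops on O(n) data."""
            return [alpha * xi + yi for xi, yi in zip(x, y)]

        def dgemv(alpha, A, x, beta, y):
            """Level 2: y <- alpha*A*x + beta*y. O(n^2) flops on O(n^2) data."""
            return [alpha * sum(a * xj for a, xj in zip(row, x)) + beta * yi
                    for row, yi in zip(A, y)]

        def dgemm(alpha, A, B, beta, C):
            """Level 3: C <- alpha*A*B + beta*C. O(n^3) flops on only O(n^2)
            data -- hence the favorable arithmetic-to-memory-reference ratio
            that makes level-3 operations the preferred building blocks."""
            n, m, p = len(A), len(B), len(B[0])
            return [[alpha * sum(A[i][k] * B[k][j] for k in range(m)) + beta * C[i][j]
                     for j in range(p)]
                    for i in range(n)]

        if __name__ == "__main__":
            print(daxpy(2.0, [1.0, 2.0], [3.0, 4.0]))  # [5.0, 8.0]
            print(dgemv(1.0, [[1.0, 0.0], [0.0, 2.0]], [3.0, 4.0], 0.0, [0.0, 0.0]))  # [3.0, 8.0]
        ```

        For an n-by-n dgemm, the 2n^3 arithmetic operations touch only 3n^2 matrix entries, so the flops-per-memory-reference ratio grows like n; this is why blocked, level-3-rich algorithms dominate modern dense linear algebra.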


        Published in

          ACM Transactions on Mathematical Software, Volume 28, Issue 2 (June 2002), 151 pages
          ISSN: 0098-3500
          EISSN: 1557-7295
          DOI: 10.1145/567806

          Copyright © 2002 ACM

          Publisher: Association for Computing Machinery, New York, NY, United States
