There Can Be no Other Reason for this Behavior: Issues in the Ascription of Knowledge to Humans and AI

  • Regular Article
  • Published in Integrative Psychological and Behavioral Science

Abstract

While machine learning techniques have been used to model categorization/decision-making tasks that are beyond the capabilities of traditional AI, these new models are typically uninterpretable, i.e., the reasons for their decisions are not clear. Some have argued that, in developing machines that can report the reasons for their decisions, developers should take, as a guide, human explanations for behavior, which make reference to mental states (e.g., knowledge/belief). This proposal is correct, but unattainable given certain characteristics of current AI. To explain, this article draws on recent discourse-analytic research showing that ascriptions of knowledge/belief presume behavioral performances to instantiate particular sorts of broader dispositions. This is reflected by the possibility of ascribing knowledge/belief to an agent on the basis that there can be no other explanation for their observed behavior. The behavior of AI trained through machine learning is unpredictable in ways that preclude such certainty. Consequently, while it is certainly possible to program machines to report mental states of knowledge/belief to account for their decisions, the failure of current AI to engage in typically human forms of life means that such ascribed mental states are inevitably meaningless.

Fig. 1
Fig. 2

Notes

  1. The general issues here are well summarized by Dreyfus (1979) and Varela et al. (1991).

  2. For the purposes of the current analysis, the differences between knowing that X and believing that X are not relevant.

  3. De Graaf and Malle (2017) claim that people already ascribe mental states (esp. knowledge/beliefs) to artificial agents, and cite several studies purporting to show this. This claim is true, but only in a somewhat trivial/nominal sense. For one thing, most of the studies they cited involved ascriptions made about simulated human agents in computer programs, with the program often prompting the human user to make the ascription (in the other studies, insufficient information was available to determine how ascriptions were made). Such claims might be unfavorably compared to concluding that people like to eat paper because they agree that a picture of a ripe peach looks appetizing.

    Nevertheless, there are certainly cases in which people spontaneously ascribe mental states to machines, at least in a nominal sense. For example, when the overlapping roads of a highway interchange cause a GPS to misidentify its location, we may say that the GPS believes we are on a different road. However, as the analysis below will make clear, this matter deserves more careful scrutiny. Words and phrases in language are used in complex, variable, and polysemous ways. From the fact that people will, under certain conditions, say that a machine knows, believes, or wants something, it does not follow that the functions that these terms play in negotiating human social life can be automatically extended to interactions with machines. Interestingly, this point can be illustrated with Heider and Simmel’s (1944) finding that people do attribute mental states to simple animated geometric forms, which both Miller (2018) and de Graaf and Malle (2017) cite to support their claim that people may see non-living beings as having minds. While the participants in the study did behave as described, it is hardly controversial to suggest that the meanings of these ascriptions were context dependent: we can safely assume that the participants did not conceive of the shapes as having a mental life in the same way they would another person. To evaluate the potential for humans to ascribe mental states to machines, it is instructive to look closely at what people are doing when they make such ascriptions to other people.

  4. While the distinctions between knowledge and belief, or between declarative and procedural knowledge, are important for a variety of theoretical purposes, Byers et al.’s (2018) analysis identified discursive functions that are common across these categories. Except where distinctions between them are specifically relevant, they will be grouped together as knowledge claims for the purposes of the current analysis.

  5. This theory pertains to the general semiotic/discursive functioning of knowledge claims, and is not intended to exhaustively account for their function in specific instances, where they may be used in the service of various rhetorical purposes, as described by Edwards and Potter (2005, pp. 245–246).

  6. This is somewhat obscured by the representation in Figure 1, which portrays the forms of behavior subsumed under a more general knowledge claim as a finite number of distinct and discrete behavioral performances.

  7. While it is true that the range of behavior denoted by a knowledge/belief ascription is fuzzily bounded (e.g., we could not definitively say whether certain actions are or are not implied by the claim that someone knows how to drive, or in Wittgenstein’s words, “the extension of a concept is not closed by a frontier” (1953, §68)), this does not affect the point being made, since even without clear boundaries, the range of implied behavior is still heterogeneous in a way that precludes clear grouping on the basis of intrinsic characteristics.

  8. According to Byers et al. (2018), “broader competency” is taken to mean a broad disposition for competent counting-related behavior. While it is obvious that such across-the-board behavioral competence would not be expected from young children, it is still possible to make the claim as long as the “childish” nature of the participants is stressed as a way to account for these lapses. This is precisely what Gelman and her colleagues do, arguing that children are unable to express their knowledge of counting principles due to the performance demands that these forms of behavior entail.

  9. For a list of definitions, see Lai (2011). Kuhn and Dean Jr.’s (2004) definition of metacognition as “awareness and management of one’s own thought, or ‘thinking about thinking’” (p. 270) is fairly typical.

  10. While knowledge claims may be ascribed to these devices, they are trivial because the limited degrees of freedom of the device preclude virtually all of the semantic potential of the ascription (e.g., that alarm clock knows to go off at 6 am). Alternatively, as Searle (1980, p. 419) points out, such ascriptions may also be understood as indirect ascriptions to the designer of the device.

  11. It’s worth noting the parallel between the last example and Tinbergen’s (1954) studies of the inflexible, instinctive behavior of animals.

References

  • Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160.

  • Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.

  • Brooks, N., Audet, J., & Barner, D. (2013). Pragmatic inference, not semantic competence, guides 3-year-olds' interpretation of unknown number words. Developmental Psychology, 49(6), 1066–1075.

  • Byers, P. (2016). Knowledge claims in cognitive development research: Problems and alternatives. New Ideas in Psychology, 43, 16–27.

  • Byers, P., Abdulsalam, S., & Vvedenskiy, E. (2018). Knowledge claims as descriptions of dispositions: A discourse analytic study of conceptual knowledge. Journal of Theoretical & Philosophical Psychology, 38(3), 165–183.

  • Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1721-1730). ACM.

  • Cole, M., & Bruner, J. S. (1971). Cultural differences and inferences about psychological processes. American Psychologist, 26(10), 867–876.

  • Cole, M., Gay, J., Glick, J. A., & Sharp, D. W. (1971). The cultural context of thinking and learning. New York: Basic Books.

  • Davidson, K., Eng, K., & Barner, D. (2012). Does learning to count involve a semantic induction? Cognition, 123(1), 162–173.

  • De Graaf, M. M., & Malle, B. F. (2017). How people explain action (and autonomous intelligent systems should too). In 2017 AAAI Fall Symposium Series.

  • Dreyfus, H. L. (1979). What computers can't do: The limits of artificial intelligence. New York: Harper & Row.

  • Dreyfus, H. L. (1987). Misrepresenting human intelligence. In R. Born (Ed.), Artificial intelligence: The case against (pp. 41–54). Beckenham: Croom Helm.

  • Edwards, D., & Potter, J. (2005). Discursive psychology, mental states and descriptions. In H. te Molder & J. Potter (Eds.), Conversation and cognition (pp. 241–259). Cambridge: Cambridge University Press.

  • Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34(10), 906–911.

  • Fuson, K. C. (1988). Children’s counting and concepts of number. New York: Springer.

  • Gelman, R., & Meck, E. (1983). Preschoolers’ counting: Principles before skill. Cognition, 13(3), 343–359.

  • Ghorbani, A., Abid, A., & Zou, J. (2017). Interpretation of neural networks is fragile. arXiv preprint arXiv:1710.10547.

  • Gilpin, L. H., Testart, C., Fruchter, N., & Adebayo, J. (2019). Explaining explanations to society. arXiv preprint arXiv:1901.06560.

  • Glick, J. (1969). Thinking about thinking: Aspects of conceptual organization among the Kpelle of Liberia. Paper presented at the symposium on cognitive and linguistic studies, 68th annual meeting of the American Anthropological Association, New Orleans, LA.

  • Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 50–57.

  • Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. The American Journal of Psychology, 57(2), 243–259.

  • Hesslow, G. (1988). The problem of causal selection. In Contemporary science and natural explanation: Commonsense conceptions of causality (pp. 11–32).

  • Hilton, D. J. (1990). Conversational processes and causal explanation. Psychological Bulletin, 107(1), 65–81.

  • Jia, R., & Liang, P. (2017). Adversarial examples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328.

  • Kansky, K., Silver, T., Mély, D. A., Eldawy, M., Lázaro-Gredilla, M., Lou, X., ... & George, D. (2017). Schema networks: Zero-shot transfer with a generative causal model of intuitive physics. In Proceedings of the 34th International Conference on Machine Learning-Volume 70 (pp. 1809-1818). JMLR.org.

  • Kindermans, P. J., Hooker, S., Adebayo, J., Alber, M., Schütt, K. T., Dähne, S., & Kim, B. (2017). The (un)reliability of saliency methods. arXiv preprint arXiv:1711.00867. Available at https://openreview.net/pdf?id=r1Oen--RW

  • Kuhn, D., & Dean Jr., D. (2004). Metacognition: A bridge between cognitive psychology and educational practice. Theory Into Practice, 43(4), 268–273.

  • Lai, E. R. (2011). Metacognition: A literature review. Retrieved from http://images.pearsonassessments.com/images/tmrs/metacognition_literature_review_final.pdf

  • Lipton, Z. C. (2016). The mythos of model interpretability. arXiv preprint arXiv:1606.03490. https://arxiv.org/pdf/1606.03490.pdf

  • Marcus, G. (2018). Deep learning: A critical appraisal. arXiv preprint arXiv:1801.00631. https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf

  • Markman, E. M. (1977). Realizing that you don't understand: A preliminary investigation. Child Development, 986–992.

  • Marshall, A. (2018). The Uber Crash Won’t Be the Last Shocking Self-Driving Death. Wired. Retrieved from https://www.wired.com/story/uber-self-driving-crash-explanation-lidar-sensors/

  • Miller, T. (2018). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence. https://arxiv.org/pdf/1706.07269.pdf

  • Miotto, R., Li, L., Kidd, B. A., & Dudley, J. T. (2016). Deep patient: An unsupervised representation to predict the future of patients from the electronic health records. Scientific Reports, 6, 26094.

  • Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 427-436).

  • Pedreschi, D., Ruggieri, S., & Turini, F. (2008). Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 560-568). ACM.

  • Ras, G., van Gerven, M., & Haselager, P. (2018). Explanation methods in deep learning: Users, values, concerns and challenges. In Explainable and Interpretable Models in Computer Vision and Machine Learning (pp. 19–36). Springer, Cham.

  • Schoenfeld, A. (1987). What's all this fuss about metacognition? In A. Schoenfeld (Ed.), Cognitive science and mathematics education (pp. 189–216). Lawrence Erlbaum Associates.

  • Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424.

  • Tinbergen, N. (1954). Curious naturalists. New York: Basic Books.

  • Varela, F., Thompson, E., & Rosch, E. (1991). The embodied mind (Vol. 2). Cambridge: MIT Press.

  • Vinyals, O., Toshev, A., Bengio, S., & Erhan, D. (2015). Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3156-3164).

  • Wittgenstein, L. (1953). Philosophical investigations (G. E. M. Anscombe, Trans.). Oxford: Blackwell.

Author information

Corresponding author

Correspondence to Patrick Byers.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Byers, P. There Can Be no Other Reason for this Behavior: Issues in the Ascription of Knowledge to Humans and AI. Integr. psych. behav. 56, 590–608 (2022). https://doi.org/10.1007/s12124-020-09531-6
