THE PSYCHOLOGY OF CHATTING WITH ChatGPT

Dušica Filipović Đurđević

Department of Psychology, Laboratory for Experimental Psychology, Faculty of Philosophy, University of Belgrade | dusica.djurdjevic@f.bg.ac.rs

In early 2023, people around the globe became captivated by verbal interaction with a software application based on a large language model developed by the company OpenAI. Although it reached popularity only recently, ChatGPT is a state-of-the-art descendant of a long line of research in natural language processing, psycholinguistics, and machine learning. In our talks, we will first focus on the fundamental technical aspects of this model and then address two major issues pertaining to it. First, we will discuss ethical issues related to the application of chatbots, such as authorship attribution and plagiarism, but also the many useful applications in science, education, and clinical settings. Next, we will discuss the relationship of ChatGPT, and of artificial intelligence in general, to human cognition, as captured in the title that ChatGPT itself devised when asked to suggest a title for this panel discussion.

ChatGPT – TECHNOLOGICAL BACKGROUND AND APPLICATION POTENTIAL

Mlađan Jovanović

Singidunum University | mjovanovic@singidunum.ac.rs

Large language models (LLMs) and tools built on them, such as ChatGPT, have been demonstrated to be valuable in many fields. The models learn patterns as probabilities of occurrence of word sequences from massive amounts of natural language (NL) data (the training data). A sequence is represented by numerical values describing the positions of words; thus, the meaning of a word essentially depends on its position. Based on the learned probabilities, the model analyzes new (unseen) input text to recognize the intent (the matched pattern) and outputs an NL response. The input may be a question to answer or a sentence in a source language to translate into a target language. As such, the models ultimately depend on the quality of the training data. While they effectively answer a broad range of user inquiries, they also exhibit limitations, currently including flawed reasoning, factual errors, and problems of bias and fairness. These will be briefly presented and discussed.
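To make the sequence-probability idea concrete, the following minimal Python sketch estimates bigram probabilities, that is, the probability of the next word given the previous one, from a toy corpus. It is purely illustrative: the corpus and names are invented for this example, and ChatGPT itself relies on transformer networks over learned token representations rather than raw counts.

    # Toy bigram language model: learning word-sequence probabilities
    # from text by counting co-occurrences. Real LLMs such as ChatGPT
    # use neural networks over subword tokens, not raw counts.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count how often each word follows each preceding word.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word_probs(prev):
        # Estimate P(next | prev) from the bigram counts.
        total = sum(counts[prev].values())
        return {word: c / total for word, c in counts[prev].items()}

    print(next_word_probs("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
    print(next_word_probs("sat"))  # {'on': 1.0}

Even this crude model shows why output quality depends entirely on the training data: the model can only reproduce regularities that its corpus contains.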

ETHICAL QUESTIONS ARISING FROM THE USE OF AI-BASED CHATBOTS RELEVANT TO PSYCHOLOGY

Vlasta Sikimić

Cluster of Excellence – Machine Learning for Science & Hector Research Institute of Education Sciences and Psychology, University of Tübingen | vlasta.sikimic@uni-tuebingen.de

AI-based chatbots can be very helpful for both educational and scientific purposes. Among the most popular platforms based on GPT-3 are ChatGPT and Elicit. They can quickly answer complex questions, summarize material, and even propose new research topics. Moreover, talking to chatbots can also be understood as a form of interactive learning. Still, their use raises ethical concerns. Scientific journals insist that AI cannot be listed as an author of a paper, since only humans can take responsibility for the results presented in it. This is particularly important if there are flaws in the publication. Moreover, chatbots open space for a new type of plagiarism that, at least at the moment, passes automated plagiarism checkers, putting more responsibility on editors and peer reviewers. From the educational perspective, learning how to use AI-based chatbots successfully can be a valuable skill, but it is equally important to teach students to critically assess texts generated by AI, to recognize the potential biases they might carry, and to appreciate the importance of mastering a skill oneself. Though we have calculators, we still learn how to multiply. Finally, there are great risks in using arbitrary AI-based chatbots for psychotherapy. We have already witnessed Bing's AI chat repeatedly insisting on answers that were distressing for the human user. Thus, only chatbots developed specifically for emotional support, and duly tested and approved, should be used when a human therapist is not available. The principles of keeping a responsible human in the loop and of designing AI with the user in mind are necessary from an ethical perspective. Moreover, carefully monitoring, evaluating, and updating advanced digital solutions based on feedback is the only way to use them responsibly.

THE RELATION OF AI AND COGNITION

Kaja Damnjanović

Department of Psychology, Laboratory for Experimental Psychology, Institute of Philosophy, Faculty of Philosophy, University of Belgrade | kdamnjan@f.bg.ac.rs

The addition of the label "artificial" before "intelligence" has led to the stance that AI represents human-made human intelligence (HI), or even human-like intelligence with superior features. While this is true for some aspects of AI, AI does not represent human intelligence as a whole. Human intelligence is a quintessential human function and psychological construct, and as such it is nested within a broader class of higher cognitive functions, such as reasoning and problem solving. Comparing AI and HI maps the limitations of AI beyond those stemming from the technology itself. One path to this is to analyze the rationality of AI, which is extensively covered in the literature, especially through the rational-agent approach within AI. We will discuss several paradoxes and limitations concerning the possibility of modelling unboundedly rational AI agents. Although the computational power of AI is many times greater than that of HI, it is not limitless. Furthermore, computation, however complex, is neither the only nor a sufficient feature for an agent to be called intelligent. For a machine to think intelligently about limitless information and degrees of freedom, the bounded rationality approach is needed, and several solutions have been proposed for limiting the rationality of AI agents. Finally, modelling some, but not all, features of human intelligence and their complex interplay makes AI different in nature and, for the time being, a weaker system than HI. We will also discuss which other psychological features of human (or other natural) cognitive systems should be embedded in developing AI that can successfully think, reason, and solve problems; some of those features are far from fully explained, not yet reaching the implementation level in Marr's hierarchy. We conclude that AI serves as a proxy for some operations of human intelligence.