Can AI Language Models Improve Research in the Human Sciences? A Phenomenological Analysis and Future Directions

Authors

  • Marika D'Oria, Scientific Directorate, Fondazione Policlinico Universitario A. Gemelli IRCCS. ORCID: https://orcid.org/0000-0003-4253-8223

DOI:

https://doi.org/10.6092/issn.1825-8670/16554

Keywords:

Human-Computer Interaction, Natural Language Processing, Artificial Intelligence, Human Sciences, Methodology

Abstract

The article explores the use of the artificial intelligence language model “ChatGPT” in the field of the human sciences. ChatGPT uses natural language processing (NLP) techniques to imitate human language and hold artificial conversations. Although the platform has drawn the attention of the scientific community, opinions on its use are divided. The article presents a series of conversations with ChatGPT that examine the ethical, relational, and linguistic issues involved in human-computer interaction (HCI) and assess the platform's potential for research in the Human Sciences. Interacting with the platform recalls the “uncanny valley” phenomenon already familiar from Social Robotics. While ChatGPT can be useful, it requires adequate supervision and verification of its outputs. New qualitative, quantitative, and mixed research methods also need to be developed.
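To make the kind of interaction discussed here concrete, the following minimal sketch shows how a scripted conversation with a model of this family can be set up programmatically. It assumes the OpenAI Python client and a configured API key; the model name, prompts, and variable names are illustrative placeholders, not details taken from the article.

    # Minimal sketch of a scripted conversation with a language model.
    # Assumes the OpenAI Python client (openai >= 1.0) is installed and
    # an API key is available in the OPENAI_API_KEY environment variable.
    # Model name and prompts are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    history = [
        {"role": "system", "content": "You are a conversational assistant."},
        {"role": "user", "content": "How would you define empathy?"},
    ]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=history,
    )

    reply = response.choices[0].message.content
    print(reply)

    # Carrying `history` forward (appending each reply before the next
    # request) is what turns isolated completions into the ongoing
    # "conversation" the abstract refers to.
    history.append({"role": "assistant", "content": reply})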

References

Alavi, H. S., & Lalanne, D. (2020). Interfacing AI with Social Sciences: The Call for a New Research Focus in HCI. In F. Loizides, M. Winckler, U. Chatterjee, J. Abdelnour-Nocera, & A. Parmaxi (Eds.), Human Computer Interaction and Emerging Technologies: Adjunct Proceedings from the INTERACT 2019 Workshops (pp. 197–202). Cardiff: Cardiff University Press.

Bateson, G. (1972). Steps to an Ecology of Mind. Chicago: University of Chicago Press.

Bateson, G., Jackson, D., Haley, J., & Weakland, J. (1956). Toward a Theory of Schizophrenia. Behavioral Science, 1, 251–264.

Boella, L. (2017). Grammatica del sentire. Compassione, simpatia, empatia. Milano: CUEM.

Bradford, D. K., Ireland, D., McDonald, J., Tan, T., Hatfield-White, E., Regan, T., et al. (2020). ‘Hear’ to Help Chatbot. Co-development of a Chatbot to Facilitate Participation in Tertiary Education for Students on the Autism Spectrum and those with Related Conditions. Final Report. Brisbane: Cooperative Research Centre for Living with Autism. Retrieved June 18, 2023 from https://www.autismcrc.com.au/sites/default/files/reports/3-062_Hear-to-Help-Chatbot_Final-Report.pdf

Cave, S., & Dihal, K. (2019). Hopes and Fears for Intelligent Machines in Fiction and Reality. Nature Machine Intelligence, 1, 74–78. https://doi.org/10.1038/s42256-019-0020-9

ChatGPT Generative Pre-trained Transformer, & Zhavoronkov, A. (2022). Rapamycin in the Context of Pascal’s Wager: Generative Pre-trained Transformer Perspective. Oncoscience, 9, 82–84. https://doi.org/10.18632/oncoscience.571

Cheetham, M., Suter, P., & Jäncke, L. (2014). Perceptual Discrimination Difficulty and Familiarity in the Uncanny Valley: More Like a “Happy Valley”. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.01219

Crutzen, R., Peters, G., Portugal, S., Fisser, E., & Grolleman, J. (2011). An Artificially Intelligent Chat Agent That Answers Adolescents’ Questions Related to Sex, Drugs, and Alcohol: An Exploratory Study. Journal of Adolescent Health, 48(5), 514–519. https://doi.org/10.1016/j.jadohealth.2010.09.002

de Arriba-Pérez, F., García-Méndez, S., González-Castaño, F. J., & Costa-Montenegro, E. (2022). Automatic Detection of Cognitive Impairment in Elderly People Using an Entertainment Chatbot with Natural Language Processing Capabilities. Journal of Ambient Intelligence and Humanized Computing. https://doi.org/10.1007/s12652-022-03849-2

Fiske, A., Henningsen, P., & Buyx, A. (2019). Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology and Psychotherapy. Journal of Medical Internet Research, 21(5). https://doi.org/10.2196/13216

Giorgi, A. (1970). Psychology as a Human Science. New York: Harper & Row.

Gurcan, F., Cagiltay, N., & Cagiltay, K. (2021). Mapping Human-Computer Interaction Research Themes and Trends from Its Existence to Today: A Topic Modeling-Based Review of past 60 Years. International Journal of Human-Computer Interaction, 37(3), 267–280. https://doi.org/10.1080/10447318.2020.1819668

Hertz, N., & Wiese, E. (2019). Good Advice Is Beyond All Price, but What If It Comes from a Machine? Journal of Experimental Psychology: Applied, 25(3), 386–395. https://doi.org/10.1037/xap0000205

Infarinato, F., Jansen-Kosterink, S., Romano, P., van Velsen, L., Op den Akker, H., Rizza, F., et al. (2020). Acceptance and Potential Impact of the eWALL Platform for Health Monitoring and Promotion in Persons with a Chronic Disease or Age-Related Impairment. International Journal of Environmental Research and Public Health, 17(21). https://doi.org/10.3390/ijerph17217893

Kyriazakos, S., Schlieter, H., Gand, K., Caprino, M., Corbo, M., Tropea, P., et al. (2020). A Novel Virtual Coaching System Based on Personalized Clinical Pathways for Rehabilitation of Older Adults – Requirements and Implementation Plan of the vCare Project. Frontiers in Digital Health, 2. https://doi.org/10.3389/fdgth.2020.546562

Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. Chicago: University of Chicago Press.

Lord, S. P., Sheng, E., Imel, Z. E., Baer, J., & Atkins, D. C. (2015). More Than Reflections: Empathy in Motivational Interviewing Includes Language Style Synchrony Between Therapist and Client. Behavior Therapy, 46(3), 296–303. https://doi.org/10.1016/j.beth.2014.11.002

Maxmen, A. (2018). Self-driving Car Dilemmas Reveal that Moral Choices are not Universal. Nature, 562, 469–470. https://doi.org/10.1038/d41586-018-07135-0

Merleau-Ponty, M. (1945). Phénoménologie de la perception. Paris: Gallimard.

Ong, D. C., Zaki, J., & Goodman, N. D. (2019). Computational Models of Emotion Inference in Theory of Mind: A Review and Roadmap. Topics in Cognitive Science, 11(2), 338–357. https://doi.org/10.1111/tops.12371

OpenAI (2023). Introducing ChatGPT. Retrieved March 6, 2023 from https://openai.com/blog/chatgpt

Osborne, J. W. (2011). Some Basic Existential-Phenomenological Research Methodology for Counsellors. Canadian Journal of Counselling and Psychotherapy, 24(2), 79–91.

Qasem, F. (2023). ChatGPT in Scientific and Academic Research: Fears and Reassurances. Library Hi Tech News, 40(3), 30–32. https://doi.org/10.1108/LHTN-03-2023-0043

Reddit (2022). ChatGPT, Tell Me a Joke About Men & Women. Retrieved March 6, 2023 from https://www.reddit.com/r/ChatGPT/comments/znxz38/tell_me_a_joke_about_men_women/

Riegelsberger, J., Sasse, M. A., & McCarthy, J. D. (2005). The Mechanics of Trust: A Framework for Research and Design. International Journal of Human-Computer Studies, 62(3), 381–422. https://doi.org/10.1016/j.ijhcs.2005.01.001

Rizzolatti, G., & Sinigaglia, C. (2008). Mirrors in the Brain. How Our Minds Share Actions, Emotions, and Experience. Oxford: Oxford University Press.

Sardareh, S. A., Brown, G. T. L., & Denny, P. (2021). Comparing Four Contemporary Statistical Software Tools for Introductory Data Science and Statistics in the Social Sciences. Teaching Statistics, 43(S1), S157–S172. https://doi.org/10.1111/test.12274

Singh, H., Bhangare, A., Singh, R., Zope, S., & Saindane, P. (2023). Chatbots: A Survey of the Technology. In J. Hemanth, D. Pelusi, & J. Chen (Eds.), Intelligent Cyber Physical Systems and Internet of Things (pp. 671–691). Cham: Springer.

Singh, S. (2021). Racial Biases in Healthcare: Examining the Contributions of Point of Care Tools and Unintended Practitioner Bias to Patient Treatment and Diagnosis. Health, Online First, December 7. https://doi.org/10.1177/13634593211061215

Spicer, J., & Sanborn, A. N. (2019). What Does the Mind Learn? A Comparison of Human and Machine Learning Representations. Current Opinion in Neurobiology, 55, 97–102. https://doi.org/10.1016/j.conb.2019.02.004

Stein, E. (1989). On the Problem of Empathy (3rd revised ed.). Washington, DC: ICS Publications.

Stokel-Walker, C. (2023). ChatGPT Listed as Author on Research Papers: Many Scientists Disapprove. Nature, 613, 620–621. https://doi.org/10.1038/d41586-023-00107-z

Van Manen, M. (2016). Researching Lived Experience: Human Science for an Action Sensitive Pedagogy (2nd ed.). Abingdon: Routledge.

Wankhade, M., Rao, A. C. S., & Kulkarni, C. (2022). A Survey on Sentiment Analysis Methods, Applications, and Challenges. Artificial Intelligence Review, 55, 5731–5780. https://doi.org/10.1007/s10462-022-10144-1

Watzlawick, P. (1983). The Situation Is Hopeless, But Not Serious: The Pursuit of Unhappiness. New York: W. W. Norton & Company.

Wiese, E., & Weis, P. (2020). It Matters to Me if You Are Human – Examining Categorical Perception in Human and Nonhuman Agents. International Journal of Human-Computer Studies, 133, 1–12. https://doi.org/10.1016/j.ijhcs.2019.08.002

Zahour, O., Benlahmar, E. H., Eddaoui, A., Ouchra, H., & Hourrane, O. (2020). A System for Educational and Vocational Guidance in Morocco: Chatbot E-Orientation. Procedia Computer Science, 175, 554–559. https://doi.org/10.1016/j.procs.2020.07.079

Zou, J., & Schiebinger, L. (2018). AI Can be Sexist and Racist – It’s Time to Make it Fair. Nature, 559(7714), 324–326. https://doi.org/10.1038/d41586-018-05707-8

Published

2023-08-30

How to Cite

D’Oria, M. (2023). I modelli linguistici di IA possono migliorare la ricerca nelle Scienze Umane? Analisi fenomenologica e direzioni future. Encyclopaideia, 27(66), 77–92. https://doi.org/10.6092/issn.1825-8670/16554

Issue

Vol. 27 No. 66 (2023)

Section

Essays