Vol. 2026 No. 1 (2026): 2026 Continuous Issue
Articles

Critical Trust in Generative AI: Suspending Disbelief and Developing Critical Friend Groups Among University Students

Timothy Mattison
University of Southern Indiana
Elizabeth Wilkins
University of Southern Indiana
Julie Conrad
University of Southern Indiana

Published 2026-03-10

Keywords

  • generative artificial intelligence,
  • AI,
  • GenAI,
  • trust,
  • critical hope,
  • critical friend groups

How to Cite

Mattison, T., Wilkins, E., & Conrad, J. (2026). Critical Trust in Generative AI: Suspending Disbelief and Developing Critical Friend Groups Among University Students. International Journal of AI in Pedagogy, Innovation, and Learning Futures, 2026(1). Retrieved from https://journals.calstate.edu/ijaipil/article/view/6967

Abstract

In a 2025 international study involving 11,706 undergraduate students, 29% overall (67% in the U.S.) reported regular use of generative artificial intelligence (GenAI) programs. This trend could be problematic: GenAI outputs often contain randomly parroted, plausible-sounding falsehoods, and students may come to overuse the technology. This manuscript proposes a new framework, critical trust, to help university students balance trust and mistrust in the validity of information in GenAI outputs. A critical trust framework could also help students make ethical decisions about how and when to use GenAI applications in their academic work. University students can employ critical trust in critical friend groups, which could be especially beneficial for students from historically marginalized communities who lack college-educated mentors, helping them avoid accusations of GenAI misuse and the loss of credibility that can result from plagiarism. The critical trust framework could also provide education researchers with a pathway to develop and test a scale that measures critical trust, ultimately determining the optimal levels of trust that minimize harms from GenAI hallucinations and inappropriate overuse.

References

  1. Abrardi, L., Cambini, C., & Pino, F. (2024). Regulating data sales: The role of data selling mechanisms. Telecommunications Policy, 48(8), 102813.
  2. Ahmad, N., Murugesan, S., & Kshetri, N. (2023). Generative artificial intelligence and the education sector. Computer, 56(6), 72–76.
  3. Ali, S., Ravi, P., Williams, R., DiPaola, D., & Breazeal, C. (2024). Constructing dreams using generative AI. In Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23268–23275.
  4. Alomary, A., & Woollard, J. (2015, November 21). How is technology accepted by users? A review of technology acceptance models and theories. In Proceedings of the IRES 17th International Conference (pp. 1–4). https://eprints.soton.ac.uk/382037/1/110-14486008271-4.pdf
  5. American Civil Liberties Union. (2025, May 1). ICE deports 3 U.S. citizen children held incommunicado prior to the deportation [Press release]. https://www.aclu.org/press-releases/ice-deports-3-u-s-citizen-children-held-incommunicado-prior-to-the-deportation
  6. Baidoo-Anu, D., & Ansah, L. O. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1), 52–62.
  7. Bandi, A., Adapa, P. V. S. R., & Kuchi, Y. E. V. P. K. (2023). The power of generative AI: A review of requirements, models, input–output formats, evaluation metrics, and challenges. Future Internet, 15(8), 260.
  8. Banh, L., & Strobel, G. (2023). Generative artificial intelligence. Electronic Markets, 33(1), 63.
  9. Bordum, A. (2004). Trust as a critical concept (Working Paper No. wp2004-004). Copenhagen Business School. https://research-api.cbs.dk/ws/portalfiles/portal/59000504/wp2004_004.pdf
  10. Bozalek, V., Leibowitz, B., Carolissen, R., & Boler, M. (Eds.). (2014). Discerning critical hope in educational practices. Routledge.
  11. Brown, T. H., Lee, H. E., Hicken, M. T., Bonilla-Silva, E., & Homan, P. (2025). Conceptualizing and measuring systemic racism. Annual Review of Public Health, 46(1), 69–90.
  12. Canaan, J. (2005). Developing a pedagogy of critical hope. Learning & Teaching in the Social Sciences, 2(3), 159–174.
  13. Chegg, Inc. (2025, January 28). Chegg Global Student Survey 2025: 80% of undergraduates worldwide have used GenAI to support their studies—but accuracy a top concern [Press release]. https://investor.chegg.com/Press-Releases/press-release-details/2025/Chegg-Global-Student-Survey-2025-80-of-Undergraduates-Worldwide-Have-Used-GenAI-to-Support-their-Studies--But-Accuracy-a-Top-Concern/default.aspx
  14. Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264–75278.
  15. Chiu, T. K. (2023). The impact of generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interactive Learning Environments. Advance online publication, 1–17.
  16. Chopra, K., & Wallace, W. A. (2003). Trust in electronic environments. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS-36). https://doi.org/10.1109/HICSS.2003.1174902
  17. Černý, M. (2022). The history of chatbots: The journey from psychological experiment to educational object. Journal of Applied Technical and Educational Sciences, 12(3), 322.
  18. Cooper, G. (2023). Examining science education in ChatGPT: An exploratory study of generative artificial intelligence. Journal of Science Education and Technology, 32(3), 444–452.
  19. Costa, A. L., & Kallick, B. (1993). Through the lens of a critical friend. Educational Leadership, 51(2), 49–51.
  20. Delgado, R., & Stefancic, J. (2023). Critical race theory: An introduction (Vol. 87). NYU Press.
  21. Duncan-Andrade, J. (2009). Note to educators: Hope required when growing roses in concrete. Harvard Educational Review, 79(2), 181–194. https://doi.org/10.17763/haer.79.2.nu3436017730384
  22. Duncan-Andrade, J. M. R. (2010). What a coach can teach a teacher: Lessons urban schools can learn from a successful sports program. Peter Lang.
  23. Dunne, F., & Honts, F. (1998, April 13–17). That group really makes me think! Critical friends groups and the development of reflective practitioners [Conference presentation]. Annual Meeting of the American Educational Research Association, San Diego, CA, United States.
  24. Elena-Bucea, A., Cruz-Jesus, F., Oliveira, T., & Coelho, P. S. (2021). Assessing the role of age, education, gender and income on the digital divide: Evidence for the European Union. Information Systems Frontiers, 23, 1007–1021.
  25. Epstein, Z., Hertzmann, A., Investigators of Human Creativity, Akten, M., Farid, H., Fjeld, J., … Smith, A. (2023). Art and the science of generative AI. Science, 380(6650), 1110–1111.
  26. Fett, A. K. J., Shergill, S. S., Gromann, P. M., Dumontheil, I., Blakemore, S.-J., Yakub, F., & Krabbendam, L. (2014). Trust and social reciprocity in adolescence—A matter of perspective-taking. Journal of Adolescence, 37(2), 175–184.
  27. Flanagan, C. A., & Stout, M. (2010). Developmental patterns of social trust between early and late adolescence: Age and school climate effects. Journal of Research on Adolescence, 20(3), 748–773.
  28. Foster-Fishman, P., & Watson, E. (2017). Understanding and promoting systems change. In M. A. Bond, I. Serrano-García, C. B. Keys, & M. Shinn (Eds.), APA handbook of community psychology: Methods for community research and action for diverse groups and issues (pp. 255–274). American Psychological Association.
  29. Freire, P. (2020). Pedagogy of the oppressed. In Toward a sociology of education (pp. 374–386). Routledge.
  30. Frey, C. B., & Osborne, M. (2023). Generative AI and the future of work: A reappraisal. Brown Journal of World Affairs, 30(1), 1–17.
  31. Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277–304.
  32. Geniusas, S. (2022). What is immersion? Towards a phenomenology of virtual reality. Journal of Phenomenological Psychology, 53(1), 1–24.
  33. Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660.
  34. Gondode, P., Duggal, S., & Mahor, V. (2024). Artificial intelligence hallucinations in anaesthesia: Causes, consequences and countermeasures. Indian Journal of Anaesthesia, 68(7), 658–661.
  35. Grain, K. (2022). Critical hope: How to grapple with complexity, lead with purpose, and cultivate transformative social change. North Atlantic Books.
  36. Grain, K. M., & Land, D. E. (2017). The social justice turn: Cultivating critical hope in an age of despair. Michigan Journal of Community Service Learning, 23(1), 1–21.
  37. Graves, J. M., Abshire, D. A., Amiri, S., & Mackelprang, J. L. (2021). Disparities in technology and broadband internet access across rurality: Implications for health and education. Family & Community Health, 44(4), 257–265.
  38. Greene, M., & Macrine, S. L. (2020). Teaching as possibility: A light in dark times. In Critical pedagogy in uncertain times: Hope and possibilities (pp. 81–94). Springer.
  39. Hardiman, M., & Dewing, J. (2014). Critical ally and critical friend: Stepping stones to facilitating practice development. International Practice Development Journal, 4(1), 3-1 – 3-19.
  40. Harris, P. L., Corriveau, K. H., Pasquini, E. S., Koenig, M., Fusaro, M., & Clément, F. (2012). Credulity and the development of selective trust in early childhood. In M. J. Beran, J. L. Brandl, J. Perner, & J. Proust (Eds.), Foundations of Metacognition (pp. 193–210). Oxford University Press.
  41. Heller, H. (1988). The advisory service and consultancy. In H. Gray (Ed.), Management consultancy in schools. Cassell.
  42. Hien, L. T., & Nga, L. P. (2024). Current status and measures to develop critical thinking for pedagogical students in using generative artificial intelligence. Vinh University Journal of Science: Educational Science and Technology, 53. https://doi.org/10.56824/vujs.2024.htkhgd180
  43. Holland, N. N. (2008). Spider-Man? Sure! The neuroscience of suspending disbelief. Interdisciplinary Science Reviews, 33(4), 312–320.
  44. Hu, Y. H. (2024). Implementing generative AI chatbots as a decision aid for enhanced values clarification exercises in online business ethics education. Educational Technology & Society, 27(3), 356–373.
  45. Hwang, G.-J., & Chen, N.-S. (2023). Exploring the potential of generative artificial intelligence in education: Applications, challenges, and future research directions. Journal of Educational Technology & Society, 26(2), 1–18.
  46. Jeffrey, R. C., & Burgess, J. P. (2006). Formal logic: Its scope and limits. Hackett.
  47. Jones, J. H. (2012). Tuskegee’s truths: Rethinking the Tuskegee syphilis study. UNC Press Books.
  48. Jones, M. K., & McNulty, C. P. (2023). Critical friend groups: The unexpected relationships among education majors during study abroad. Journal of Education and Learning, 12(4), 14–25.
  49. Kalota, F. (2024). A primer on generative artificial intelligence. Education Sciences, 14(2), 172. https://doi.org/10.3390/educsci14020172
  50. Kanders, K., Stupple-Harris, L., Smith, L., & Gibson, J. L. (2024). Perspectives on the impact of generative AI on early-childhood development and education. Infant and Child Development, 33(4), e2514.
  51. Kaplan-Rakowski, R., Grotewold, K., Hartwick, P., & Papin, K. (2023). Generative AI and teachers’ perspectives on its implementation in education. Journal of Interactive Learning Research, 34(2), 313–338.
  52. Kauvar, G. B. (1969). Coleridge, Hawkesworth, and the willing suspension of disbelief. Papers on Language and Literature, 5(1), 91–94.
  53. Kazepides, T. (2012). Education as dialogue. Educational Philosophy and Theory, 44(9), 913–925.
  54. Ke, T. T., & Sudhir, K. (2023). Privacy rights and data security: GDPR and personal data markets. Management Science, 69(8), 4389–4412.
  55. Kim, S., Park, C., Jeon, G., Kim, S., & Kim, J. H. (2025). Automated audit and self-correction algorithm for seg-hallucination using MeshCNN-based on-demand generative AI. Bioengineering, 12(1), 81.
  56. Lamdan, S. (2022). Data cartels: The companies that control and monopolize our information. Stanford University Press.
  57. Lee, N. C., Jolles, J., & Krabbendam, L. (2016). Social information influences trust behaviour in adolescents. Journal of Adolescence, 46, 66–75.
  58. Leighton, G. R., Hugo, P. S., Roulin, A., & Amar, A. (2016). Just Google it: Assessing the use of Google Images to describe geographical variation in visible traits of organisms. Methods in Ecology and Evolution, 7(9), 1060–1070.
  59. Li, Y., Spoer, B. R., Lampe, T. M., Hsieh, P.-Y., Nelson, I. S., Vierse, A., … Gourevitch, M. N. (2023). Racial/ethnic and income disparities in neighborhood-level broadband access in 905 US cities, 2017–2021. Public Health, 217, 205–211.
  60. Luan, H., & Tsai, C.-C. (2021). A review of using machine learning approaches for precision education. Educational Technology & Society, 24(1), 250–266.
  61. Macrine, S. L. (Ed.). (2009). Critical pedagogy in uncertain times: Hope and possibilities. Palgrave Macmillan.
  62. Maleki, N., Padmanabhan, B., & Dutta, K. (2024). AI hallucinations: A misnomer worth clarifying. In 2024 IEEE Conference on Artificial Intelligence (CAI) (pp. 133–138). IEEE.
  63. Markson, L., & Luo, Y. (2020). Trust in early childhood. In Janette B. Benson (Ed.), Advances in Child Development and Behavior (Vol. 58, pp. 137–162). Academic Press/Elsevier.
  64. McDonald, J. P., Mohr, N., Dichter, A., & McDonald, E. C. (2015). The power of protocols: An educator’s guide to better practice. Teachers College Press.
  65. Megahed, F. M., Chen, Y. J., Ferris, J. A., Knoth, S., & Jones-Farmer, L. A. (2024). How generative AI models such as ChatGPT can be (mis)used in SPC practice, education, and research? An exploratory study. Quality Engineering, 36(2), 287–315.
  66. Meylan, A. (2024). What do we do when we suspend judgement? Philosophical Issues, 34(1), 253–270.
  67. Mollick, E. (2022, December 14). ChatGPT is a tipping point for AI. Harvard Business Review. https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai
  68. Ouyang, F., & Jiao, P. (2021). Artificial intelligence in education: The three paradigms. Computers and Education: Artificial Intelligence, 2, 100020.
  69. Pack, A., & Maloney, J. (2023). Potential affordances of generative AI in language education: Demonstrations and an evaluative framework. Teaching English with Technology, 23(2), 4–24.
  70. Power, P., & Corchnoy, S. (2024). The world of gaming and Jungian analysis. Psychological Perspectives, 67(1–2), 15–35.
  71. Rashel, M. M., Khandakar, S., Hossain, K., Shahid, A., Kawabata, T., Batool, W., … Rafique, T. (2024). AI in education: Unveiling the merits and applications of ChatGPT for effective teaching environments. Revista de Gestão Social e Ambiental, 18(10), e09110.
  72. Roveda, L., Magni, M., Cantoni, M., Piga, D., & Bucca, G. (2021). Human–robot collaboration in sensorless assembly task learning enhanced by uncertainties adaptation via Bayesian optimization. Robotics and Autonomous Systems, 136, 103711.
  73. Sampat, B., Mogaji, E., & Nguyen, N. P. (2024). The dark side of FinTech in financial services: A qualitative enquiry into FinTech developers’ perspective. International Journal of Bank Marketing, 42(1), 38–65.
  74. Sannon, S., & Forte, A. (2022). Privacy research with marginalized groups: What we know, what’s needed, and what’s next. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), 1–33.
  75. Senge, P. (1990). Peter Senge and the learning organization. Dimension, 14. [Issue/pages needed]
  76. Siontis, K. C., Attia, Z. I., Asirvatham, S. J., & Friedman, P. A. (2024). ChatGPT hallucinating: Can it get any more humanlike? European Heart Journal, 45(5), 321–323. https://doi.org/10.1093/eurheartj/ehad766
  77. Sira, M. (2023). Generative AI takes centre stage: Revolutionizing productivity and reshaping industries. System Safety: Human-Technical Facility-Environment, 5(1), 57–65.
  78. Storey, V. A., & Richard, B. (2015). The role of critical friends in supporting institutional change: Developing a collaborative environment. Journal of Applied Research in Higher Education, 7(2), 412–428.
  79. Storey, V. A., & Wang, V. C. (2017). Critical friends protocol: Andragogy and learning in a graduate classroom. Adult Learning, 28(3), 107–114.
  80. Su, J., & Yang, W. (2023). Unlocking the power of ChatGPT: A framework for applying generative AI in education. ECNU Review of Education, 6(3), 355–366.
  81. Taylor, S. N. (2014). Student self-assessment and multisource feedback assessment: Exploring benefits, limitations, and remedies. Journal of Management Education, 38(3), 359–383.
  82. van den Berg, G. (2024). Generative AI and educators: Partnering in using open digital content for transforming education. Open Praxis, 16(2), 130–141.
  83. van de Groep, S., Zanolie, K., Green, K. H., Sweijen, S. W., & Crone, E. A. (2020). A daily diary study on adolescents’ mood, empathy, and prosocial behavior during the COVID-19 pandemic. PLOS ONE, 15(10), e0240349.
  84. Wagner, V. (2024). Committing to indecision: A taxonomy of suspension of judgment. Routledge.
  85. Walter, Y. (2024). Embracing the future of artificial intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1), 15.
  86. Wu, Y., Jiang, A. Q., Li, W., Rabe, M., Staats, C., Jamnik, M., & Szegedy, C. (2022). Autoformalization with large language models. Advances in Neural Information Processing Systems, 35, 32353–32368.
  87. Yu, H., & Guo, Y. (2023). Generative artificial intelligence empowers educational reform: Current status, issues, and prospects. In Frontiers in Education (Vol. 8, Article 1183162). Frontiers Media SA.