Generative Artificial Intelligence as a Collaborative Thought Partner: Rethinking Cognitive Partnership, Creativity, and Knowledge Work in the Age of Intelligent Systems
Published 2026-03-11
Keywords
- Cognitive partnership
- Distributed cognition
- Ethical AI use
- Generative artificial intelligence
- Human–AI collaboration
- Hybrid intelligence
Copyright (c) 2026 Andrea M. Wilson, Cheryl Burleigh

This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
As generative artificial intelligence (GenAI) systems become increasingly integrated into scholarly, creative, and professional workflows, there is an urgent need to reconceptualize their role, not simply as tools or assistants, but as collaborative thought partners. This scholarly essay proposes a conceptual framework for understanding GenAI as a partner in cognition and creativity. The framework is grounded in sociocultural and distributed cognition theories and situated within the context of knowledge work. We review literature on human–AI collaboration, human–machine co-creation, and the ethical and epistemic implications of GenAI. The discussion rests on three core arguments. First, GenAI can serve as a cognitive companion beyond mere automation; second, it has the potential to enhance ideation and research productivity when designed as a partner rather than a one-way assistant; and third, it presents significant limitations and risks—including epistemic dependence, bias, and ownership dilemmas—that must be addressed. We propose a framework for responsible human–GenAI thought partnership that outlines the roles of humans and AI, design conditions, and principles of high-quality collaboration. Implications for higher education, professional knowledge work, research practice, and AI governance are also explored. We recommend future empirical lines of inquiry and call for a balanced, evidence-based approach to integrating GenAI into cognitive and creative workflows.
References
- Albayati, H. (2024). Investigating undergraduate students' perceptions and awareness of using ChatGPT as a regular assistance tool: A user acceptance perspective study. Computers and Education: Artificial Intelligence, 6, 100203. https://doi.org/10.1016/j.caeai.2024.100203
- Amofa, B., Kamudyariwa, X. B., Fernandes, F. A. P., Osobajo, O. A., Jeremiah, F., & Oke, A. (2025). Navigating the complexity of generative artificial intelligence in higher education: A systematic literature review. Education Sciences, 15(7), 826. https://doi.org/10.3390/educsci15070826
- Basgen, B. (2025, April 9). AI as a thought partner in higher education. EDUCAUSE Review. https://er.educause.edu/articles/2025/4/ai-as-a-thought-partner-in-higher-education
- Bell, F. (2011). Connectivism: Its place in theory-informed research and innovation in technology-enabled learning. International Review of Research in Open and Distributed Learning, 12(3), 98–118. https://files.eric.ed.gov/fulltext/EJ920745.pdf
- Burleigh, C., & Wilson, A. M. (2024). Generative AI: Is authentic qualitative research data collection possible? Journal of Educational Technology Systems, 53(2), 89–115. https://doi.org/10.1177/00472395241270278
- Burleigh, C., & Wilson, A. M. (2026). Automating academia: Implications of GenAI use in doctoral research and online mentoring. Journal of Online Mentoring, 1, 1–40. https://doi.org/10.5590/JOM.2026.1.1000
- Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid intelligence. Business & Information Systems Engineering, 61(5), 637–643. https://doi.org/10.1007/s12599-019-00595-2
- Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E., Jeyaraj, A., Kar, A., Baabdullah, A., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M., Al-Busaidi, A., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., Carter, L., & Wright, R. (2023). So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
- Engeström, Y. (2014). Learning by expanding: An activity-theoretical approach to developmental research (2nd ed.). Cambridge University Press.
- Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of the learning algorithm. Information and Organization, 28(1), 62–70. https://doi.org/10.1016/j.infoandorg.2018.02.005
- Floridi, L. (2019). Translating principles into practice of digital ethics: Five risks of being unethical. Philosophy & Technology, 32, 185–193. https://doi.org/10.1007/s13347-019-00354-x
- Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
- Grawitch, M. (2025). When AI acts human–but lacks humanity. Psychology Today. https://www.psychologytoday.com/us/blog/a-hovercraft-full-of-eels/202510/when-ai-acts-human-but-lacks-humanity
- Gündöcs, D., Horvath, S., & Dörfler, V. (2025). Uncovering the dynamics of human–AI hybrid performance: A qualitative meta-analysis of empirical studies. International Journal of Human-Computer Studies, 205, 103622. https://doi.org/10.1016/j.ijhcs.2025.103622
- Hadwin, A. F., Järvelä, S., & Miller, M. (2018). Self-regulation, co-regulation, and shared regulation in collaborative learning environments. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed., pp. 83–106). Routledge.
- Hao, C., Uusitalo, S., Figueroa, C., Smit, Q. T. S., Strange, M., Chang, W. T., Ribeiro, M. I., Kouomogne Nana, V., Tielman, M. L., & de Boer, M. H. T. (2025). A human-centered perspective on research challenges for hybrid human artificial intelligence in lifestyle and behavior change support. Frontiers in Digital Health, 7, 1544185. https://doi.org/10.3389/fdgth.2025.1544185
- Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new foundation for human–computer interaction research. ACM Transactions on Computer-Human Interaction, 7(2), 174–196. https://doi.org/10.1145/353485.353487
- Hong, T. T. M., Tung, N. T. T., & Thanh, N. T. P. (2025). Mapping artificial intelligence research in higher education toward sustainable development. Discover Sustainability, 6, 1240. https://doi.org/10.1007/s43621-025-02162-0
- Hutchins, E. (1995). Cognition in the wild. Bradford Books.
- Järvelä, S., Nguyen, A., & Hadwin, A. (2023). Human and artificial intelligence collaboration for socially shared regulation in learning. British Journal of Educational Technology, 54, 1057–1076. https://doi.org/10.1111/bjet.13325
- Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., Stadler, M., & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274
- Kong, X., Fang, H., Chen, W., Xiao, J., & Zhang, M. (2025). Examining human–AI collaboration in hybrid intelligence learning environments: Insights from the Synergy Degree Model. Humanities and Social Sciences Communications, 12, Article 821. https://doi.org/10.1057/s41599-025-05097-z
- Liu, W., & Ling, J. (2025). Mapping the landscape of AI research in higher education. International Journal of e-Collaboration, 21(1), Article 42. https://doi.org/10.4018/IJeC.394818
- Liu, Y., Yang, Y., & Xu, H. (2025). From humans to AI: Understanding why AI is perceived as the preferred co-creation partner. Frontiers in Psychology, 16, 1695532. https://doi.org/10.3389/fpsyg.2025.1695532
- Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson.
- Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Basic Books.
- Rogoff, B. (1995). Observing sociocultural activity on three planes: Participatory appropriation, guided participation, and apprenticeship. In J. V. Wertsch, P. del Río, & A. Álvarez (Eds.), Sociocultural studies of mind (pp. 139–164). Cambridge University Press.
- Rupprecht, P., & Mayrhofer, W. (2024). Hybrid intelligence: An approach towards the symbiosis of artificial and human creativity and interaction in the design and innovation process in SMEs. Creativity, Innovation and Entrepreneurship, 125, 38–42. https://doi.org/10.54941/ahfe1004718
- Schön, D. A. (1984). The reflective practitioner: How professionals think in action. Basic Books.
- Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504. https://doi.org/10.1080/10447318.2020.1741118
- Sidra, S., & Mason, C. (2025). Generative AI in human–AI collaboration: Validation of the collaborative AI literacy and collaborative AI metacognition scales for effective use. International Journal of Human–Computer Interaction, 1–25. https://doi.org/10.1080/10447318.2025.2543997
- Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1), 3–10. https://www.itdl.org/Journal/Jan_05/article01.htm
- UNESCO. (2021). Recommendation on the ethics of artificial intelligence. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000381137
- Vaccaro, M., Davis, G., & Malone, T. W. (2024). When combinations of humans and AI are useful. Nature Human Behaviour, 8, 2293–2303. https://doi.org/10.1038/s41562-024-02024-1
- Vygotsky, L. S., Cole, M., John-Steiner, V., Scribner, S., & Souberman, E. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
- Wertsch, J. V. (1998). Mind as action. Oxford University Press.
- Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235. https://doi.org/10.1080/17439884.2020.1798995
- Wilson, A., & Burleigh, C. (2025). Research integrity in the era of generative artificial intelligence. Journal of Educational Research and Practice, 15, 1–16. https://doi.org/10.5590/JERAP.2025.15.2054
- Woolley, A. W. (2025). Generative AI and collaboration: Opportunities for collective intelligence. Journal of Organization Design, 1–6. https://doi.org/10.1007/s41469-025-00199-z
- Xu, W., Gao, Z., & Ge, L. (2022). New research paradigms and agenda of human factors science in the intelligence era. arXiv. https://doi.org/10.48550/arXiv.2208.12396
- Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), Article 39. https://doi.org/10.1186/s41239-019-0171-0
