For the last several years, people all over the world have been exposed to enthusiastic prognostications of how artificial intelligence (AI) has changed and will continue to change the ways we live and work. Over the last six months, I’ve embarked on the Sisyphean task of trying to make sense of information on AI. I’ve continued to add to the sticky notes of “items to look at” on my desk. The list keeps growing, and my learning trajectory keeps veering into side alleys as I explore here and there with no discernible destination in sight. For instance, here’s my initial list…
- Litmaps
- Jasper
- Perplexity
- Rytr
- Writesonic
- Humata
- Citation Gecko
- Research Rabbit
- Story.ai
- DeepL
- Speechki
- Listnr.ai
- Murf
- Pi.ai
- Claude
- Jenni.ai
The sticky notes have morphed into documents scattered in various folders on my desktop… Yes, there’s more!
- Amazon Polly
- IBM Watson Text to Speech
- Lovo.ai
- NaturalReader
- Beautiful.ai
- Designs.ai
- Tome
- Zoho Show
- Prolific
- Cassettai
- Invideo
- Nakkara
- Scite.ai
- Semantic Scholar
- Consensus
- Elicit
- Trinka
- Paperpal
- Lateral.ai
I’ve read articles and attended workshops, webinars, and conference presentations. I’ve asked ChatGPT to locate information for me. I’ve looked to see how it handles the writing assignments in my syllabi. I’ve prompted it to summarize an interview (with themes). I’ve learned some new words (“grokking,” for one). I’ve tried out various protocols for writing prompts. I’ve reviewed policies on the use of AI generative tools and thought about how to revise my syllabi. I have been astounded by the subscription prices for tools. What’s to be made of it all? Here are seven things I’ve learned. And those of you who are informed on this topic, please add a comment and teach us!
Become informed
It’s clear that early adopters are making use of these tools in creative and sometimes awe-inspiring ways. There is no question that AI generative tools are rapidly changing how we live and work. And for those of us who write, teach, and conduct qualitative research, these tools provide an assortment of ways to enhance our work. I’m enjoying one book recommended to me by a colleague: Ethan Mollick’s Co-Intelligence: Living and Working with AI. And if you’d rather read about the roots of AI, read the novel The MANIAC by Benjamin Labatut.
Follow debates
We’ve all read and heard a lot about how students are using generative AI to “cheat.” Or you might have met someone, as I did these past few weeks, who submitted their own writing to an AI detector, only to be told that it was AI generated. (It was not.) Questions about what counts as “plagiarism” are muddied by the use of AI generative tools. And how might one know whether a text was generated by AI? So if you are using these tools in your writing, be sure to document your procedures. Eaton’s (2023) article on postplagiarism provides many useful questions for you and your students to consider as you experiment.
Expect a learning curve
There is always a learning curve with new tools. Researchers found that learning to use Qualitative Data Analysis Software follows a “U”-shaped curve; that is, things get decidedly worse before they get better. AI generative tools promise to be time savers, right? But only if you know how to use them wisely. I’ve found that takes time.
Understand policies on the use of AI generative tools
My institution does not allow graduate students to use AI generative tools in dissertation writing without specific authorization from advisory committee members. Do you know what your institution’s policy is? Journals across the world have also added policies on the use of AI generative tools to their websites. The Committee on Publication Ethics has published guidance and discussed the ethical questions that have arisen. Before you use AI generative tools in writing and publishing, be sure to examine the policies related to their use in whatever venues you intend to publish the outputs.
Learn from your students
Along with me, my students have expressed both trepidation and excitement in using AI. I keep learning alongside them as we experiment together.
Be skeptical
Not all new tools are good tools. And even good tools can be used irresponsibly. I’ve been taken in by tools and gadgets that have turned out to be duds (e.g., Rabbit). I’ve been taken aback by the cost of subscriptions to the tools I’ve explored. Start-ups all over the world want your cash. Be slow to add your credit card number. Messeri and Crockett’s (2024) article will have you pondering how to use AI tools responsibly. Here’s a teaser from their article: “The proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less.” Ouch!
Review the user licenses
I’ve been guilty of clicking the “agree” button without reading user licenses. Recent news articles, however, have reported how large companies have been vacuuming up the data we share on social media platforms to train their tools. Be aware of the ways in which the data you input will be used by others.
Just as I did not know where Web 2.0 would go in the early 2000s, I cannot know where AI generative tools will go; their development exhibits both promise and peril. I hope that as qualitative researchers we can harness these tools for good, using them to enhance the quality of our work and manage the many demands on our time. I’ll keep learning. What about you?
Kathy Roulston
NB: This blogpost was written without the aid of AI-generative tools. It took me 77 minutes to write it, and a lot longer to edit and post it. If you would like to read the post generated by ChatGPT 4.0, download it below. It took less than one minute to generate the text and a few seconds to post. I did not edit it. The not very creative prompt I used was: “Writing as a professor of qualitative research, write a 600 word blogpost on the promises and perils of using AI generative tools in qualitative research. Include 6 tips for readers.”
References
Eaton, S. E. (2023). Postplagiarism: transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19(1). https://doi.org/10.1007/s40979-023-00144-1
Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627(8002), 49-58. https://doi.org/10.1038/s41586-024-07146-0