Hello, and welcome back to another episode of Coloring Outside the Memos. I'm doctor Lizzie. I'm doctor Tiffany. Oh, just got so excited, doctor Lizzie. I know. I know. We are chomping at the bit to get this episode done. So before we get started, if you have any questions, comments, or concerns, you can always email us at c0tmpod@gmail.com. You can also check out our website at c0tmpod.com, where we post updated episodes, transcripts, and lots of other goodies. And today, we are talking about artificial intelligence, everyone's favorite topic right now, or at least it certainly is in all of the teaching newsletters I get. So, doctor Tiffany, can you tell us a little bit more about the article you found for us for this journal club? Yep. So the name of the article is "How to Use Artificial Intelligence (AI) as a Resource, Methodological and Analysis Tool in Qualitative Research." That is a mouthful. So, thank you, Prokopis Christou. I hope I'm not butchering your name, but, I mean, we looked it up. We looked it up. Prokopis is from Cyprus University of Technology in Cyprus, and so really excited to talk about this. This article can be found in The Qualitative Report, 2023, volume 28, number 7. So oh, just wanna make sure, and we'll put that information in the show notes, but wanted to, like, share where this article is coming from. So I think, you know, as we begin talking about this, it's like, what are some of the findings that I thought were interesting in this article? And something that resonated with me was the amount of time that qualitative researchers use GPT. So if anybody isn't familiar with what GPT is, it's generative pre-trained transformers, and how they are or can be used in the academic community, such as to produce, translate, summarize, and analyze information. So I go ahead. Go ahead. No. Go ahead.
I think before we go too much further, I wanna just take a pause and, like, let's talk about what those GPTs are, because we love our acronyms as every field does. And we've talked about our QDAs, our qualitative data analysis software, so ATLAS.ti, MAXQDA, Dedoose, NVivo. I don't remember all of the other ones we talked about, but there are some others. Mhmm. Taguette. Thank you. And so GPTs are not those, but some of those include GPTs within them. But GPTs can also be things like ChatGPT, or Gemini, or what are some of the other big ones? Copilot, on Microsoft. Uh-huh. And even Siri on Apple, right, is technically an AI, or Alexa on Amazon is technically an AI. Oh, don't say that too loud because it might go off in my house. You might end up ordering stuff. So there are lots of them that have been in our world for a hot minute. Mhmm. But it seems to me like they've really exploded in the last couple years, particularly since ChatGPT has become a thing, but it is not the only one. And I like to remind people of that, that there are a lot of them floating around. Yeah. Definitely. Definitely. Sorry. I just got so excited talking about that. And so No. It's okay. I just I think, like, because GPT sounds scary. Right? Or, like, there's been so much information or disinformation or misinformation around it. Mhmm. Like, and on some level, even, like, spell check is a GPT, right, that has been around forever. Yeah. It's been around before Grammarly. Right? Right. Right. And, like, for the olds that are listening, Clippy was a GPT in some ways. Right? We all miss Clippy. But, like, I think it's like, it is new, and it is a new level of scary, but it's also old. And so, like, don't be terribly afraid by, like, some of this language. Not all of it is scary and new. Yeah. No. You're absolutely right. I think for me, what has been scary in using it is, is it going to give me the full picture that I need? And the answer is no.
Like, you know but we'll get down to that, you know, later on in this episode. Going back to, like, what is a GPT, this article says that generative pre-trained transformers, GPT, for example, are types of deep learning models that are increasingly being used by the qualitative research community for various reasons, such as to produce oh, sorry. I'm talking about produce, like apples. Vegetables. To produce. Oops. Produce, not produce. Translate, summarize, and analyze information. So I think, you know, and as you were explaining to me earlier, when we say, like, these not GPT. Was it learning models, deep learning models? It's more of a science kind of tech Computer science. Computer science terminology, versus something that maybe some of us who are hardcore in qualitative research may not be familiar with. Which I was not familiar with. Yeah. Yeah. Well, and we've talked about machine learning for years. Mhmm. And, like, that's the same kind of process that these GPTs are using. Mhmm. Mhmm. Yeah. So I think from here, I definitely wanna address the elephant in the room, which is, like, concerns of bias and ethics in using GPT. Yeah. So, I mean, what are your thoughts on the biases that can come out when using a GPT? Well, I think we need to talk about how people are using them. Right? Because I think the big thing I hear right now around my university, or even with my students, or with other faculty who are complaining about students, or on all of the newsletters talking about AI, like, the biggest thing I'm hearing is, it's more than ethics or bias. It's the concern of how people are using it and that they're not doing the work. And there's something, particularly in higher ed, that we prioritize more than anything else: not only being productive, but that you did the hard work to be productive. Right? Yeah. And I think it's because of the reputation of higher ed in the larger world.
And when I say that, I'll give you an example of what my dad said when I told him I was getting my PhD. And he said to me, will that be nice, to just sit around and read and not do work? And I was like, I'm not going to sit around and not do the work. Thanks, dad. The manual labor. And, like, for you, it seems like a nice vacation to just get to sit around and read, but, like, it's actually really taxing to, like, read and think hard all day. Yes. But, anyway, different context. Right? So, like, I think a lot of higher ed prioritizes that busyness and that, like, hard work mentality. Yeah. Because they wanna fight that stigma that's in the larger world. And so GPT is, like, the antithesis of that idea that higher ed prides itself on. Right? Right. And so we're looking at the how, and we're not looking at the why. We're not looking at what it actually is. We've just jumped from, like, this could be concerning, or this is fine, and we're not investigating any of those, or why people are turning to this instead of doing the hard work. We're just jumping to, oh, you used a shortcut instead of doing it, like, longhand, and that's wrong. And so, like, I think that concern and bias, like, I think there's a lot of that out there in the world, but I think it's really hard to get a handle on and name exactly what that is, because it's not just the plagiarism thing. Because if that was the case, like, I don't know. Like, we use formulas of other statisticians to do our stats. Right? And we don't consider that plagiarism. And I think you and I have had a lot of conversations outside of this podcast, although I'm not sure I've ever said this on a podcast before. But, like, I think the way we think about plagiarism is really complicated and doesn't always make sense. And it tends to negatively impact students of color and international students and other marginalized students far more than it does anyone else. And it's like Right.
I don't like, when GPTs first came out, when AI first came out, I was like, sweet. We're leveling the playing field between the poor kids who could never pay somebody to do their work for them and everyone else. Right. And it's like, my colleague crew was like, what is wrong with you? Right? But, like, I think as we get deeper into this, like, it's not just the shortcut of the work. It's not just the, like, plagiarism aspect. It's that, especially in qualitative work, we prioritize so much, like, deeply knowing your data. Yep. And so it's this, like, how can you know your data if you aren't the one doing the work? Right? And, like, that's where I get stuck. Like, do I care that people are improving their writing with a GPT? No. Not at all. Doesn't bother me even a little bit. Right? Because grammar is inherently racist and classist, and, like, I just don't care that much. But, like, do I care if people aren't touching their data and really getting to know it? Yeah. Right. No. I definitely agree with you. I mean, I think the other part to that is also, when you're using a GPT, the privacy of the information. You know? And Yeah. I definitely and this is me, like, trying to you know, when you use other programs that transcribe for you, like, the first twenty minutes are free Mhmm. Or whatever. And I remember using that, like, just uploading my MP4 file, and then it would spit back out the transcript. My concern always was, where in the universe did, you know, my interview go? Like, is it still you know, that's something that I've always been concerned about. I think you hit the nail on the head when you were talking about how biased the GPTs are against, like, marginalized populations, and I think that's something that we don't really think of.
And, you know, it's making me think now, like, what things are more important when I'm gonna use air quotes, grading or really looking at a student's work. You know? It's not, you know, it's not the fact that they're using a GPT. It's the fact of how do they use the GPT. You know? And I think that's where I am. I mean, I had a student last semester who ended up reusing their own work, and it, you know, sent red flags. And it was just like, oh, this is plagiarism. And I had to explain to that student. I'm like, you didn't cite yourself, so, you know, first off, you shouldn't be using your work again, because you already took the class, but most importantly, like, you need to cite yourself. And so I mean, it was a real learning moment, not just for her, but for me, because, you know, most of us use Turnitin or not most, but maybe some of us will use Turnitin. Or SafeAssign, or there's several, but they all work the same. They all work the same. And so, you know, to see this, like, red, you know, like, 92%, I guess, like, 92% of this paper is plagiarized. I'm like, I don't No. I've had that. I've had a hundred percent. Oh my gosh. Oh my gosh. And, you know, and I think that's kind of where faculty for lack of better words, where faculty are freaking out, you know, because they're like, oh my gosh. Like, they used a hundred percent of this and I'm not saying you, doctor Lizzie, but, like, our colleagues in the field or who are teaching, they're like, oh, wow. Like, what do I do now? Does this, like, send a, you know, red flag to, you know, like, a not plagiarism, but an honor committee or something like that? You know? Instead of it being sent there, like, maybe there needs to be a conversation had, you know, with our students. You know? And so that's kind of where and that, like, that was what I needed, like, when I was reading this article.
Like, that was something that I kinda gravitated toward, like, this particular article was, like, educating the field about how to use GPT safely. And, I mean, initially, in the past, I've been very hesitant about even, like, using, like, ChatGPT. I mean, I put information in. Like, I have my own account, and I'm like, okay. Like, tell me what narrative inquiry is, which I do know what narrative inquiry is. But, I mean, I was like, tell me more. Like, tell me if you are citing the same people I am. Now that's where there's hiccups, I think, because possibly, ChatGPT, or GPT, is not picking up on authors who are out there who are also doing the same amount of work and the same lift, but they just happen to be part of a marginalized population. So, yeah. I think there's definitely concerns and biases, but there's also a need to educate. You know? Yeah. Yeah. I mean, I think that's just it. Right? Like, I think this is one of those moments when the world has changed. Mhmm. And I think back to high school a lot, like, there's this magical day in trig class after we had spent, you know, six months learning how to do problems that took you 15 pages of, like, handwriting it out. And then this magical, magical day when we got to pull out our graphing calculators for the first time, that were so expensive and such a novelty at that point. Like and I cannot explain that to my students for the life of me because they just have them on their phones. But, like, it was so hard to get those stupid things. And then you punched in a formula, and it spit out the answer in two seconds. And, like, I remember literally crying in class and being like, are you kidding me? Like, I just spent the last six hours doing this problem longhand. It took me 20 pages of paper, and this stupid device did it in two seconds. And I'm like, I, like, lost my mind.
But then I sat around, and I thought, like, what would it have been like the first time these came out? Right? And, like, when they were so, like, either cost prohibitive or you had to grapple with the world changing, and no longer did you have to go 30 stacks of paper deep to do one formula. You could just punch it into the computer. And, like, thinking about how weird that would have been, to be at that technology change time. Right? And, like, I used to talk to my grandpa about, like, what it was like when phones first came out. Right? Or, like, when everyone first started to get access to that. And, like, this is one of those big technology shifts that's gonna completely change our world. Right? And I think, like, to be around at this time, do you stick your head in the sand and go, I'm not gonna use it? Or do you say, like, okay, we can use it, but or, like, how do we teach how to use it if people haven't learned the foundational skills? And so, like, I think the fear is, if people never learn the foundational skills Right. Then you can't use the shortcut. Right? Like, I think there's value in still learning how to do 30 pages of math by hand. Right. Right. Like, much as, like, my 17 year old self would punch me in the face for saying that, like, truly. But, like, I do think there was benefit to learning how to do that right before I learned the shortcut. But, like, when I tell my students, okay, I need you to write this 10 page paper, they're like, yeah, but I could just type it into ChatGPT and get it for you in two seconds. And I'm like, you know, but it's not gonna be correct. And they're like, yeah, but it is. And I'm like, no, but it's not. And, right, and, like, the amount of debates we go back and forth with on that. And, like, I think the same thing is true in research. Like, this fear of, like, losing the hard work aspect, which in some cases is really valid because you need to know how to do it. Yeah.
I mean, it's definitely I fully agree with you. Let me back up and tell you, I fully agree with you. And when you talked about those graphing calculators, I was from one of those families who could not afford a graphing calculator. And I remember them being passed out in the classroom and being like, okay, if you need one, here you go. Yeah. So I think my graphing calculator never worked. So that's why I did poorly in math. Okay? And that's what I'm saying. Okay? Nobody taught me how to properly use my graphing calculator. So I always had the sine, like, hands go up, and had to tilt it to the left. I believe that. I really do. It was definitely a difficult time. I mean, but you're absolutely right. I think now we're seeing faculty who are asked at least here, I have heard faculty asking the question, well, how do I use ChatGPT? Because my students obviously already know how to use it, but I don't know how to use it. And that's something that's been really intriguing to me. And the other part is, are our faculty willing to learn about it? I'm not saying that here at American, they're not. But please don't come back and say, well, Tiffany Quash said this. No. I think what the deal is, is trying to understand and trying to learn what the students are actually doing to get the responses that they're actually getting. And I really, really liked how, you know, Prokopis broke that down in terms of, like, doing research in the field. Yeah. Well, and I think that's exactly right, especially with our doctoral students. Right? Because they are using this. And, like, one of my doc students said to me, like, oh, I double checked my formulas in ChatGPT, and I was like, how do you know ChatGPT was correct? And they were, like, looking at me like I had five heads, and they were like, of course, it's correct. And I was like, why should it be correct? It's a human tool designed by humans.
It has flaws. Right. And, like, those flaws include racism. They include classism. They include anything that is bad on the Internet, because that's what's being fed to teach them how to do things. Just because it's in ChatGPT doesn't mean it's right. Right. You know? So, like, why on earth would it be right about research? And they're like, no, but it is right. You don't understand. And I'm like, okay. Alright. Like, show it to me then. Like, you teach me. And, like, I think those are conversations we need to be having, because how do we transform into this world that we don't understand without having some amount of openness? And Yeah. Like, tell me more. Right. And I think this is where, like, faculty have to be humble. Mhmm. You know? Like, you have to be humble in being able to say to a student, a trusting student, like, can you just show me? Like, it's more of like like, my godchildren, I love them dearly, but I've actually said to them, I'm like, can you show me how to use Instagram? Because what's going on every time I push this button on Instagram, it's not working. And they look at me and laugh, and they're like, well, auntie, you just do this. I'm like, oh, okay. Now, do I remember how I used it? No, I don't. But it was important that they showed me so that I might not make the same mistake over, which I probably did. But I think, you know, faculty just need to learn how to be humble, because I think some, not all, but some faculty who have PhDs or EdDs, like, a doctorate of some sort, they're like, well, I've got this doctorate now. I don't need any help. You know? And it's like, no. Like, use your resources. If you have a mentor out there, use your mentors. If you have students who are working for you or with I shouldn't say for. With you, then, you know, like, have them work with you to understand how to use ChatGPT, you know, or the GPTs or the learning models. Yes.
Absolutely. They're not scary. Right? And I think, particularly for me, like, and I'm sure this is true for you too. Right? For anyone in that late Gen X, millennial generation, we all grew up at a time where we have been taught our entire lives that we know more about computers than anyone else. Mhmm. And now suddenly this new technology has come about, and we don't understand it. And it doesn't make sense to us, and it is really hard for us to get our brains around. Mhmm. You know, the kids are running away with it, and it, like, freaks out our little, like, primal lizard brain that we're not good at something that we've always been told we're good at. Yeah. It's scary. I get it. But, also, it's like, take a deep breath. You're alright. You know, when I do, I make a joke and I say, someday, you small children, you Gen Z or Gen Alpha or whoever you are in my room, you're going to be looking at the hologram machine and going, I don't know how to use this. And then you're gonna think back to this moment and go, doctor B was right. I won't always understand technology. You're so right, and it's so funny. Like, you know, my lovely spouse has said to me, I don't understand how you can be a Gen X or whatever and a millennial at the same time, because, you know, she's like, why don't you just follow these directions? And I'm like, because this is overwhelming. Here's my phone. Take it. You know? I know. I know. But, I mean, it just goes to show you that, like, technology is moving fast. And the reality is, can, you know, qualitative researchers, can we keep up with what's going on? You know? Can we learn? Can we, you know, in this particular article, it was saying, you need to really listen, you know, to digest what's going on when you're doing your research.
It's not that GPT is gonna fade away tomorrow, but how can you really use it to the best of your ability and use it well? You know? So, doctor Tiffany, tell us what else, like, is going on in this article. What else does it tell us? What are good lessons it tells us? Or what are things that, you know, like, we should really know on how it's telling us to use AI effectively, or GPTs effectively? So I think if you have the article, on page 1970 okay, just add nine more years and that's my birthday. Right? I think one of the things that kinda caught me, like, I'm like, oh, okay, this makes sense, was inserting commands, inserting the correct commands to get the solutions that you need or want. You know? So, maybe, like, in and I have to admit that I put my name into ChatGPT just to see what was out there. There was nothing out there. I'm not as famous as other people, which can be good, for right now. But, I mean, what I did do was put in something like, how would you write a letter for a qualitative research position, or something like that. And that gave me, like, a lot of really good stepping stones. And, you know, to just be like, okay, that's really important for me. So it was, you know, just getting those key terms. It's kinda like doing a lit review and making sure that when you're doing your lit review, you've got those key terms, like, written down or something, like checks and balances. So I thought that was really interesting there. Also, it talks about, you know, what are some of the benefits of AI. So those being, like, generating new knowledge, summary of papers. But, again, with the summary of papers, we have to be very careful. Idea and theory construction, I was like, I never thought about that. Like, you know, I just was like, I'm just gonna create this on my own. And the idea that I could have a, for lack of better words, air quotes, conversation with AI, you know, might be helpful.
But then go ahead. But, like, I guess, my burning question with that is, like, how does that even work? Like, for people who have never opened up ChatGPT, like, what do you mean when you say conversation? What does that look like with theory construction? Like, I just can't even imagine how a GPT could help you do that. So for my and, again, I stand to be corrected Sure. On this. So, I mean, I think what it does like, if I wanted to use, in my field, like, the leisure constraint theory, you know, how would I like, putting in, like, maybe leisure and other, like, marginalized population or something like that. Like, making sure I've got these keywords, and then maybe something can come up with, see, you're testing me now. Maybe something too. I'm just legitimately curious. No. I mean see. I made okay. Page one. Okay. Here it is. I'm looking. As a result, the generated content may reflect these biases okay. That's the wrong one. See, you put me on blast there. I'm so sorry. But this is what it did say. It did say idea and theory construction, and I was just like Yeah. No. I do like that I could be attached to the idea part, because it does, like, help you generate ideas. Mhmm. You know? But I think as far like, I myself was like, well, with the theory construction, maybe it's, you know, asking the question of, like, some of the people who are in our field, like, for narrative inquiry. How can I use narrative inquiry to the best of my abilities with this example, or something? Okay. So then it might spit out something like, you might wanna consider looking at Braun and Clarke or Or Maxwell or yeah. All those type of people. Yeah. So it's doing a quick, like, Google for you and then spitting out the relevant information, like that new Gemini bar in Google that I keep trying to close. Like, no. I wanna look at the OG source, Google.
Stop telling me the highlights. You know what? I have not used Gemini yet. It keeps popping up on my phone. I'm just like Yeah. No. Is it not popping up when you Google something? Uh-uh. It's just you. No. I'm kidding. No. I mean, weirder things now. But, I mean, is it on Google Scholar? No. It's just, like, in all Google. When you just, like Uh-huh. Go type in, like I don't know, qualitative research into Google, into a fresh Google tab. Okay. We're learning as we go, people. And, like, the AI will give you a little summary of everything it finds. What? Yeah. Oh, now I realize my mouse wasn't working. Where is the Gemini? I mean, I can't see your screen, so I don't Oh, wait. Okay. Sorry. Yeah. It should be there. Scroll up. So oh, okay. Try, what is qualitative research. Oh, this? Yeah. So see, it has a little AI overview there for you? Oh. I mean, it's like the first little bar that you see, and then you can see, like, to the side which resources they're pulling it from. Oh. Yeah. So So, folks, if you are trying this, go ahead and do, like, what is qualitative research in your search bar. In the Google bar. Yeah. Your Google bar. Yeah. I didn't think about that. Yeah. So, like, I get really annoyed every time I see this, and I'm like, no. I want the OG articles. Like, I don't want you to give me a summary, Google. I wanna do the work myself. And I was complaining about this to another colleague, and they were like, oh my god, me too. If you figure out a way to get rid of it, let me know. And I was like, cool. We're on it. Like, but, like, because we're both such researchers at heart that we don't want the subs. Right? Like, the summaries. We just want the, like, articles to dig into. That makes sense. And so, like, for me, it's making me not wanna do as many Google searches anymore, because I know that it's gonna give me this AI nonsense at the top, and it's gonna be harder for me to dig to the actual stuff that I want.
And I feel like it clouds even my research understanding or, like, my searching understanding, because it's prioritizing these three, like, sources in this specific case. Right? And, like, is it doing the same for everyone? I don't know. But is that also gonna change how we understand knowledge if it's shorthanding it for us even more than it already was? Right? I mean, I think my concern with this is, are there sources? Are they cited? Oh. Do you see them on the side there? Yeah. I do see them on the side now. I do. Yeah. But you know what? Here's one by what is this? Scribbr? Yeah. Yeah. Which isn't a valid source. You know? Yeah. I was like I know people who actually use it. And I'm just like, why are you using like, use Google Scholar. You mean students use it just to plagiarize? Because that's how I know people use it. Yeah. Yeah. Yeah. And I'm just like, why are people just not using Google Scholar? Like, I've also had students, like, just copy paragraphs from Course Hero, and I'm like, you know no one can tell you're doing that. Right? Like Oh my gosh. See, I think this is one of those situations where we have to, like, teach our students how to use technology. You know? I mean, did you ever take, like, a keyboarding class or Oh, yeah. Yes. They don't take those anymore. They don't. No. No. No. No. So I mean, not to I know we're going off topic, folks, but, I mean, it's more of, like, people actually taught us not just how to type, but also how to use, like, Excel. Like, I remember being in a class, and it was, like, an Excel class for an entire semester. Of course. I did too. Yeah. You know, it's like in middle school, we had keyboarding, and you had to learn and, like, you had to do, I don't know, some amount of typing in a certain time period to pass the class. Right? Like You just took me back. And you had a piece of paper put over your, like, hands so you couldn't see the keys.
And, like You just took me back there. I was having memories of, like, having crushes, trying to sit next to the crush. Like, my keyboarding class. That was the memory that you brought me back to. You're welcome. So, I mean, the other shortfalls to this, there's the lack of and then you said lack of oversight. Mhmm. Lack of clarity. Mhmm. Privacy and security concerns. So I think that's like I mean, we discussed all those. So, I mean, I can't think of any other, like, shortfalls of using GPT other than, like let me think back on that. I think it would probably go under lack of oversight, like, just being unclear of how to use GPT, you know, other than a copy and a paste. Well yeah. So, like, I think that's a big piece of it. Right? And so, like, if we are doing this for qualitative research, like, how do we know what the AI is doing in the background? Right? Like, what is the oversight they are using? Are they using any methods? Like, how do you even type up the methods section if you use an AI? Like, I have no earthly clue. Right? I don't have any. But then, like, I also think, lack of oversight. Like, who are the tech bros? Who do they tend to be well, and even just that name. Right? Like, I'm sorry if that's super offensive. I don't know. But, like, my stereotype of people who work in that field is young white men. Yeah. And, like, I think that, of course, the GPTs are racist, because they weren't created by people of color. Right? Right. And when I think of the oversight, I'm thinking of all of those pieces that are just missing from the web. And then the clarity piece is like, well, how do we know how accurate this is, or what all was used in this? And, like, we don't. Because, like, I know how to code HTML, but I don't know what a GPT is doing. Right? And, like, for me, I haven't used my HTML coding knowledge in, like, twenty years. Right? Like Right.
Or maybe not quite that long, but, like, it's been it was a hobby. It was last week, doctor Lizzie, you used HTML code. I did it at one point in my life. I took several classes on it, but I don't remember it. You know? And, like, the privacy and security concerns, like, you talked about at the beginning, like, I think are a huge issue. And I keep thinking of HeLa every time somebody talks about ChatGPT, and I'm like, do we even know what information we're pouring into this and where it's going and who's using it? And, like, no. We don't know. You know? And, like, I don't wanna sound, like, paranoid, like the senators talking about TikTok, but also, like, we don't know what these tech companies are doing. And, like, I think that is a valid concern. Right. I mean, especially when you tell the IRB, my information will be secured. I mean, that's what's going on in my mind, you know, and then you send it off to a program to transcribe for you. Like, how do you know it's secure? You know? Yeah. Well, and, like, some of those companies have very specific, like, things that you're signing, and you can sue them if they, like, break that agreement, and they have to take on the repercussions with the IRB. Mhmm. I know because I found that out when I used one of them for my dissertation. But, like, and I know with the QDAs, they're largely going to take on that burden as well, but I don't know that all of them use GPT. Do you know about that? I don't well, let me back up. So I do know that NVivo has an AI assistant, like, button. I'ma say button. You know? Yeah. And, you know, but with our license, it's not covered. So if you want to use the AI feature, you actually have to pay extra for the AI feature. Yeah. And that's because of the license that we have. Yeah. So and I don't know how much it is. You know? But yeah. Yeah.
And I haven't... like, listeners, we don't use all of the QDAs every week, so we don't know what updates all of them have. We're only speaking to you from our own experience. I know Dedoose does some auto coding or, like, auto transcripts if you upload things in a certain way. But I know that all of their information is very secure and encrypted, and, like, there's lots of information on data security on their website that you can find out about. And I think all of the QDAs do, as memory serves. Yeah. But, like, that's gonna look different for each of them. And so you're gonna have to do some of your own research, and different IRBs are gonna have different rules on what they allow and don't allow. But do double check, if you're going to use one of the GPTs or AIs: like, is your IRB gonna allow this? That's a really big concern. Yeah. I mean, I do know with NVivo, there is a transcription program, but you have to pay extra for that. And then if you use the collaboration cloud, you have to pay extra for that. You know? And, again, this is with NVivo, you know, because I'm an American, and this is, you know, not a part of our license. You know? And you're right. Like, the way that, you know, NVivo, MAXQDA, Dedoose, they all function very, very differently. You know? And if you're gonna use AI, like, that's a whole other conversation. You know? Yeah. You know? There's so much to be said about bias, and I wonder, stepping a little, not away from the article, but just kind of actually asking myself, like, how do we make these improvements within AI? You know? Instead of it citing, like... hey, what is qualitative research, or who are the, you know, top researchers in qualitative research? You know, instead of it naming, like, Creswell and Creswell, how do we get it to expand? You know?
And so that's kind of where I personally am stuck, you know, because, you know, in my dissertation, I specifically said I wasn't going to cite white men or Black men. My goal, because of my dissertation, was to only cite women as much as I could, and women of color, you know, which was very difficult to do. It was very difficult to do, unfortunately. Yeah. Yeah. Of course it was, because the academy is what the academy is. Right. Right. So, I guess, what else should we learn from this article, doctor Tiffany? So under the analysis section of the article, the one thing, and, again, we've said it already: like, just make sure that you're checking your information. Mhmm. You know, checking the information that is being given to the researcher from AI. Like, yes. You know? I remember when I was in ChatGPT, I was like, tell me about narrative inquiry. You know? Mind you, in my dissertation I used narrative inquiry, but I was like, just tell me. Like, I wanted to know. And so it cited some people that I definitely knew. And also, some of the citations were a little wrong. So just kind of being careful. I think that as qualitative researchers, again, like we said, it's not something to be afraid of, but something for us to, like, really delve into and explore. Like, what does the analysis mean? You know? Like, to be unafraid. And then I think it's also, like, how do we get more transparency and responsibility from researchers who are using AI in qualitative research? I think that's, you know... I think it's really important that we identify your specific, air quotes, commands, like when we do our literature review.
And I don't know if you do this, but, like, sometimes when I'm writing an article, or a manuscript, I should say, that I'm trying to get published, or I'm looking up something, like, I'll write out all the key terms that I've used to see if it's helpful. You know? Mhmm. And so I think it's almost the same thing in using AI here. Just making sure you keep a list of all of those commands that you use to get the information that you think that you need. Yeah. Like, I do sort of... I am a really big fan of evidence tables, which I don't know how well known those are outside of my specific training, or the students I have trained since I learned those. But... You'll have to train me on this now. Evidence tables are, like, kind of keeping a running list of different articles you found as you're looking for things for a specific paper or writing project. And then you kind of, like, do a short citation of, like, first author last name, abbreviated title of the article, like, big ideas in the article, how you're going to use it in your paper or writing or whatever, and then, like, any other notes you need to have about it. And so that's really helpful to keep in a spreadsheet, right, as you're going through, and so then you can quickly go back and find what you need as you're writing it out. Mhmm. Because I tend to do searches or digs through my EndNote or, you know, what have you before I'm, like, actually typing up and writing the paper, and that's part of the prework for the paper and the, like, thinking phase of it. And sometimes I'm making notes by hand because, apparently, I'm a Luddite, and I still do things by hand. Yeah. But, like, that helps me know how to write it. Right? And so, like, I will keep some of that documentation.
But I know when I'm telling my students, like, I allow them to use AI, but only if they cite exactly how they used it, what commands they used, what dates they used it, how it showed up in their paper. And if they don't do that, then they have to go through plagiarism responses when I find them using AI. And, like, frankly, I'm not always gonna find them. Right? And, like, I try to be really thoughtful about who I do that for. But, like, the reason I do it that way is because my friends who are in nonprofit worlds or industry worlds or government worlds, whatever, that's how their companies are handling AI, right, and use of AI in the workplace. And, like, I know a lot of colleagues who teach or who are doing research who just say, oh, no, no one should use it. Like, absolute abstinence from AI. And I'm like, well, they're gonna use it one way or the other. So, like, let's teach them how. Right? And let's, like, have this conversation about it. And I've been thinking throughout this whole conversation about positionality statements. Mhmm. And, like, maybe we add on an AI statement. Like, this is how AI was used in the writing of this article or the analysis of this data, and, like, just be really transparent. And I think that's okay. Like, I would be fine with reading any article like that, and I would go, okay, that's valid or that's not valid. But just like we're getting to be more open with our data. Right? And, like, a lot of open published articles will have, like... you can download the tables of the, like, coded data. Right? It's not raw data, but it's like... Right. You can download it and try to recreate their analysis. And if you don't get the same results, then you can publish a follow-up article that says they did this wrong. Right? Right. Like, this is part of the process of science, or it should be. Right. And so, like, I think if we have that AI statement, that's kinda like a positionality statement. That would be really helpful.
Yeah. No. I agree with you. I think that would... you know, I didn't think about having an AI positionality statement. Like, that's a great idea. Maybe that's what we should do for our next paper. Oh, yeah. I never thought about that. You know, I think after reading this article, mind you, I've read it twice now, and... Mhmm. ...being in conversation with colleagues and then, you know, obviously with you, you know, it doesn't make me as hesitant as I was before in using AI. You know? I think for me, it's more of, okay, so how am I gonna use it, air quotes, correctly? Like, how do I know that I've gotten the best results that are given to me? You know? And, I mean, that part of it still requires work. You know? And I think that's the thing, is that people don't realize, like, you know, you're still going to have to do a little bit of work. So it's kind of interesting, because being here at American, you know, using NVivo, a lot of people will do the auto coding. Well, not just NVivo, but other programs. They're like, hey, you know, we're using auto coding. And I'm like, yeah, but you still need to do some work. Like, it can't just stop there. You know? And so I think this is just like that. You know? You can't just stop the research at the output that ChatGPT or Copilot or whatever you're using is giving you. So... Yeah. Well, I think that's right. And I think it's part of the richness of data. Right? Like we talked about in our transcribing episode, like, how deep you wanna be with the data. Right? And I think... let me give you a for instance. So, like, I wrote my dissertation five years ago now. It's a wild thing to say. It's a true thing to say, but it's a wild thing to say, because it feels like yesterday. Yeah. Anyway, I had my students read one of my papers from it last semester, for our last reading in the class.
It was a discussion-based class, and they went through and discussed all of the different things, like they did with all of our other papers throughout the semester. It was the only one of mine I gave them. Right? And at the end, the students who were leading discussion said, doctor Bartelt, can you tell us anything else about this paper and, like, what you wish that we knew from reading it? And I was like, well, you know, this participant, like, cried to me throughout most of the interview, and this participant was just so joyous, and this participant, like, had this. And they all looked at me, and they were like, from five years ago, you remember each one of these 26 participants? And I was like, I sure as heck do, because I spent hours and hours with that data. Right? And, like, it became part of my soul. And, like, I remember that data on such a bone-deep level, and it changed so much of how I show up in the world, hearing those interviews over and over and over again, right, and spending time with that data. And I think... Yeah. ...while I want people to have the shortcuts to be able to take their work to the next level, or to, like, help them deal with the multiple hats that they're juggling or whatever it is, right, I also don't want people to lose the joy in dealing with data and knowing data that well. Right? Mhmm. And, like, that's my biggest fear with using AI in qualitative research. Like, I don't know. I don't know about you. What about you? I totally agree with you. The one word that's resonating for me right now is authenticity. Yeah. You know? Like, if people are... you know, like you were saying, you remember the experiences you have with participants. I definitely remember the experiences I had with my participants for sure as well. And what you just named is my fear: that people will not feel as hands-on with their research, or with the data. That's what I'm trying to say, with the data.
And I remember, like, when we were doing our dissertations, one of the reasons that I personally didn't use a QDA program was because it felt like too much of a distance for me. Like, I couldn't feel it. You know? So then he walked into our house, and there's, like, sticky notes all over the place. Mhmm. So, I mean, it was really a good experience, but you have to know, like, your limits, you know. You've gotta lay it out. You know, is this a limitation of your study? Is it, you know... I don't know. Like, it's just so much. It is. But, I mean, again, I think it's, like... I think if people practiced using the GPT prior to actually using it on a study, that would be important. You know? If it's cost... Right. And I think knowing what specific parts of it you can use. Right? Like, maybe the worst thing for you about the whole entire research project is coming up with a timeline. Could you go into a ChatGPT and say, help me come up with a timeline? And then edit from that and use that as your research study timeline. Absolutely. That's a great idea. Right? Maybe the biggest burden for you in researching is writing emails to people to share your study. Could you use a ChatGPT for that? Absolutely. 100%. Right? Yeah. Maybe your biggest hurdle in researching is editing your paper after you've drafted it. Right. Could you use a ChatGPT for that? Absolutely. No problem. Like, that's a great idea. Right? Can you use it for every aspect of every project? I mean, I don't know. Probably, technically, but should you? I don't think so. Like, I think what we're really coming down to is: use it for the specific things that are such a struggle for you that they're inhibiting you from getting the work done. And use it in a very limited capacity, but use it to help you get over those hurdles that otherwise would have been almost insurmountable.
And then be really honest with how you used it and why you used it... or, like, not necessarily why, but, like, how you used it and when you used it, so that... Mhmm. ...if people wanna replicate, they can. Mhmm. Mhmm. I do know that I used it just to try it. I will say it was just testing the waters before I completely submerged in the water. I was like, write a cover letter for me. Like, as a qualitative researcher, write a cover letter. Can I just tell you, that cover letter looked very similar to the cover letter that I had written for this job? So, I mean, I was like, okay, I'm on track. Like, this is a pretty good tool. You know? And then there were also things that I could have added, which was really important. And I was like, okay. Like, but it was still on track. You know? So I definitely like this idea of, like, testing it out and understanding what you're getting into before you actually, like, use it full throttle. So... Yeah. I think that's really smart. And I think the biggest thing too, like you said earlier, is authenticity and honesty. Like... Yeah. Be authentic, be yourself, edit the things that it spits out. Don't just take that as the word of God. Mhmm. And, you know, play with it. Have some fun. Don't be completely terrified, but also don't let it take over your brain, and still use your good critical thinking about when you should use it. You know what? That was something else. I'm glad you said that. That was something that this author said. It's like, you know, researchers not, like, giving up their critical thinking skills. You know? And, like, I remember the article was like, you know, you need to keep those critical thinking skills while you're using ChatGPT. Or GPT. And I was just like, oh, so you're not giving up.
You're just doing... it's like an extra skill, which, you know, you can use all these skills and put it on your resume. You know? Mhmm. So, I mean, I think that's really... I was like, oh, okay. Like, for me, reading this article was, like, opening myself up, honestly, being vulnerable, and saying, like, okay, these are my assumptions about GPT, but here is somebody who's writing about GPT, and they're, like, saying, don't be afraid. And I'm like, okay, I can handle that. You know? So yeah. Love it. Alright. Any last words of wisdom for our listeners from this article? You know what? Just be authentic. Be unafraid. Be unapologetic. And, you know, write that AI statement. Yes. Oh, I love that idea. I'm telling you, doctor Lizzie, we're gonna need to do that for our next article. Like... Boom. It's done. It's happening. You heard it here first, listeners. Alright. Well, all you friends out there, don't forget, you can always email us at c0tmpod@gmail.com or check out our website for episode notes and transcripts and sources at c0tmpod.com. And with that, we hope you have a really good one. Definitely. Cheers. Cheers.