AI In The Classroom
Maria | March 28, 2024

In the latest Business Casual, the hosts dive into the transformative role of artificial intelligence (AI) in education with Professor Rajiv Garg from Emory University’s Goizueta Business School.

Professor Garg shares compelling insights from his study examining the effectiveness of AI in enhancing learning outcomes. The research intriguingly reveals that courses designed with human-generated content but delivered by AI avatars lead to the most successful student learning experiences. This nuanced finding opens up a discussion on the future of education, where AI could play a significant role in personalizing and delivering content, while human educators focus on content creation and the cultivation of critical thinking and creativity.

The conversation also explores the broader implications of AI in academia and the potential for a symbiotic relationship between AI and educators to foster a more engaging and effective learning environment. This episode offers a thought-provoking look at the intersection of technology and education, suggesting a future where AI supports rather than supplants the human touch in teaching.

Episode Transcript

[00:00:04.370] – John

Well, hello, everyone. This is John Byrne with Poets and Quants. Welcome to Business Casual, our weekly podcast. We have a special guest today with my co-hosts, Maria Wich-Vila and Caroline Diarte Edwards. He is a professor at Emory University’s Goizueta Business School, and he’s done a recent study on artificial intelligence. Now, I don’t need to tell you that AI is hot. Everyone is trying to figure out how it is really going to have an impact on education in every single way: How should students use it? How should faculty use it? Can it be used in research? When it’s used in the classroom, to what effect? Rajiv Garg, a professor at Emory, has done an interesting study. We wrote about this, a very cool story, and you’ll want to check it out on our site. It’s called When AI Helps Students Learn, and When It Doesn’t: An Emory Professor’s Ground-Breaking Study. Welcome.

[00:01:02.560] – Rajiv

Thank you, John. Thank you. That’s great. I don’t think it’s groundbreaking, it’s just… Yeah, we think it’s groundbreaking.

[00:01:09.530] – John

Tell us what you did. Lay it out and tell us how you came to do this study.

[00:01:15.120] – Rajiv

I guess I started the study trying to prove that I should save my job. Generative AI has gotten really… We are seeing applications in all kinds of domains: education, marketing, advertising, even content writing. I literally have research projects in almost all of those domains. But what we really wanted to see is, because the AI, while it’s exciting, can be a little bland, the results can be bland, the delivery can be bland, whether the generative AI systems, the way they are right now, could really help students learn. When we talk as humans, we can modulate our voices, we can communicate better, we can emphasize certain points, we can put things in layman’s terms rather than making them more sophisticated. With generative AI, you can change the temperature, which is essentially how creative the content is going to be, but it doesn’t always do a very good job of keeping the content factual while also putting it in layman’s terms. Essentially, the factual content is more formal, which is more like writing or reading a research paper.
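
For readers who want to see what the “temperature” knob Rajiv mentions actually looks like, here is a minimal sketch, assuming the OpenAI Python client (openai>=1.0); the model name and prompt are illustrative and not the ones used in the study.

```python
# A minimal sketch of the "temperature" setting, assuming the OpenAI Python
# client (openai>=1.0). Model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Explain what a SQL JOIN does, in plain language, for a beginner."

# Low temperature: more deterministic, formal, "research-paper" style output.
factual = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
)

# Higher temperature: more varied, "creative" output, with a greater risk of
# drifting away from the facts.
creative = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    temperature=1.0,
)

print(factual.choices[0].message.content)
print(creative.choices[0].message.content)
```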

[00:02:34.360] – Rajiv

Now, if you teach students as if you’re reading a research paper to them, the hypothesis was they’re probably not going to learn as well as if you were talking to them as a human who can take the same content but simplify it in a way that is easily comprehensible. So what we started doing was, I said, look, for the two systems, or courses, that I have to create, there has to be no talking, no spillover between them. I have to have one instructor who is going to create the content on her own. This person has to be knowledgeable. I hired a student who is a graduate of the master’s program in analytics, who has taken database and machine learning and all kinds of courses. I said, look, you’ve got to create an intro to SQL course, and I’m going to give you the six topics for the six modules we’re going to cover. Now, the six topics came from asking ChatGPT: what would be the six modules you would cover if you were to create a course on introduction to SQL? ChatGPT came up with six topics.

[00:03:52.030] – Rajiv

We tweaked them a little bit. We asked it, okay, you need to simplify this, this is too much content, et cetera. We came up with a single prompt that gave us a simplified set of six topics that ChatGPT could cover in a one-hour course. Now, I gave these topics to the human instructor and said, go ahead, create a script of what you will say and what your slides will be, and I will review what you’re delivering in that course. I hired another student to create prompts that are standardized. We are not doing multi-shot; we are doing a zero-shot or single-shot prompt, where we say: for an intro to SQL course, when talking about this module, what will be the content you will cover? More specifically, can you write a script for 10 minutes and create the slide content for 10 slides? We provided all the information necessary to make the course structures very similar. Once we had the initial two courses created, one purely generated by AI and one purely generated by the human, and this is just the content, we thought, okay, we can have the AI deliver it, and we can have the human deliver it.
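
The study’s actual prompts are not published, but a standardized single-shot prompt of the kind Rajiv describes might look roughly like this sketch; the module titles and wording below are hypothetical stand-ins.

```python
# A rough sketch of a standardized single-shot prompt per module.
# Module titles and template wording are hypothetical, not the study's.

MODULE_TOPICS = [  # illustrative stand-ins for the six ChatGPT-suggested topics
    "What is a relational database?",
    "SELECT and WHERE basics",
    "Sorting and aggregating results",
    "Joining tables",
    "Grouping with GROUP BY and HAVING",
    "Subqueries and next steps",
]

PROMPT_TEMPLATE = (
    "You are creating a one-hour 'Introduction to SQL' course.\n"
    "For the module '{topic}':\n"
    "1. Write a spoken script of about 10 minutes.\n"
    "2. Create the content for 10 slides that follow the script.\n"
    "Keep the language simple and aimed at beginners."
)

def build_module_prompt(topic: str) -> str:
    """Return the standardized single-shot prompt for one module."""
    return PROMPT_TEMPLATE.format(topic=topic)

for topic in MODULE_TOPICS:
    print(build_module_prompt(topic))
    print("-" * 40)
```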

[00:05:08.320] – Rajiv

But what if the human delivery is better? Can we show human delivery is better even with the AI content? This is the same human being, the same instructor, but now delivering the AI content. Can they do a better job compared to pure AI delivery? We ended up creating four different courses. The first was the AI-generated course with the AI instructor, which is essentially an AI avatar with text-to-speech for the voice and a video of the slides with that avatar. The second was the AI content with the human instructor: the human sat in front of the camera, recorded themselves, and literally taught the slides that the AI created. Then this instructor taught the human-generated course, and then we had the same AI avatar teach the human-generated course. We ended up having four courses. Then we hired students and randomly assigned a course to each of them. Because the modules were similar across the courses, after every module we gave the students in the different courses the same quiz and said, look, let’s see what your scores are going to be. We didn’t tell them, and the students didn’t know, that any of the courses were AI-generated.
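
To make the four-course design concrete, here is a toy sketch of the 2x2 setup (content source crossed with delivery mode), with random assignment and per-condition quiz averages; the student counts and scores below are made up for illustration and are not the study’s data.

```python
# Toy illustration of the 2x2 design: content source (human vs. AI) crossed
# with delivery (human vs. AI avatar). All names and numbers are made up.
import random
from collections import defaultdict
from statistics import mean

CONDITIONS = [
    ("human_content", "ai_delivery"),
    ("human_content", "human_delivery"),
    ("ai_content", "ai_delivery"),
    ("ai_content", "human_delivery"),
]

def assign(students):
    """Randomly assign each student to one of the four course conditions."""
    return {s: random.choice(CONDITIONS) for s in students}

def summarize(quiz_scores, assignment):
    """Average quiz scores per condition (quiz_scores maps student -> score)."""
    buckets = defaultdict(list)
    for student, score in quiz_scores.items():
        buckets[assignment[student]].append(score)
    return {cond: mean(scores) for cond, scores in buckets.items()}

students = [f"student_{i}" for i in range(40)]
assignment = assign(students)
fake_scores = {s: random.uniform(50, 100) for s in students}  # placeholder data
print(summarize(fake_scores, assignment))
```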

[00:06:25.530] – Rajiv

We told them, here are some online courses whose effectiveness we are evaluating. That’s what we told the students. After the study, I met with some of the students and asked them, what did you find interesting about the course? They said, oh, this was a very good learning experience. I was like, did you know this was created by AI? They said, no, this is so amazing. But by not sharing that, and because the AI was so close to the human, they didn’t have the wow factor that we were initially expecting from the hologram technology.

[00:07:03.810] – John

The best learning outcomes occurred in which group, though?

[00:07:07.370] – Rajiv

The best learning outcome was the human-generated content with AI delivery, where we had the AI avatar present the slides and follow the script the human created. The worst, interestingly, was the AI-generated content delivered by the human. There are multiple reasons we think that might be; again, these are just some data points we found. Some students said they were actually listening to the content at 1.5x speed. At 1.5x, I tried listening to the human versus the AI, and the AI is more comprehensible than the human. When we are speaking, because we’re recording from a microphone, there’s some noise as well, and when you speed it up, it’s not as clean. Maybe we need a better microphone, or a speaker who is slower and able to communicate more effectively. AI, on the other hand, is system generated; there’s no noise, so if students are listening at 1.5x, they can still understand. I was actually surprised that students are taking courses at 1.5x or 1.25x speeds and still understanding the content. This is something I thought, if I do the next study, I need to ask about in a post-survey: did you listen at a higher speed? Because they did.

[00:08:38.630] – Rajiv

That could impact their learning as well.

[00:08:41.580] – John

How vulnerable do you now feel as a professor, having conducted this experiment? Do you think you’re going to have a job in five years?

[00:08:48.920] – Rajiv

I think, based on my study, I will, because we need somebody to create the content, right? I think what I learned is that AI is phenomenal at personalizing content for us. Even if I’m the person delivering, I can create my own slides. Say I created a course today. I have my whole set of about 27 or 28 lectures, and I put it into ChatGPT and say, hey, wherever I’m using case study examples, tweak them for this audience. We can do that. I can go deliver the course to master’s students with case study examples they can understand. I can take the same course material, the slides, the content, and customize it for an undergraduate course. Students may be in arts, in engineering, in business; using generative AI, I can dynamically change the content I want to deliver. Alternatively, because I created this content, I can have the AI deliver it, personalized for the audience, as well. I think our job is going to be creating knowledge. AI is phenomenal at reproducing patterns in text and video and audio based on what it has seen in the past.
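
As a rough illustration of the personalization workflow Rajiv describes, the sketch below asks a generative model to retarget a lecture’s case study examples for different audiences; the function, model choice, and audiences are hypothetical, not part of his course or study.

```python
# Hypothetical sketch: retarget an existing, human-written lecture's case
# studies for a given audience using the OpenAI Python client (openai>=1.0).
from openai import OpenAI

client = OpenAI()

def adapt_lecture(lecture_text: str, audience: str) -> str:
    """Ask the model to rewrite the lecture's examples for one audience."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Keep the facts and structure of the lecture unchanged."},
            {"role": "user",
             "content": f"Rewrite the case study examples in this lecture for "
                        f"an audience of {audience}:\n\n{lecture_text}"},
        ],
    )
    return resp.choices[0].message.content

lecture = "..."  # the professor's original lecture notes would go here
for audience in ["MBA students", "undergraduate engineers", "arts undergraduates"]:
    print(adapt_lecture(lecture, audience)[:200])
```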

[00:10:21.200] – Rajiv

But they’re good at communication, but they’re not good at creating knowledge, creating awareness as much. We’re going to realize that. Maybe How did it get better. Maybe if you ask me this question a year from today and say, Do you fear for your job? Maybe, right? At that time. But today, no. I think we can still create, I think, better content for learning and for knowledge diffusion. Yes.

[00:10:52.850] – Caroline

That’s what I was going to ask you, Rajiv. Don’t you feel like this is a temporary advantage that professors have? I have a I’m interested to read John’s article about this, and you talk about how in the future professors will be content creators and the AI will be delivering. But you also talk about how this is changing week by week, month by month, right? So Maybe this is just a temporary advantage that professors have over the AI. In another year or two years, that balance of power will have shifted. I wanted to ask you about that. Then also, do you think, therefore, that there there will be need for fewer professors in the future? Maybe there will be a few guru professors who are brilliant and have an amazing reputation and can still bring that extra input that AI cannot bring. And the AI will leverage their knowledge, and you won’t need as many people actually involved in teaching at universities or schools. It seems like a very concerning potential.

[00:12:00.370] – Rajiv

I think your questions are awesome. I’m going to answer this in two parts. There are two kinds of learning that we do. One is essentially a skill that we are gaining. Now, for courses that could be learned online, I completely agree with you that the need for professors who are creating online content is going to go down, because essentially you can have one very talented person create the content, and that could then be personalized for different settings and different audiences for digital delivery. The second part is what we’re going to see as these things become more advanced. There was an article, I can’t remember the source right now, it’s cited in one of my papers, that says 70% of jobs will be eliminated in 10 years, sorry, in the next six years, but 85% of the jobs that will exist are not even invented yet. It will be these yet-to-be-invented jobs that will require complex content to be taught in the classroom, which will require more human beings. Like you rightly said, the sensory element is very critical. A human professor can look every student in the eye and determine whether they are getting the content or not, where to repeat, where to slow down, where to speed up.

[00:13:28.670] – Rajiv

The technology for AI doesn’t exist yet. It could exist. Maybe that is the next step for AI and education. Over the next year, you have a camera. Literally everybody has a camera looking at them. Now, a professor creates a content, a digital content. They look at the student and you detect that they’re becoming sleepy, they’re becoming distracted, and I can slow down and I can say, Hey, Caroline, do you have any questions? And on the fly. Those things will happen. But as we keep creating this content, we probably will have a more complex content that requires a more personal touch. Now, can AI deliver for that personal touch with that complex content? I don’t know the answer to that yet. We are getting there. We will get there in next year or in five years. That’s a very good question on what we keep innovating over the next month. But you’re right. For digital delivery, for the content that is more skill-based, that is upscaling people’s portfolio, we will see the need for faculty will go down in those settings as we adopt more AI. But the need for faculty or humans in delivering more complex content in the real-world setting in person is going to increase over time.

[00:15:00.340] – John

You would think that it’s most applicable, obviously, for online learning, and less so for in-person learning for a number of these reasons. You mentioned earlier, for example, that when you’re standing in the classroom looking at students, you can tell who’s getting and who’s not. You can tell by their body language who you want to call. You know the backgrounds of your students. If you wanted someone who had a background in marketing, and it was a marketing twist a finance question, you’d call on the marketer. AI really, I don’t even think in other iterations, will be easily able to do that, frankly. The other issue you have is people who are in person are paying very high tuition rates, and there is an expectation that they don’t want to be taught by an algorithm, right? Absolutely.

[00:15:57.600] – Rajiv

Yeah. No, you’re John. Again, the two things that you mentioned, one is the cost. Definitely, if AI is teaching me and the AI is becoming a commodity, then yes, it’s not worth the $3,000 for the course or $2,500 for the course that we are paying for higher education right now. If it is not a commodity, if a university does something unique which is proprietary in personalizing the content for the student, where they’re able to learn from this esteemed faculty, which otherwise they will not get an opportunity to learn from, and this professor’s avatar is being personalized for them, then maybe it’s worth the cost. Again, but the cost is an issue which I can’t really say much about. But I understand that I personally, if I were to learn a digital content, which which is recorded, I probably wouldn’t pay the $2,500 for that one course if it’s delivered online. I’m more willing to pay if there’s a person, if you, John, were to teach and say, Look, Rajiv, I’m going to teach this course. You want to join? The tuition is $2,500. I will probably join.

[00:17:16.270] – John

If I can download all of your expertise and your experience into a proprietary generative AI system, and then I ask it questions that it still can’t answer, it probably will hallucinate an answer for me that will be totally incorrect, don’t you think? At this point, it will, right?

[00:17:39.260] – Rajiv

Yes. There are ways to reduce hallucination. You can bound the AI’s responses to what I talk about, my slide content, my books, et cetera. You can limit it. That is one area where I can almost guarantee you’re going to see the amount of hallucination go down significantly, by almost 90%, within this year, because that’s where everybody’s focus is. I ran a workshop last week on large language models, and one of the hands-on sessions we did was on reducing hallucination, and people were like, this is awesome, we can actually reduce hallucination, which is such a big problem. I mean, even if you ask ChatGPT, what is a reference for this content? It will make up an article. It makes up a publication completely.
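
One common way to “bound” a model’s answers to your own material, in the spirit of what Rajiv describes, is to put the trusted source text in the prompt and instruct the model to answer only from it; the sketch below shows that generic grounding pattern and is not necessarily the exact technique from his workshop.

```python
# Generic grounding pattern: answer only from supplied course material.
# Uses the OpenAI Python client (openai>=1.0); model choice is illustrative.
from openai import OpenAI

client = OpenAI()

COURSE_NOTES = """
A SQL JOIN combines rows from two tables based on a related column.
An INNER JOIN keeps only rows that match in both tables.
"""  # stand-in for slides, book chapters, etc.

def grounded_answer(question: str) -> str:
    """Answer a question strictly from COURSE_NOTES, declining otherwise."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system",
             "content": "Answer ONLY using the course notes provided. If the "
                        "notes do not contain the answer, say 'I don't know "
                        "based on the course material.'"},
            {"role": "user",
             "content": f"Course notes:\n{COURSE_NOTES}\n\nQuestion: {question}"},
        ],
        temperature=0,  # keep the answer as literal as possible
    )
    return resp.choices[0].message.content

print(grounded_answer("What does an INNER JOIN do?"))
print(grounded_answer("Who invented SQL?"))  # should decline, not hallucinate
```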

[00:18:33.770] – John

Yes. It is amazing, though. One little experiment that I tried early on was asking ChatGPT to write an essay for Stanford Business School. They have one of those iconic essays, What Matters Most to You and Why. I said, I have a great passion for sustainability, so write the essay for me, and write it the way that Hemingway would write it. Then I asked it to write it the way that F. Scott Fitzgerald would write it, and in fact, it did. It is remarkable. It’s really… Can you see yourself using this as a research tool as a faculty member?

[00:19:21.610] – Rajiv

So I’m interested in you bring this up. I’m with another faculty member, Jessie Boxer, at the Goizueta. We are working on research where you give a topic, what you want to research on, and we will write you a whole paper, a research paper with results, with analysis, everything in 10 minutes. The whole purpose of this exercise of this research is to educate the community that the generative AI’s hallucination could be used for this bad use. I mean, if If I were to say that the increased use, again, this is fake, don’t quote it, but if increased use of generative AI is going to cause climate change, and I want to write a research paper on it with data, with the analysis properly done and all the charts and literature review, abstract, everything, I can write the whole paper in 10 minutes right now. Would you believe it? We are seeing the fake content in terms of the YouTube and TikTok and Instagram places or news articles. But we’re going to see the research articles as fake moving forward. It’s a problem. How do you control that fake information moving forward? The responsibility on editors is going to be increasing so much.

[00:20:54.860] – Rajiv

I mean, you have Poets and Quants. If you were to hire a freelancer to write an article and the person just used ChatGPT to write the whole thing, how would you know? It’s very well written. Like you said, maybe it’s in the style of Hemingway, and it’s like, this is amazing.

[00:21:15.540] – John

Oh, that’s terrible.

[00:21:18.550] – Caroline

That seems like an opportunity for the universities, though, that you can be the gatekeepers of knowledge and provide credibility to courses and to sources, right? In this environment where there are a lot of threats to the education industry, that seems like something where you have a tremendous opportunity to play a role in the future.

[00:21:48.980] – Rajiv

Caroline, you just invented another academic job: the gatekeeper of knowledge. Technically, you would have professors who gatekeep the courses that are being created and the articles that are being published, not just the editors and the reviewers, but people who understand the technology and understand how to interpret the text, whatever is being created. Is it really novel work that a person has done, or is it just a collection of next-word predictions from a large language model done by somebody? So definitely that is going to be a new job in academic institutions.

[00:22:34.320] – John

Now, Rajiv, Caroline has an MBA from INSEAD, and Maria has an MBA from Harvard. I’m going to ask them: if they were enrolling in an MBA program today, would they like to be taught by an algorithm? Maria?

[00:22:50.970] – Maria

I mean, if it costs a lot less, sure. Oh, they’re price sensitive. That’s right, I’m a bargain hunter. I think, look, if the fundamental knowledge is sound, if it’s not hallucinating, and if the content itself is of high quality, then whether it’s an avatar of a professor giving a particular lecture, for example, or the actual person, I don’t really care one way or the other as long as the knowledge is being imparted. But I do think that some of the best learning in business schools actually comes from dialogue between students and between students and professors. Professor Garg was making the point that if you’re teaching a skill, that’s one thing. But if you’re trying to foster higher-level thinking, or those light-bulb moments in someone’s head for more complex concepts, I’m not sure that the AI would be quite there yet.

[00:23:52.460] – John

I can’t see an algorithm orchestrating a case study discussion. I just don’t see it.

[00:23:59.220] – Maria

That would be really That’s going to be ChatGPT 5.

[00:24:02.380] – Rajiv

I’m on it. I am working on it right now for that personalization, and hopefully I’ll show you. It’s not that much. It’s going to happen. It’s not that much. Really? It’s going to happen. I can show you a prototype in the next month. But it’s going to happen. We’re getting there.

[00:24:21.310] – Maria

It would be interesting if the AI could actually teach professors how to become better professors, even when they’re in person. One of your hypotheses around why the AI delivery was more effective was that it was clearer; there was no accent. I’ve started playing around with some editing software for the videos that I create, where it will go through and eliminate all of my ums and my likes and my you-knows. It’s still pretty buggy, but I can envision a place where eventually, maybe it is a live professor, but it becomes almost like a hybrid. Instead of a replacement for the professor, the AI becomes almost like a little assistant that pops up and says something like, hey, you know that 33% of the students have tuned out, the eyeballs are not paying attention to you, or, you should probably restate that in a different way, or, let’s scrub out that accent that people from other countries might not fully understand. I mean, I almost wonder. I’m sure that it would probably just leapfrog that, and I’m just being overly optimistic about the future of the professor.

[00:25:36.280] – Rajiv

Maria, you’re on the spot. We actually at Goizueta have what we call global classroom, where for online learning, we even see, we have the technology to detect the student’s attention. If students are paying attention, if their eyes are closed, et cetera, so as a professor on the screen, I can see signals of those things. If I see multiple students not paying attention or their eyes closed or something, I can then talk to those people directly, and I can do that in a digital setting. In terms of voice that you mentioned, so I actually have another research paper where we look at the effectiveness of voice. We do show that there are certain voices that are actually better for information-seeking behavior that will create the need for engaging with the technology as well. We actually analyzed some thousand voices. We did some experiments. The paper is online, but we created six buckets or clusters of voices, and we said, Okay, this cluster with female voice is actually the best for information-seeking behavior, to create information-seeking behavior in people. Whereas the certain other bucket with male voice is actually better for selling products to users. What you were saying is already happening right now.

[00:27:04.960] – Maria

Holy smoke. Okay.

[00:27:06.570] – Rajiv

I actually have to ask Maria: when you mention AI teaching professors, you are probably thinking of some professor you didn’t like in business school.

[00:27:16.790] – Maria

No comment. No comment. I do not want to be… No, but definitely, who hasn’t had a professor in their lives, or even a boss or a colleague, whose communication style just isn’t quite hitting for you, or things just aren’t gelling? I think we’ve all had experiences with that. Wouldn’t it be great if, instead of the AI replacing the professor… it doesn’t have to be a binary thing. It doesn’t have to be zero or 100%. There may be some way to layer on AI in a way that helps superpower or turbocharge them. True.

[00:27:49.460] – Rajiv

But think about it, if I’m talking, if I’m teaching, and you hate my voice, if that’s the feedback, I’m just saying, I know you love my voice. I’m sorry.

[00:27:59.590] – Maria

I do. We all do here at Poets and Quants.

[00:28:02.280] – Rajiv

But maybe you love the voice of Tom Hanks better. I am teaching in real time; we can change my voice into Tom Hanks’ voice, and maybe that’ll keep you more engaged. So what I was saying is that the value of AI is in that delivery, in connecting to people. My own research talks about different voices for the delivery itself. During the delivery, I can change the voice of the avatar to whatever I want, and I can personalize it to every person. For John, maybe Morgan Freeman; for you, Tom Hanks. I can pick any voice. It’s the same professor teaching, but in different voices. Will that be more effective? My hypothesis is yes. That’s why AI delivery is going to trump human delivery for this skill-based, more digital content. Like you said, the interaction, that light-bulb moment, happening digitally is harder. But then you need people who can connect better in their class. They need to be educated, trained. There’s a saying that really smart professors are horrible teachers, and really good teachers are horrible professors. How do you find that balance? Maybe technology can help create that connection so that really good professors become really good teachers.

[00:29:28.490] – John

Yeah, really good point. I don’t know if I should be happy that I’m toward the end of my career or sad because of all these exciting changes that are about to come. Somehow, I think the former…

[00:29:42.990] – Rajiv

No, you should be sad. Yes. Yes, indeed. You would love to continue this career for another two decades.

[00:29:50.590] – John

Of course. It’s going to transform what we do and how we do it. Hopefully, it will free us from the more mundane things that we know we have to do that’s part of the daily grind and allow us the time to be more creative and more thoughtful and more innovative about what we do, right? I mean, ultimately, that’s the hope. That’s true. Other than the shorter work week, or what have you. I think that’s the real pipe dream in America. But who knows what can happen.

[00:30:21.730] – Rajiv

I think AI has opened opportunities, like you were saying, to be more creative. I think when we are creative, we are happier. We look forward to Monday and hate Friday, thinking, look, I may not be as creative this weekend. I probably have to fix the shed in the back of my house this weekend, but I’d like to create something new. That’s exciting, right?

[00:30:46.990] – John

That’s right. Well, the robot might be able to do those household chores for you in the future, right? I mean, already, it can vacuum a floor and do a lot of other things for you, so why not fix a shed?

[00:31:02.090] – Rajiv

Yeah.

[00:31:02.390] – John

Rajiv, thank you so much for being with us today, and we’ll look forward to your other studies on this topic.

[00:31:09.620] – Rajiv

Thank you, John.

[00:31:10.730] – John

It’s been an interesting discussion. I think everybody We’re talking about AI, and not only its immediate applications, but how it’s going to morph into something even more meaningful in the near future. People are afraid of it. People wonder if it will be regulated. People are wondering who the leaders will be and what overall the implications will be for society, never mind just higher education. Whatever it’s going to be, it’s a big challenge for everybody involved. Rajiv, thank you so much for joining us. And shedding so much.

[00:31:46.500] – Rajiv

Thank you, John.

[00:31:47.500] – John

Yes. Thank you for joining us. And we hope you’re excited by the changes that AI is about to bring, and not like me, happy that I’m toward the end of my career and don’t have to deal with this stuff.

[00:32:05.840] – Caroline

John, we’re never going to let you retire.

[00:32:08.810] – John

No, I’m just trying. You know what? I really don’t want to retire anyway.

[00:32:12.720] – Maria

We’re going to upload your consciousness to a hard drive. It’s going to be fine. Don’t worry.

[00:32:17.130] – John

Who wants to listen to me? I’m thinking, if you can do any voice you want, maybe some people want Marilyn Monroe’s voice singing Happy Birthday to JFK.

[00:32:28.160] – Maria

That’s right. That’s how you do a search in the sequel database with Marilyn Monroe’s breathy. You’ve unlocked it, Dawn. That’s the billion-dollar idea right there.

[00:32:41.070] – John

All right, everyone out there, thank you for listening. You’ve been listening to Business Casual with our guest, Professor Rajiv Garg of Emory’s Goizueta Business School. Thank you so much.
