Sahil: What exciting new uses for AI do you expect in the next few years?
Professor Parrish: Oh, wow. Well, one of our tech speakers was a Bentley alumnus who works for OpenAI. He talked about the ability to create voiceovers. He asked for a volunteer from the audience, and they talked into the microphone for 10 seconds. Then, using the voice he had just captured, he created a film set in 1968 Paris, with the Eiffel Tower being built, and the volunteer’s voice narrated it flawlessly. This technology hasn’t been released yet. I think there’s going to be a lot more voice recognition, and maybe even entire movies created with AI. We’re heading toward concerns around evidence presented in court, as people might question whether a video or image is AI-generated. So, I think we’ll see more fake images and videos, and people will need to distinguish what’s real. It’s both concerning and inspiring. It makes me think about how we’ll deal with these repercussions. What solutions will we come up with, especially around falsified evidence in court trials?
Professor Parrish: Yeah, I do think they’ll need to develop policies and ethics around what’s acceptable. Just like when the internet came out, people were excited before realizing the need for regulation. That’s happening with AI now. People are excited because it can generate and analyze data and produce things previously thought impossible. Later, they’ll need to consider the implications, like fake evidence. They’ll need tools to verify the authenticity of content. As educators and industry professionals, we need to be prepared for AI. Instead of saying “don’t use it,” we should embrace it and figure out how to use it better. In my CS 100 class, I talked about AI as a resource. If you’re stuck or trying to figure something out, AI can assist, but you still need to understand how to do the work yourself.
Sahil: That makes a lot of sense. It’s about figuring out our relationship with AI. I’m happy to see you’re embracing it. It can be a tricky subject, but your openness is refreshing. My second question is, how might AI change the types of jobs available?
Professor Parrish: We discussed this in CS 100 as well. I had students research how AI will impact their career paths and industries. One student found that entry-level jobs in finance, specifically around data analytics, might not exist. AI can generate reports based on past trends, a task previously done by entry-level analysts. So, AI will eliminate some jobs, like the customer service roles we’ve already seen replaced by chatbots. Entry-level jobs will require additional skills. Instead of just analyzing data, you’ll need to know how to use AI to take that analysis to the next level. The finance industry and others won’t eliminate jobs outright but will change the skill sets needed.
Sahil: While you’ve touched on this before, what are some important ethical issues we should consider with AI?
Professor Parrish: At Bentley, we include an AI policy in the syllabus, discussing its use in the classroom. For open-computer midterms and finals, we debated if students could use ChatGPT. We decided yes, as they still need to know what to type and how to incorporate the information. It’s a resource they’d have in the business world. However, academic honesty is crucial. Students shouldn’t just copy and paste AI-generated content. They should use it for brainstorming and suggestions, not for doing the work for them. If a student waits until the last minute and asks ChatGPT to write a paper, that’s where ethical issues arise.
Sahil: I’ve heard recently in the news a debate about whether Trump actually visited the area where there’s a hurricane, or whether it was an AI-generated image. Now you’re affecting people’s perceptions of things. Was somebody truly visiting there, or was it a fake image? Especially now, what is considered fake news or an AI-generated image is really impacting how people vote in the current election. It impacts more than we might realize. People might say, “Oh, that’s just fake,” or “That’s AI,” when maybe it is a real image, and you can’t tell one way or the other. There are so many fake things out there that people no longer trust when something is real.
Professor Parrish: That is actually kind of scary. Deepfakes were already a problem before AI became more advanced. Now, they can be much more realistic and convincing, affecting something as significant as an election. It’s no longer just a few people who have access to this technology; it’s available to the public. People think it’s cool and try it out, but they might not use it in ways that benefit others.
Sahil: Do you have any closing thoughts? I feel like this was really insightful.
Professor Parrish: I think we should have been thinking about AI a long time ago, and we should consider it in every class. We should train students on how to utilize it best. Every syllabus should have a blurb on how AI is used, and every assignment should clarify if and how AI can be used. As educators, professors, and leaders, we need to lead the way on what’s acceptable. AI is not going away. We need to embrace it, try out new things, and teach students the right way to use it. I’m excited about where AI is going and the possibilities it brings, from helping students with tutoring to conducting mock interviews with AI. There are so many positive things that can be introduced, like generating architecture designs based on voice prompts. The generation graduating from Bentley in the next few years will be using and building a lot of these things. I’m excited about all the neat stuff coming out as a result of AI.