Originally published in Carroll Capital, the print publication of the Carroll School of Management at Boston College.
If you’ve read the headlines about artificial intelligence, you might believe it will turn us all into horses. Automobiles, of course, changed horses from essential laborers to luxury purchases in just a few years. AI, doomsayers predict, will do something similar to us humans. It’ll take our jobs and leave us to fill niche roles.
Professors at Boston College’s Carroll School of Management who study AI call predictions like that overblown. Yes, AI will revolutionize the workplace, and, yes, some kinds of jobs will disappear. The McKinsey Global Institute, for example, has estimated that activities accounting for 30 percent of hours currently worked in the United States could be automated by 2030. But Carroll School scholars argue that people who learn to use AI to increase their productivity could end up better off. As they see it, AI-adept folks will be able to work faster and smarter.
“I don’t think our real concern right now is about overall job loss,” says Sam Ransbotham, a professor of business analytics. “What’s going to happen is you’re going to lose your job to someone who’s better at using AI than you are, not to AI itself.”
How do you become an AI ace? It’s doable for many people, says Ransbotham, who’s also the host of the podcast Me, Myself, and AI. You don’t have to become an expert, just the most knowledgeable person in your office.
With curiosity and diligence, most anyone can learn enough to figure out how to apply AI on the job, he says. The way to start is with play. Go online and play around with ChatGPT, OpenAI’s chatbot. Try, say, having it write first-draft emails or memos for you. (But fact-check anything you use: ChatGPT and other large language models can sometimes offer up “hallucinations,” information that sounds plausible but is false.)
“AI tools are accessible to the masses,” Ransbotham says. “That’s an interesting change. Most people don’t play with Python code.” He uses AI to generate the background images for slides in his presentations. “For me, images on slides fall into the good-enough category. I want my computer code to be awesome, but the images I use on slides can just be good enough.”
In speaking of “good-enough slides,” Ransbotham was alluding to the peril of leaning too heavily on AI, what he calls the “race to mediocrity.” “You can use an AI tool to get to mediocre quickly,” he explains. ChatGPT, for example, can produce a draft of an email or memo in seconds. But its prose will be generic, lacking color and context, because ChatGPT “averages” the prose it finds on the web. Stop there, and you’ll end up with average prose.
Another way to tool up on AI is to read and listen. Plenty of established publications, like Wired and Ars Technica, as well as newer ones, like Substack newsletters by Charlie Guo and Tim Lee, cover AI. Ditto for podcasts like Ransbotham’s. As you explore, understand that, despite the hype, the technology does still have real limitations, says Sebastian Steffen, an assistant professor of business analytics. “I tell my students that ChatGPT is great for answering dumb questions,” he says. “For factual questions, it’s quicker than Wikipedia.”
But AI can’t make judgments, which is often what work entails. Your boss may ask you to help formulate strategy, allocate staff time and resources, or determine whether a worrisome financial indicator is a blip or the beginning of something bad. Facts can inform those decisions, but facts alone won’t make them.
Steffen cautions that it may take several decades before we really understand how to use AI and the best ways to incorporate it into our workplace routines. That’s typical of big technological rollouts. Even AI’s inventors may not see the future as clearly as they claim. “Alfred Nobel invented dynamite to use in mining, but other people wanted to use it for bombs,” he says. That troubled Nobel, a Swedish chemist, and was one of the reasons he funded the Nobel Prizes.
Even in an AI world, humans will still likely have plenty to do, says Mei Xue, associate professor of business analytics. “Think about doctors—we still need someone to touch the patient’s belly” to get subtle information that sensors miss, she says. Robots can move pallets in warehouses, but they haven’t learned bedside manner. Xue says humans will likely continue to fill roles that require “talking to clients, meeting with customers, reading their expressions, and making those personal connections—we can gather subtle impressions that AI can’t.”
AI can’t tell whether the crinkles at the corner of someone’s eyes are from a smile or a grimace. So soft skills will still be rewarded. Brushing up on those may pay off.
Even in humdrum workplace communications, like those endless emails and memos, there will likely be a continuing role for us humans, Xue says. “What’s unique with us humans is personality, originality, compassion—the emotional elements.” ChatGPT can generate jokes, but it can’t know your coworkers or clients and what will resonate with them.
Similarly, you can let AI write your cover letters for jobs or pitches to clients. But you might fail to stand out, Xue says. ChatGPT “is searching for what’s available on the internet and putting together what’s best based on probability,” she explains. “For now, it can’t provide originality.”
Xue adds that one can find the need for a human touch, or voice, in unexpected places. “This weekend I was listening to some books on an app in Chinese. I found they offered two types of audiobooks—one read by a real person and one by an AI voice. I didn’t like the AI readings. They sounded fine but had a perfect voice. When you have a real person read, you feel the emotion and uniqueness.”
Teachable Moment
The Carroll School gives professors three options for handling AI in the classroom.
By Lizzie McGinn
With the launch of ChatGPT in fall 2022, many educators feared that AI would completely upend academic integrity, a concern that many Carroll School faculty initially shared. “At first [the reaction was] ‘we have to stop this menace,’” says Jerry Potts, a lecturer in the Management and Organization Department. Soon, though, a handful of professors began making a compelling case: AI wasn’t going anywhere, so the Carroll School would have to rethink how to use it academically. By the following fall, three new policy options were in place: professors could prohibit AI entirely, allow free use with attribution, or adopt a hybrid of the two.
Some faculty members, like Potts, have fully embraced AI as an educational tool. In his graduate-level corporate strategy class, one project tasks students with pitching a business plan for a food truck with only 30 minutes to prepare. Potts has found that while AI often helps organize the presentations, it’s the humans who come up with the most creative ideas. Bess Rouse, associate professor of management and organization and a Hillenbrand Family Faculty Fellow, opted for the hybrid approach, allowing AI only for specific class assignments. In one case, she had students use ChatGPT to prepare for peer reviews, which minimized the awkwardness of critiquing other students’ work.
“There is less concern that this will be the ruination of teaching,” says Ethan Sullivan, senior associate dean of the undergraduate program. “We’ve instead pivoted to how AI complements learning.” For his part, Potts is optimistic. He says that if professors stay on top of this technology and adapt their courses accordingly, “We should be able to take critical thinking to another level.”