ChatGPT, Google’s Bard, and other AI tools are all the rage… or are they? For every company excited about embracing these tools to improve its business and its bottom line, even more of you should be eyeing them cautiously. Here are two reasons why.
- Artificial intelligence is, well, artificial. Having put ChatGPT to the test by asking it to write a few blog posts about famous yachts, I’m impressed by its speed. However, in every post, the tone was the same, to the point of sounding generic. Writers—good writers—have “voices.” They use words and phrases in ways that others don’t. They sound original and fresh on topics that they write about all day every day, and avoid hyperbole. With ChatGPT, however, every yacht was “spectacular,” “lavish,” and my personal pet peeve, “built to the highest standards.” It was clear pretty quickly that even a yacht with a questionable pedigree would get the glowing treatment.
- Artificial intelligence isn’t always intelligent. In one particular post, errors regarding length, decor, and passenger capacity were pretty glaring. How could facts so easily found with a simple Google search be missed? The answer, of course, is that artificial intelligence is only as smart as what it finds on the Web, which can be as frustratingly wrong as it is right. AI doesn’t have the ability to take a step back, ask itself if the source is properly authoritative, and then double-check other sources just in case. My experience isn’t an anomaly, either. A tech journalist who asked Google’s Bard for a chocolate chip cookie recipe got a recipe without any chips. Oops.
In developers’ defense, they’re building in ways for users of ChatGPT, Bard, and similar tools to flag errors, which should improve future accuracy. Furthermore, the developers stress that these tools are simply that: tools. They’re also the first to admit that AI is still pretty experimental.
I’m all for leveraging tools, but that’s just the point: AI is a tool, one of several that should work hand in hand. Human reasoning, for instance, needs to work alongside AI, and take priority over it, at least at this point. Just as you’d never pick up a scalpel and simply start surgery without preparation, you shouldn’t use ChatGPT or other tools, as helpful as they are, without some consideration.
Journalists and marketers have been among the first professionals to grow concerned about AI impacting their future employment. I get it. The same employers who regularly slash their salaries as cost-cutting maneuvers (been there, didn’t get the T-shirt) are also among the first to think AI can replace them. Why pay salaries, they figure, when the bots can do it, and do it around the clock? Sure, the bots don’t need paychecks, or healthcare, or any other benefits. But, as just mentioned, humans can evaluate sources’ credibility; bots can’t.
Bottom line: Advancing AI over human intelligence will advance errors and mediocrity, and downgrade credibility. Those are the last things the literal bottom line needs.