Ever since the ChatGPT API opened up, all sorts of apps have been strapping on AI functionality. I’ve personally noticed this a lot in email clients: Apps like Spark and Canary are prominently bragging about their built-in AI features.
The most common features will write replies for you, or even generate an entire email from nothing but a prompt. Some will summarize a long email in your inbox, or even an entire thread. It’s a great idea in the abstract, but I think integrations like these conspire to make communication less efficient, not more. You should feel free to try such features—they’re fun!—but don’t expect them to change your life. Here’s why.
The Ouroboros of Communication
We are all overwhelmed with email and communication in general. It’s easy to look at this as a tech problem because it’s happening on screens. It’s not a tech problem, though—at least, it’s not only a tech problem. It’s a social problem.
You could say that you get too many emails, and that might be accurate. Another way of saying the same thing is that more people are trying to contact you than you feel mentally capable of responding to. Trying to solve a social problem with tech often only creates new social problems.
For example, instead of writing an email myself inviting you to come over and have some beers, suppose I asked ChatGPT to write that email. The result is 220 words long, including an introduction (“I hope this email finds you well!”), an explanation of the reasons people might want to have beers together (“It’s the perfect opportunity to catch up, share stories, and simply have a good time”), and a few oddly worded details made up out of thin air (“I’ll make sure to create a comfortable and welcoming atmosphere, complete with some snacks to complement our beer tasting experience.”).
Most people, seeing an email this long, are going to feel too overwhelmed to read it. Maybe they’ll use AI on their end to summarize the message. I asked ChatGPT to summarize the long email into a single sentence, and it essentially gave me back my initial prompt: “Would you like to come over for beers?”
The American philosopher Homer Simpson once called alcohol “the cause of, and solution to, all life’s problems.” AI, in this context, serves a similar function: It creates a problem (emails that are too long) and then solves it (by summarizing those emails). It’s an ouroboros, a snake eating its own tail, a technology that exists in part to solve the problems it is creating.
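If you want to see the round trip for yourself, here’s a minimal sketch of the ouroboros in code, assuming the openai Python package, an OPENAI_API_KEY set in your environment, and a model name chosen purely for illustration: one call inflates a one-sentence invitation into a full email, and a second call compresses it right back down.

```python
# A minimal sketch of the "ouroboros": expand a one-sentence prompt into a
# full email, then summarize it back down to roughly the original prompt.
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY in the
# environment; the model name is an assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: inflate a one-sentence idea into a "proper" email.
long_email = ask("Write an email inviting a friend over for beers.")

# Step 2: deflate it again on the recipient's end.
summary = ask(f"Summarize this email in one sentence:\n\n{long_email}")

print(summary)  # ends up close to the original one-sentence idea
```

The point of the sketch isn’t the code itself; it’s that the second call exists mostly to undo the work of the first one.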
It’s better, in my opinion, to look at the cultural assumptions instead of reaching for unnecessarily complicated technological fixes. What cultural forces are making me think I can’t just write a one-sentence email? Can I ignore those forces, if doing so makes communication better?
Cultural problems, of course, are harder to grasp than technological ones. You could start sending one-sentence emails right now, but some people might interpret that as rude, or at the very least odd. But any individual—or organization—looking to become more efficient should think about these things. Unless, of course, you want a bot pretending to know that you have beers “ranging from local brews to classic favorites” in your fridge right now.
We Don’t Know the Contexts in Which AI Will Work Best
For months, my friend Kay-Kay and I had an in-joke that became a ritual: tapping LinkedIn’s suggested replies. The social network, for some reason, offers canned responses to messages, and the results were never not hilarious.