Chasing the AI Bogeyman

I spend a lot of my time in the halls of academia, where angst about AI has ramped up considerably in recent months. I see more and more ad hoc, individual responses to student AI use, and as those responses multiply, so does the incoherence of our collective messaging. I decided to chase this bogeyman, primarily to provoke some conversations about what our messaging should be, and perhaps even to reveal something about the bogeyman itself.

Professors all know that students use the internet to complete assignments, and it’s a natural progression for students to use AI. But let’s also face some unpleasant realities: There are plenty of people in community and business leadership — and in academia, even at my own institution — who freely admit to using AI to support their work, so this bogeyman haunts all of us, not just academics. In fact, a fellow dean at another institution is establishing an entire course on writing AI prompts, on the premise that knowing how to leverage the resources of ChatGPT is a “marketable skill,” of practical use in just about any job. Perhaps it will soon be more of a survival skill.

There are many unresolved issues concerning intellectual property and ownership, and I’m not pretending to resolve any of them here. On the contrary, I’d like to do what I can to make them even worse. Let’s start with this: If I write a prompt that generates a grant proposal, am I the author? If not, what is my creative/intellectual relationship to the proposal? Do we credit a large language model as the author? And if we do, are we in some sense endorsing the projection of agency onto that model? And if you’ve ever read an email from a human being spewing nonsense, does it make sense to draw fine distinctions about whom or what we attribute rational agency to?

It’s easy to mislead our thinking by intoning, “If you didn’t write these words in the order they appear on the page, you’re not the author.” Let me illustrate the problem with that simplistic, linear thinking: Have you ever had a research assistant? Have you ever asked an administrative assistant to write an email and send it out in your name? Are those cases of plagiarism? Have you asked someone to give you a draft of a letter of recommendation that you tweak and sign? Where, exactly, is the authorship line? Who decides — the author or the “author”?

I know I’m just a philosopher, but isn’t it time for us to face these technologies head-on and do some serious thinking? Shouldn’t we ask questions like: What do these technologies mean for our humanity? How do we equip ourselves to navigate this “brave new world”? Because, friends, we are never going back.

None of this is new. We invent things that depersonalize us: the steam engine, the assembly line, corporations, governments — the list goes on. Let me propose an analogy to break up some conceptual logjams that, I think, are largely responsible for our chasing the AI bogeyman.

I am a classically trained organist. Playing the organ is a beautifully complex human activity that involves pushing a lot of buttons and keys with my feet and fingers in such a way that music results. 

J. S. Bach, one of the great organ virtuosos, was once asked how he managed to play such intricate music on this instrument, and his response sets the stage for our philosophical reflection: “It’s actually quite easy,” he is alleged to have quipped. “If you hit all the right keys in the right order, the organ plays itself.”

Now consider: When I play the organ, I do not directly make the music that emerges; if I did, that would be a different art form that we call singing. Bach’s joke turns on the insight that the organ is an instrument in that other sense of the word, like a scalpel. Or a fork. It’s a tool that we use to make something else happen. There is no inherent relationship between pulling this stop or depressing that key and the sound that results: We invented all those relationships, mechanical and otherwise, for the sake of producing music.

Why isn’t AI an instrument like the organ?

Being a philosopher, of course I will entertain critiques of my analogy, as long as they don’t involve merely repeating platitudes or assertions that AI is “different.” If it is “different,” instrument-wise, then at least have the courage to spell out the relevant differences, so we can all see the bogeyman in the light of day.

Meanwhile, what interests me is the grey area between playing a musical instrument with your own body parts and making music by means of software. If you don’t know, those tools have gotten astoundingly sophisticated, and more and more of the music you hear will be generated not by people holding instruments with their body parts, but by software operated by “digital” composers, often with the same body parts, like fingers.

And it may not be the demise of music, after all. In the old days, composers would write their creations on pieces of paper and deliver that paper to people with instruments, who would make the music audible. Now, composers can cut out the middleman, so to speak, and go directly from creation to audible sounds in a few clicks. Without a doubt, something is lost and something is gained, but do we have the historical perspective to decide whether, on the whole, the loss is greater than the gain? Or what it means, for that matter?

This isn’t the first time these questions have been asked. At one time, brass instruments had no valves and therefore could play only in a few closely related keys without changing a “crook” (a piece of tubing that changes the column length of the vibrating air, thus changing the key). I can imagine purists sitting around arguing that the introduction of valves meant The End. “Who on earth would want to play in multiple keys anyway?” they would ask each other. Earnestly, no doubt.

What if we’re in the same position right now, with respect to the tools AI is affording us? And if we muster the wherewithal to ask that question without immediately invoking the bogeyman, then — as scholars and professors whose charge is to expand and equip minds for their futures — what should we say to students? As leaders and managers, what should we say to workers? As human beings, what should we tell ourselves about our creative and intellectual work?

Is telling people we won’t allow any AI in their work analogous to telling them that singing, not the oboe, is the only music they’re allowed to make? No doubt there’s value in singing. But is singing the only music we should allow?

But what’s the limit in the other direction, the path that leads to creative and intellectual work — or any work — done by AI?

Let me pose the problem like this: Suppose I give an organ recital. Suppose I invite you. Suppose you go to the performance hall for the recital. Suppose I come out, stand in front of the organ, and say:

“I’d like to welcome each of you to my organ recital. For this evening’s performance, I’m playing a selection of 17th- and 18th-century chorale preludes by composers from the north and south German Baroque. I will be performing this evening on a three-manual, fifty-rank Viscount custom organ. I have hired Conduxia Musicus to play these pieces for me. Again, welcome, and enjoy the music.”
Matthew Daude, Organist