Princess Mary and the camel
And if that isn't enough to lure you into a rabbit hole about AI and higher education, I don't know Arkansaw.
Many institutions seem to be tootling the rope on the callyope these days about our "responsibility" to teach AI to students. At mine, the current sermon includes, among "important considerations," some notes about AI output:
- Output should always be checked carefully
- AI hallucinations can create inaccurate results
- Watch out for biases and stereotypes in results
I don't think it's unfair to suggest that, by the time you've learned to check your output, recognize hallucinations, and be alert to biases, you're past needing the advice of an LLM -- and if you're not, you really shouldn't be using one.
I was primed for this before the slides showed up in my email because -- as one does in These Parlous Times -- I was trying to track down some information on the Nazi hierarchy's religious affiliations because of Something Somebody Said Online. Specifically, I was looking for an article about Hermann Goering's US chaplain at Nuremberg. One headline about the Nuremberg executions* -- "How did he learn his hour?" -- stood out in what passes for my memory these days, so I put it into a Google search on the chance it might turn up quickly.
The "AI overview," of course, topped the results. I've found that unimpressive since it told me in February that Fat Tuesday had occurred in March 2025, but its suggestion on Goering was beyond a little time travel:
Hermann Goering learned his "hour," meaning his time to be executed, after his suicide attempt failed in the Nuremberg prison. He was condemned for war crimes and crimes against humanity and hanged.
Well, no. The headline writer's question was about whether Goering had been tipped off in time to take his cyanide pill, which worked.
This brought to mind a Great Moment in Editing: T.E. Lawrence's exchange with an editor about spelling. Transliteration systems were fine, Lawrence said, as long as you knew Arabic to begin with. So after the AI For Teachers session came and went (had another meeting, sorry), I thought it might help to track down the exact Lawrence quote. So, again, I asked Google for a line that stuck out: "I spell my names anyhow, to show what rot the systems are."
The results were ... OK. It was the publisher raising the proofreader's concerns, but whatever. And "place names" seemed a bit off; I had thought the quote had to do with a camel, so I added another string: "She was a splendid beast." And in the distance, sirens:
Right, the queries were about all sorts of proper names, not just places, but Princess Mary? His biographer? A splendid beast? But it came from the Columbia Journalism Review (and the source is right there with the summary), so it's got to be true, right? Here's the estimable Merrill Perlman, in CJR, quoting from Seven Pillars:
Proofreader: Slip 47. Jedha, the she-camel, was Jedhah on Slip 40.
Lawrence: She was a splendid beast.
As the kids say, yeah no. Asked about biographies of Lawrence (with the string "Princess Mary" added), AI Overview had this to add:
His personal life and relationships, including any with figures like "Princess Mary," are not directly documented or discussed in his biography.
I'm not seeing a future for generative AI in journalism, but comedy seems wide open:
You don't get in unless you say the password. I give you a hint. It's the name of a camel.
Mary?
Mary? Ha ha. That's no camel.
No, but she drinks like one.
I'll be here all week, folks. Don't forget to tip your server.
* Comparative gloating about death porn is one of those things I write about sometimes.