MET workshop report

I asked ChatGPT to give me a one-paragraph summary of this workshop and it didn’t have a clue. It knew that it was held in Valencia on 21 April 2023 but got the time wrong. It had told me on the day that Tottenham had qualified for the Champions League, which would have made me happy – almost as happy as I was to be at MET’s first post-pandemic in-person workshop. We spent the session after coffee playing with ChatGPT and QuillBot. Keeping a room full of translators and editors on task must have been a nightmare for our trio of highly competent presenters: herding CATs (Computer-Assisted Translators)? But we had a lot of fun. I learned that ChatGPT is extremely hit-and-miss, but pretty good when it hits (e.g. making a good stab at rewriting a rather poetic text about Lisbon in the manner of Irvine Welsh, with lots of fucks and shites). QuillBot was new to me and strikes me as a very useful paraphrasing tool, tireless in its suggestions, and I will definitely add it to my bag of tricks.

Elina Nocera opened the session with a detailed definition of AI and a survey of some of the legislative responses to it (a ban in Italy, plans involving a lot of abstract nouns like “transparency” in the UK, looming EU-wide restrictions). Presiding superpower Google has tweaked its guidelines to exclude automatically generated content intended to manipulate rankings. Reassuring? Not very. Concerns were raised about “untruths, hate speech and other garbage” that chatbots absorb, leading to implicit bias, and what is delightfully termed hallucination (“convincing language that is flat-out wrong”, “irrelevant, nonsensical or factually incorrect answers”: see Spurs and Champions League qualification above). Elina asked the question on all of our minds: is AI going to take our jobs? The short answer is no, unless we work for bottom-feeders, write like robots and compete solely on volume and price. That’s alright, then: doesn’t sound very MET to me.

Theresa Truax-Gischler then took over to provide some historical context. This kind of massive disruption has happened before, you know: handwritten manuscripts, printing, the Sinclair C5 (no, not that). The technology is always new, but the story remains the same: humans producing human texts. We’ve moved from corpora to LLMs (large language models), from millions to billions to trillions of words, parameters and “tokens”, and AI-assisted writing is here. What can we do about bad actors taking advantage of it? Well, there’s legislation, there’s UNESCO and the EU, and there are data hygiene practices such as digital watermarking. Good old Turnitin is on the case as regards plagiarism. But they’ll all have their work cut out. One upside: the essay mills are going bust. As individual practitioners, we can – should, must – make conscious choices, strive for AI writing and data literacy, adopt confidentiality policies, choose our tools wisely and control the datasets we use. Warning: nothing online is ever 100% confidential.

Finally, it was Allison Wright’s turn to guide us through a detailed analysis of her own working process and use of tools. She uses them extensively – Microsoft Word, Infopedia, QuillBot, Grammarly, memoQ, Google, DeepL, Linguee, a nice little Moleskine notebook – but stressed the always-human heart of her activities. It was fascinating to see a colour-coded breakdown of a text she had translated in terms of what had come from where – the human at the centre, DeepL, QuillBot, client feedback. Quite a kaleidoscope.

And then it was playtime. How lovely to be in Valencia, back at MET among my fellow language professionals, sharing AI wonders and absurdities. Hats off to the three presenters for the verve and thoroughness with which they attacked this potentially overwhelming theme. I don’t know how to do digital watermarking, but I promise you this report was written by a human being. Can you tell?

© David Ronder, May 2023