Digitalisation does not stop at medical writing: AI drafts headlines, summarises studies and juggles technical terms. That often saves time, spell checking comes built in, and the quality is frequently impeccable.
Yet the devil lives in the details:
😱 When AI has clearly missed the mark, it sometimes struggles to respond to criticism. Attempts to convince it can then take longer than doing the job yourself.
🪓 Empathy and individual attention to target groups are sometimes lacking, and overly literal translations quickly come across as clumsy – while at the other extreme, pompous, patronising phrasing crops up just as often.
🤢 What’s more, AI can’t deliver what it hasn’t learned – it’s a case of “garbage in, garbage out”. If the training data is opaque, you’re playing with a black box, with all its unknowns and risks.
⚠️ One of the most important prerequisites for working with AI, though, is this: only believe what you have checked yourself. That’s tricky, because AI often makes mistakes in completely different places than humans do – in places where you might not look so closely. And once artificial intelligence lets its imagination run wild, neither references nor figures nor quotes are safe from its inventiveness.
After all, AI only wants to give you what you want…
The Columbia Journalism Review’s Tow Center for Digital Journalism published an interesting and rather disillusioning study on AI and citations just this March:
AI Search Has A Citation Problem
So our experience is: AI is a gift for routine tasks, but human judgement is still needed for everything else. Medical writing is more than processing data into text – experience, imagination, critical thinking and ‘common human sense’ are still required – from us.