
Automatically brilliant – or brilliantly flawed? Medical writing and AI

Digitalisation does not stop at medical writing: AI drafts headlines, summarises studies and juggles technical terms. This often saves time, spell checking included, and the quality is frequently impeccable.

Yet the devil lives in the details:

😱 When AI has clearly missed the mark, it sometimes struggles to respond to criticism. Attempts to convince it can then take longer than doing the job yourself.

🪓 Empathy and individual attention to target groups are sometimes lacking, and overly literal translations quickly come across as clumsy. At the other extreme, pompous, patronising phrases also appear frequently.

🤢 What’s more, AI can’t teach what it hasn’t learned – it’s a case of “garbage in, garbage out”. If the training data is unclear, it’s like playing with a black box and all its unknowns and risks.

⚠️ But one of the most important prerequisites for working with AI is: only believe what you have checked yourself. This is tricky because AI often makes mistakes in completely different places than humans do – in places where you might not look so closely. And once artificial intelligence lets its imagination run wild, neither references nor figures nor quotes are safe from its inventiveness.

After all, AI only wants to give you what you want…

The Columbia Journalism Review’s Tow Center for Digital Journalism published an interesting and rather disillusioning study on AI and citations just this March:

AI Search Has A Citation Problem

So our experience is: AI is a gift for routine tasks, but human judgement is still needed for everything else. Medical writing is more than just processing data into text – experience, imagination, critical thinking and ‘common human sense’ are still needed – from us.