Recent developments in artificial intelligence (AI) have been remarkable, with generative text engines such as Generative Pre-trained Transformer (GPT) 3 and its successor, GPT-4, finding all sorts of uses. Some have suggested that AI could be used for editing, and Grammarly recently unveiled an AI-driven feature that helps with editing and revising documents in an appropriate tone of voice.

We ran a few tests on ChatGPT to see how it got on with copy-editing. We found that it could reliably correct spelling mistakes and obvious grammatical errors. But it was less good at spotting when the text was unclear, and it could not ask for missing information.

Generative text engines such as GPT don’t actually edit the text. They take the text to be edited as input and generate the output that is statistically most likely, given the material they were trained on, to follow a phrase such as ‘please edit the above text’. They don’t understand the content at all.

Combined with AI’s tendency to ‘hallucinate’ – to simply make up facts that sound vaguely correct – there’s a danger that, particularly in longer and more complex documents, AI editing could actually introduce factual errors. In contrast, our editors work to correct factual errors and internal contradictions, for example by checking that data in tables and figures matches the discussion in the text. We can also fact-check against external sources if requested.

The fact that the output text is newly generated also means that the differences between the output and the original aren’t marked in any way. When we edit text here at Prepress Projects, we use Word’s Track Changes feature so that we can review our edits, and our clients appreciate the chance to see exactly what changes we have made.
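To illustrate what ‘unmarked changes’ means in practice: with a newly generated text, any differences can only be recovered after the fact by mechanically comparing the two versions. Here is a minimal sketch in Python, using the standard difflib module and an invented example sentence (this is an illustration, not a substitute for reviewing tracked edits in context):

```python
import difflib

# Invented example: an original sentence and an AI-regenerated version.
original = "The datas shows a clear trend."
rewritten = "The data show a clear trend."

# ndiff compares the two word lists: '-' marks words that appear only
# in the original, '+' marks words that appear only in the rewrite.
changes = [line for line in difflib.ndiff(original.split(), rewritten.split())
           if line.startswith(("+", "-"))]

for line in changes:
    print(line)
```

Even this only reveals which words differ; it cannot say why a change was made, which is exactly what reviewing tracked edits in context provides.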

Applying the correct editorial style is a hugely important part of editing. In our tests, ChatGPT could follow simple instructions, but when we added more detailed requirements about the writing style we wanted, it ignored some of them.

Many of our clients supply us with style guides that set out a consistent way to handle common variants of words and phrases across all of their documents. These guides cover things like specific hyphenation rules (is it decision maker or decision-maker?), variant spellings (yoghurt or yogurt?), and number formatting (should digit groups be separated by commas or spaces?), as well as preferred phrasings (autistic people or people with autism? should data be singular or plural?). On top of that, we create and maintain our own guide for each client, adding to it whenever we encounter something that the client’s own style guide doesn’t cover, so that it can be presented consistently in future documents.
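At their very simplest, some of these rules can be thought of as preferred-form lookups. A minimal sketch in Python, with invented example rules (real style guides involve far more context and editorial judgement than simple substitution can capture):

```python
import re

# A hypothetical, much-simplified style guide: each entry maps a
# discouraged variant to the client's preferred form.
STYLE_RULES = {
    r"\bdecision maker\b": "decision-maker",       # hyphenation preference
    r"\byoghurt\b": "yogurt",                      # variant spelling
    r"\bpeople with autism\b": "autistic people",  # preferred phrasing
}

def apply_style(text: str) -> str:
    """Apply each preferred form in turn. Real editorial judgement
    (context, capitalisation, exceptions) is not modelled here."""
    for pattern, preferred in STYLE_RULES.items():
        text = re.sub(pattern, preferred, text)
    return text

print(apply_style("Every decision maker likes yoghurt."))
# → Every decision-maker likes yogurt.
```

A guide of hundreds of such rules, plus the surrounding guidance on when each applies, is far too long to fit into a prompt reliably, which is part of the problem described below.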

These guides are too long and complex to be included in prompts to an AI system. In some cases, existing commercial AI systems may have enough examples of an organisation’s output to be able to imitate the style if instructed to, but they won’t necessarily be able to make the right decision if something new comes up. And for many, perhaps most, organisations, there won’t be enough samples available to the AI in the first place.

Finally, and importantly, there is currently a potential confidentiality issue with AI editing. Many current AI systems save the material submitted to them and use it to train future versions, which means that anything you ask them to edit could later be revealed to other users of the same system. AI editing is therefore unsuitable for anything that needs to be kept confidential or under embargo.

While AI can do a lot, it’s not yet anywhere near the level of a professional editor. Perhaps this will change in the future. We will keep an eye on developments.

In the near future, we may be able to use AI tools, with human guidance, to enhance and speed up editing. But right now, AI-based editors have a long way to go before they even come close to what human editors can do with thought and judgement.