My recurring nightmare is that AI will be used to rewrite and manipulate truth through deepfakes in text, images, and video. For example, the ability to generate images showing fabricated scenes from history could be exploited by Holocaust deniers or populist politicians. This has dire implications and is one of the main areas where ethics needs to be applied. I would like to share one good example of ethics applied judiciously, and one bad.
What is truth? OK, we’re getting existential again. Studying history, I’m fascinated by truth and the different versions of it that exist alongside each other, seen from different perspectives. It would be a pretty dull, monochrome world otherwise. For example, I think writers like Sathnam Sanghera are doing important work by challenging the ideas about the British Empire that we in the West have consumed since childhood. It’s not only possible but vital to hold more than one truth in your mind at the same time. Understanding that one man’s terrorist is another man’s freedom fighter does not mean I cannot recognise an atrocity, regardless of who perpetrated it against another human being. Social media algorithms hate this kind of cognitive dissonance.
Microsoft Copilot is a good example of where the ethical use of AI has been designed into the user interface. Ignoring all I know about the Second World War, I asked Copilot to create an image of Churchill landing on the beaches of Normandy.

I might have used this made-up image of Winston Churchill striding through the foam, disembarking from a landing craft under heavy machine-gun fire, for all kinds of nefarious purposes. I was pleasantly surprised by Copilot’s polite refusal to satisfy my fantasies: “I’m sorry, but I am not able to create images of historical events.” This was a relief, but you can start to imagine a world where unethical providers of this technology might be more willing to satisfy my requests, particularly once video is generated, not just images.
When I asked Copilot to show me an image of Trump in a uniform, it refused once again, saying that it could not create images of historical events. When I asked simply for an image of Trump, the ethics engine kicked in once more: “I’m sorry, but I am not able to create images of people.”

This was all reassuring, but I persisted and asked Substack’s image function to generate an image of Trump in a uniform. The result was poor, but the tool had no compunction about satisfying my query.

So where does this leave us? Powerful generative AI tools are creeping into our everyday online lives. My advice is to become familiar with them. Yes, they can save time, but perhaps more importantly, it’s vital to be aware of how others might already be using these tools. Ignorance is bliss until you’re duped. A hopeful future for AI depends on the application of ethical frameworks and safeguards. That’s down to the humans.
