NUJ Photographers’ Council condemns lack of transparency with Gen-AI images
AI-generated images of the Israel-Hamas war published without clear labelling risk eroding public confidence in photography.
The National Union of Journalists and its Photographers’ Council have condemned the creation and sale of AI-generated images depicting the Israel-Hamas war without clear labelling. Images published and sold in this way risk eroding public confidence in photography encountered online, and the union is calling on all platforms, including Adobe, to withdraw from their libraries any images that could mislead audiences into believing they are genuine images of the conflict.
Although images sold through the Adobe Stock library are labelled as AI-generated, that labelling is not always carried over when the images are published elsewhere. No systems yet exist to ensure that AI-generated work is labelled wherever it is used, which poses huge dangers for journalism.
The NUJ is concerned that these practices exacerbate the proliferation of disinformation, fuelling inaccuracies about the war. Unless transparency is at the core of the approaches adopted by platforms selling AI imagery, misinformation is likely to spread, promoting false narratives that run counter to the reporting of journalists working on the frontlines at grave personal risk.
Séamus Dooley, NUJ assistant general secretary, said:
“Serious ethical questions are raised by the use of AI to produce images during such a conflict. At a time when the public must be able to trust the accuracy of published images, the failure of companies to label AI-generated imagery poses a threat to the credibility of photographers who adhere to high ethical standards, including the NUJ’s Code of Conduct.
“As platforms seek to profit from images of the war, journalistic outlets are rightly taking action to verify the authenticity of images, and the NUJ encourages platforms to consider the wider harm to society if trust in imagery is weakened.”
Natasha Hirst, NUJ president, said:
“The NUJ has raised alarms about the dangers of AI-generated images being used in ways that mislead the public into thinking they reflect real-life events and scenes. Audiences must be able to trust that the images and videos they see are an accurate reflection of reality.
“If AI-generated images can be mistaken for real life, editorial integrity is undermined across the board. We urgently need regulation to protect journalism from being tainted by an influx of falsified images. In the meantime, we call on stock libraries to exercise greater responsibility and prevent misleading images from being circulated through their libraries.”
The NUJ Code of Conduct applies to all forms of journalism, including news photography and videography. Images and footage of current news events should not be manipulated, and illustrative or AI-generated images have no place in newsgathering.