
A brief overview of results from an informal survey conducted by a PSC student.
Artificial intelligence has been on the rise lately, with people finding more and more uses for it. As AI is trained on a wider range of information, it will become increasingly difficult to tell whether something is really human-made. This raises the question: can people, particularly PSC students and faculty, correctly identify whether news articles and photographs are human-made or AI-generated?
The answer: It’s tricky.
How is AI used now?
So far, well-established and respected news organizations such as The New York Times and The Washington Post have begun to find uses for AI in their newsrooms. The New York Times, for example, uses AI to generate automated voice narration of articles, to suggest headlines for initial drafts, and to locate data for investigative reporting. The Washington Post, meanwhile, personalizes users' web pages with articles they are more likely to read. Less responsible uses of AI involve the article itself: the piece is written entirely by AI, or some element of it, such as the headline, lead, or photograph, is AI-generated. Hoodline is one example of these unethical uses, as reported by CNN and Nieman Lab. Both outlets found that Hoodline generates articles with AI and attributes them to fake bylines.
To keep up with the news, people depend on smartphones, laptops, and other devices. Staying updated is a good thing, but it doesn't guarantee that the news was created by a human, or that it is entirely accurate. AI-generated content risks spreading misinformation and can plagiarize existing work, whether text or images, which in turn can fuel the spread of false news across social media.
Informal Survey Format
To find out whether people can tell the difference between AI-generated and human-made content, I conducted an informal study as a project for one of my classes. The survey had two sections: the first asked a few questions about participants' perceptions of the use of AI in journalism, and the second presented snippets of news articles and asked participants to identify whether each was created by artificial intelligence or by a human. As a follow-up, I asked how confident participants were in their answers.
Of the identification questions, the first included a headline and lead, the second included a headline and a photograph, and the third presented a side-by-side comparison of an AI-generated article and a human-made one. The questions were designed to examine how easily people can identify AI-generated language and photographs, both key elements of news articles.
Half of participants correctly noticed AI-generated language
Across the two questions that looked at language, an average of 50% of participants correctly identified the human-made text. This was the area where participants struggled the most. Most focused on how the information was formatted rather than on what the article actually said, reasoning that bolded text and bulleted lists signaled AI. The low rate of correctly spotting AI-generated language is worth noting: while AI can sometimes produce accurate content, readers should pay attention to an article's language and diction, because AI may not present correct, factual information, leading to misinformation.
The vast majority of participants correctly identified an AI-generated image
An overwhelming number of participants, around 86%, correctly distinguished the AI-generated image from the human-made one. Apart from the clear distortion of the faces and clothing, however, no participant noticed the phone pouch. It is a tiny detail, but a significant one, because it was depicted incorrectly: the AI-generated image showed a pouch worn around the neck, rather than the magnetic fabric pouch that students can carry anywhere. This matters because people's attention spans may not last long enough to look past the image, and if an image is inaccurate, readers may draw the wrong conclusions about a news story. And while the majority of participants identified the AI-generated image, that may simply be because artificial intelligence has not yet advanced enough to create realistic and accurate images.
More AI literacy among students and faculty is needed
Misinformation and disinformation are real problems in a world where technology is constantly advancing. We can't simply rely on technology to do our work, or to replace us entirely. Limits on, and transparency about, the use of AI are immensely important for both readers and news organizations: readers need to know that the information they consume is accurate and factual, and news organizations shouldn't rely too heavily on artificial intelligence. Clearly, readers need more awareness of how AI is used in news articles, and of how it could lead to false information.