
Artificial intelligence (AI) has become a popular topic of conversation in recent years (Dehman, 2023), and as advancements in AI technology continue, so does the list of questions and concerns about the future of AI-driven work (Engawi et al., 2021). New generative-AI tools become available to the public each year, with companies such as Google and Adobe advancing technology that allows anyone to create full, complex images from simple, text-based prompts within just a few clicks. As many of us have seen, these generated images can lead to disastrous and sometimes humorous results, while at other times producing unbelievably detailed and realistic images. In an industry that is riddled with misinformation and fake news and is often mistrusted by the public (Horton et al., 2023), false portrayals of agriculture created by generative-AI tools can be especially detrimental to the credibility and transparency the industry is working so hard to build. In an attempt to better understand and address this concern, we tested undergraduate students in an agricultural communications course on their ability to distinguish AI-generated from real images within a sample of photos depicting agricultural practices. Throughout the course, we then developed and taught strategies and tips for identifying AI-generated images, including hallmarks to watch for when judging image authenticity. After several rounds of practice in AI image detection, we tested the students again later in the semester with a new set of images. Students’ ability to identify AI-generated images improved after their in-class training, and they helped us further refine strategies for determining image authenticity. Having buy-in on the teaching and development of these practices also improved students’ self-efficacy and confidence in this task. We plan to expand the teaching of these strategies into additional courses and programs in the future.