Adobe released the ANZ version of its Future of Trust Study last week, and it paints a picture of a community keen on what AI can do for good but wary of how it could be misused.
A survey of consumers from Australia and New Zealand has revealed concerns over the maturity of generative AI (GenAI) technologies, the danger posed by deepfakes, and the spread of misinformation.
Conducted as part of a wider global study, Adobe’s Future of Trust Study 2024, Australian and New Zealand edition, polled 1,005 ANZ consumers over the age of 18 and found that people hold mixed views on AI.
For instance, 66 per cent of respondents felt that GenAI would make it easier to find information, while 55 per cent expected the technology to make them more productive. Only 9 per cent of people said they use AI regularly, but 63 per cent have plans to use it more in the coming year.
However, the ability of AI to create misinformation and deepfakes is of particular concern.
Eighty-two per cent of those polled said they were concerned about content being altered to create misinformation, with 38 per cent feeling that video and images are particularly at risk of being altered. That being the case, 32 per cent have either stopped using social media or slowed down their use of it, while 78 per cent feel that deepfakes and misinformation could have a negative impact on elections.
Given those fears, respondents felt there was a clear need for more work to be done. Eighty-seven per cent believe governments and tech companies should work together to protect the electoral process, while 80 per cent felt that political candidates should be barred from using AI-created content to promote themselves – especially when there is a lack of fact-checking tools available.
“Generative AI impacts a vast array of our society,” said Chandra Sinnathamby, director of digital media B2B strategy and GTM at Adobe Asia-Pacific, at a launch event for the study. “Whether it is the economic output that we produce here, it has the potential to have a real positive impact.”
However, Sinnathamby noted that this is only as long as the right guardrails and regulations are in place.
“[AI] has the potential to impact significant events in our society, like an election process,” Sinnathamby said, “and we know this year is quite a special year. More than half the world’s population, more than 4 billion people, are going to cast their vote across 64 countries, which is quite outstanding, and the risk of misinformation impacting the outcomes of that is quite significant.”
“It’s something for us to really think about – of how we can minimise those impacts.”
Jennifer Mulveny, Asia-Pacific director of government relations at Adobe, said in a statement that the study “underscores the importance of building media literacy among consumers, where they are not only alert to harmful deepfakes but have the tools to discern fact from fiction”.
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.