American Distrust in AI Grows, New Survey Reveals
In light of frequent scandals involving artificial intelligence (AI), a new survey reveals that most American adults lack trust in AI tools such as ChatGPT and worry about their potential misuse. The survey, conducted by the MITRE Corporation in collaboration with the Harris Poll, suggests that the public may be increasingly open to the idea of AI regulation.
Dwindling Faith in AI Safety
According to the survey, just 39% of the 2,063 U.S. adults polled believe that today's AI technology is safe and secure, a drop of nine percentage points from the previous survey conducted in November 2022. The perception of AI's safety and security appears to be in decline, reflecting growing unease among the general public.
Concerns Over Deepfakes and Malware Attacks
The survey also highlights specific fears: 82% of respondents expressed anxiety about deepfakes and other artificially engineered content, while 80% feared the potential use of AI in malware attacks. Respondents also voiced concerns about AI's role in identity theft, the harvesting of personal data, and the replacement of humans in the workplace, suggesting widespread apprehension about the technology's potential misuse.
Across Generations: Increased Skepticism
The survey indicates rising wariness of AI's impact across demographic groups. While 90% of boomers are worried about the impact of deepfakes, 72% of Gen Z members share similar concerns. Despite younger people's greater familiarity with and use of AI, concerns about its potential misuse, along with calls for stronger industry protections and regulation, remain high.
Strong Support for AI Regulation
The public's dwindling confidence in AI may be fueled by widespread negative news coverage of generative AI tools such as ChatGPT and Bing Chat. When asked whether the government should regulate AI, an overwhelming 85% of respondents agreed. The same percentage concurred that making AI safe and secure for public use should be a nationwide effort involving industry, government, and academia, and 72% believe the federal government should devote more time and funding to AI security research and development.
The Future of AI Regulation
Although some cybersecurity experts maintain that AI is not yet a particularly effective tool for creating malware, the growing skepticism surrounding the technology could shape the industry's efforts. Companies like OpenAI might be prompted to invest more in safeguarding the public from the products they release. Given the widespread support for regulation, it would not be surprising if governments began implementing AI rules sooner rather than later.
The increasing distrust of AI among American adults reflects broader societal unease about the technology's potential misuse. While AI promises significant benefits, there is a clear demand for stronger regulation and security measures. Companies in the AI industry need to take these concerns seriously and invest more in ensuring the safety of their products. It's not just about technological advancement, but also about maintaining public trust and ensuring AI serves everyone's best interests.