How Generative AI is Creating New Classes of Security Threats

Join top executives in San Francisco July 11-12 to hear how leaders are integrating and optimizing AI investments for success. Learn more

The promised AI revolution has arrived. OpenAIs ChatGPT set a new record for fastest growing user base, and the wave of generative AI has spilled over to other platforms, creating a huge shift in the tech world.

It’s also dramatically changing the threat landscape, and we’re starting to see some of these risks materialize.

Attackers use AI to improve phishing and fraud. Metas Linguistic model with 65 billion parameters it leaked, which will undoubtedly lead to more and better phishing attacks. We see new rapid injection attacks on a daily basis.

Users often feed sensitive business data into AI/ML-based services, leaving security teams busy supporting and controlling the use of these services. For example, Samsung engineers insert proprietary code into ChatGPT to assist in debugging, leaking sensitive data. A survey by Fishbowl showed that 68% of people who use ChatGPT for work don’t tell their bosses about it.


Transform 2023

Join us in San Francisco July 11-12, where top executives will share how they integrated and optimized AI investments for success and avoided common pitfalls.

subscribe now

Misuse of AI is increasingly on the minds of consumers, businesses and even government. The White House has announced new investments in AI research and upcoming assessments and public policies. The AI ‚Äč‚Äčrevolution is moving fast and has created four major classes of problems.

Asymmetry in the attacker-defender dynamic

Attackers will likely adopt and design AI faster than defenders, giving them a clear advantage. They will be able to launch sophisticated AI/ML-powered attacks on incredible scale at low cost.

Social engineering attacks will be the first to benefit from synthetic text, voice and images. Many of these attacks that require manual effort like phishing attempts impersonating the IRS or realtors pushing victims to wire money will become automated.

Attackers will be able to use these technologies to create better malicious code and launch new, more effective attacks at scale. For example, they will be able to quickly generate polymorphic code for malware that evades detection by signature-based systems.

One of the pioneers of AI, Geoffrey Hinton, recently made headlines when he told The New York Times that he regrets what he helped build because it’s hard to see how bad actors could be stopped from using it for bad things.

Security and artificial intelligence: further erosion of social trust

We’ve seen how quickly misinformation can spread thanks to social media. A University of Chicago Pearson Institute/AP-NORC survey shows that 91 percent of adults across the political spectrum believe disinformation is a problem, and nearly half are concerned that they have spread it. Put a car behind it and social trust can erode cheaper and faster.

Current AI/ML systems based on large language models (LLMs) are inherently limited in their knowledge, and when they don’t know how to respond, they invent things. This is often referred to as hallucinatory, an unintended consequence of this emerging technology. When looking for legitimate answers, lack of accuracy is a big problem.

This will betray human trust and create dramatic mistakes that will have dramatic consequences. A mayor in Australia, for example, says he could sue OpenAI for defamation after ChatGPT wrongly identified him as being jailed for corruption when he was actually the whistleblower in a case.

New attacks

Over the next decade, we will see a new generation of attacks on AI/ML systems.

Attackers will affect the classifiers that systems use to influence models and control outputs. They will create harmful models that will be indistinguishable from real models, which could cause real harm depending on how they are used. Rapid injection attacks will also become more common. Just a day after Microsoft introduced Bing Chat, a Stanford University student convinced the model to reveal its internal directives.

Attackers will kick off an arms race with contradictory ML tools that trick AI systems in various ways, poison the data they use, or extract sensitive data from the model.

Since more of our software code is generated by AI systems, attackers may be able to exploit inherent vulnerabilities inadvertently introduced by these systems to compromise applications at scale.

Scale externalities

The costs of building and operating large-scale models will create monopolies and barriers to entry which will lead to externalities that we may not yet be able to predict.

Ultimately, this will have a negative impact on citizens and consumers. Misinformation will become rampant as large-scale social engineering attacks target consumers who will have no means to protect themselves.

The federal government’s announcement that governance is imminent is a good start, but there is so much ground to be made up to address this AI race.

AI and security: what comes next

The non-profit Future of Life Institute has released an open letter calling for a pause in AI innovation. It’s gotten a lot of press coverage, with the likes of Elon Musk joining the crowd of interested parties, but hitting the pause button is simply not viable. Musk knows this too, apparently he changed course and started his own AI company to compete.

It has always been false to suggest that innovation should be stifled. Attackers will certainly not comply with such a request. We need more innovation and more action so we can ensure AI is used responsibly and ethically.

On the bright side, this also creates opportunities for innovative approaches to security using artificial intelligence. We will see improvements in threat hunting and behavioral analytics, but these innovations will take time and investment. Any new technology creates a paradigm shift and things always get worse before they get better. We’ve had a taste of the dystopian possibilities when AI is used by the wrong people, but we need to act now so security professionals can strategize and react when large-scale problems arise.

At this point, we were woefully unprepared for the future of AI.

Aakash Shah is CTO and co-founder of oak9.


Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including data engineers, can share data-related insights and innovations.

If you want to read cutting-edge ideas and up-to-date information, best practices and the future of data and data technology, join us at DataDecisionMakers.

You might even consider contributing your own article!

Read more from DataDecisionMakers

#Generative #Creating #Classes #Security #Threats

Leave a Reply

Your email address will not be published. Required fields are marked *