Deepfakes and other sophisticated ploys are coming fast and require heightened defenses.
At 2 a.m., you receive an urgent call from your grandson. He tells you he’s been arrested and needs bail money – and he pleads with you not to tell his parents. Only the call is not from your grandson. It’s not even from a person. It’s a “deepfake” – audio generated by artificial intelligence that cyberthieves use to commit fraud by cloning the voice of someone you trust.
This type of attack has become so common that the Federal Trade Commission issued a warning about it. Cybersecurity experts say companies need to be on alert for similar techniques, such as an employee receiving a call from the CEO instructing them to wire $500,000 to a vendor immediately.
“The pace of the lifecycle of these threats has increased enormously,” says Steven Jones, Chief Information Security Officer for Memphis-headquartered First Horizon Bank. “In the past, you would hear about a threat. Several months later, you might hear the threat’s been exploited, and several months after that, companies would start doing something about the threat. But now the threats are being realized immediately, and they need to be addressed immediately.”
When cyber experts talk about the new threat landscape, they often bring up generative AI. “AI has the same promise for bad actors to improve their productivity as it has for companies,” Jones says. “Because these AI models can bring in larger amounts of data, they can provide more context, making the emails and other communications bad actors send seem more like the person they are trying to impersonate.”
Security Implications of Generative AI
The Cloud Security Alliance (CSA), an industry trade group, recently produced its first white paper assessing the security implications of this popular technology. The general public sees generative AI as a way to draft emails, answer questions and translate documents. In the hands of bad actors, the tool can wreak havoc: it can quickly probe a network to determine the greatest vulnerabilities to attack. Indeed, the white paper recounted an exercise in which generative AI identified a weakness in a network by analyzing just 100 lines of its underlying code, then used that understanding to bypass certain security measures.
Generative AI can also provide “foothold assistance,” letting bad actors establish an initial presence on a network. It can perform reconnaissance, gathering personal information about executives or employees to impersonate them more effectively, as well as details about internal processes and technology.
This enables bad actors to produce legitimate-looking emails, free of spelling and grammar mistakes and filled with convincing details. For example, generative AI can be asked to create an email telling employees that passwords will be reset this week and that they should expect a link to do so.
“AI can execute faster than human beings can think,” says Illena Armstrong, President of CSA. “However, we bring our creativity, critical thinking, problem-solving skills, emotional intelligence and ability to make more nuanced calls to address a situation or problem that AI cannot.”
Business leaders, not technology alone, bear the responsibility of defending organizations from cyberattacks. Yet according to WSJ Intelligence’s recent survey of midmarket decision-makers, enhancing cybersecurity ranks only sixth among priorities for 2024.
Building a Strong Defense
A strong cyberdefense in the AI era starts with business leaders. “This problem is managed at a business level, not a technology level,” Jones says. “The executive management team and the board need to understand the problem, because these types of challenges are happening much more quickly than people are used to.” And each company must have a deep understanding of its most critical data, be it customer data, health data or intellectual property.
Jones says technology solutions are a critical part of the equation. For example, managed service providers can augment the security capabilities of a small business, providing round-the-clock protection, without requiring the company to stand up its own security infrastructure.
“Employees and clients need to be aware of these more sophisticated attacks across all communication channels,” Jones says. He adds that cybersecurity training should be targeted, since the threats the procurement department faces could differ markedly from those the legal department encounters. Giving employees an easy way to report suspicious activity is both important and empowering. “Establish a reward-and-recognition system for people who are following procedures,” Armstrong says. “Training needs a different spin given how quickly both good and bad actors are using AI. We have to foster a cybersecurity culture, where there are clear expectations, directives and policies in place.”
Jones notes that phishing attacks – which deceive people into revealing information – have risen significantly. In November, a startup received $15 million in seed funding to detect deepfakes and other AI-generated content. All this points to a need for greater awareness of, and a better response to, AI-driven cyberscams. “Banks and private industry have always been on the front lines of this cyberwarfare,” Jones says. “However, the landscape is changing, and even small businesses are going to have to safeguard themselves, because this isn’t science fiction. It’s happening now.”