
Bad Actors Using AI to Target Businesses Via BEC

November 8, 2023

Business Email Compromise: Bad actors use AI to advance their techniques while businesses fight back using AI themselves.

By Mike Vanderbilt, director in Baker Tilly’s data privacy and cybersecurity practice

Business Email Compromise – known simply as BEC – often looks like an innocuous email. Yet it’s one of the most financially damaging online crimes.

The FBI estimates BEC has cost companies and individuals more than $50 billion in the last decade. BEC exploits the fact that most companies rely on email to conduct business, and threat actors continue to advance their techniques, now incorporating Artificial Intelligence (AI) into their arsenal.

Business Email Compromise is a cyberattack that targets organizations and individuals who conduct financial transactions via email. (Pixabay royalty free image)

What is BEC?

BEC is a form of cyberattack that targets organizations and individuals who conduct financial transactions via email. BEC attackers use various techniques to impersonate legitimate business partners, suppliers, customers, or employees, then exploit that trust to request fraudulent payments or sensitive information.

BEC attacks often rely on exploiting human vulnerabilities rather than technical ones. Therefore, traditional security solutions such as antivirus software, firewalls and encryption may not be enough to prevent or detect them. BEC attackers use social engineering, phishing, spoofing and malware to gain access to email accounts, compromise email systems, or manipulate email content. They may also conduct extensive research on their targets, using publicly available information or data obtained from previous breaches, company websites or social media to craft convincing and personalized messages.

Some of the common types of BEC scams include:

  • Invoice fraud: The attacker poses as a trusted vendor and requests payment for fake or modified invoices.
  • CEO fraud: The attacker impersonates a senior executive and instructs an employee to make an urgent or confidential payment to a bank account.
  • Account compromise: The attacker compromises an employee’s email account and uses it to send fraudulent requests to other people or redirects legitimate payments to a bank account controlled by the attacker.
  • Data theft: The attacker requests sensitive information such as tax records, payroll data or customer details, often under the pretext of a compliance audit or legal matter.

Bad actors now leveraging AI in BEC

AI now helps bad actors generate more realistic and convincing phishing emails, text messages and even phone calls (using natural language processing and speech synthesis), making it more difficult for targets to identify the communications as malicious.

At more complex levels, threat actors use AI to help bypass the intended targets’ security systems. AI can also be used to identify targets based on specific professional or personal information. AI’s predictive analytics can also look at existing data sets and determine what attacks have been successful, helping bad actors create more targeted approaches.

Generative AI

Generative AI also poses a threat because it can make communications more accurate and believable, whether by simply improving grammar or by crafting messages in the local dialects, sayings, or slang of a target's region. Combine generative AI with details about the target gleaned from social media and the rest of their online presence, and the message becomes more personal, increasing the likelihood the target will take the bait. AI can also craft messages and stories that strike an emotional chord while impersonating a specific person's writing or communication style.

The ability to create extremely specific and targeted messages previously took significant time and resources. Generative AI allows bad actors to go from crafting a handful of quality phishing messages a day to thousands in the same time period. It can also discern what is most likely to work and build on those messages, iterating and adapting based on success rates.

As bad actors develop new Business Email Compromise tactics, businesses are using AI to develop defensive measures to counter BEC. (Pixabay royalty free image)

AI in BEC prevention

As bad actors develop new BEC tactics, businesses are also using AI to build better defensive measures. AI can be used in threat analysis, scanning messages and attachments to identify malicious communications and content before they reach the intended target.
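
To make the threat-analysis idea concrete, the sketch below shows the kind of text classifier an AI-based email filter might be built around. It is a minimal illustration, assuming Python with scikit-learn; the sample messages, labels and model are placeholder assumptions, not any particular vendor's detection pipeline.

```python
# Minimal sketch of AI-assisted email threat analysis: score incoming text
# for BEC-style language. Training examples and model choice are illustrative
# placeholders, not a production system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = suspected BEC/phishing, 0 = benign).
emails = [
    "Urgent: wire $48,500 to the new vendor account before 3pm today",
    "Please update the beneficiary bank details on invoice 2291 immediately",
    "Attached is the agenda for Thursday's operations review meeting",
    "Reminder: quarterly safety training is scheduled for next week",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus a simple linear model stand in for the much larger
# models commercial email-security tools train on.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

incoming = "Our CFO asked me to process an urgent confidential payment today"
score = model.predict_proba([incoming])[0][1]
print(f"Estimated BEC risk score: {score:.2f}")  # higher = more suspicious
```

In practice, content scoring like this would be combined with sender reputation, header analysis and far larger training sets before any message is blocked or flagged.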

AI can also be used for anomalous activity detection, monitoring user activity and patterns and flagging suspicious behaviors, such as financial transactions that do not align with the user's typical habits. This form of AI can help identify bad actors outside the organization as well as insider threats. AI has also improved threat and intelligence sharing by creating a more connected and collaborative defense within and across industries.
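
The sketch below illustrates what transaction-level anomaly detection can look like, assuming Python with scikit-learn's IsolationForest; the features (payment amount and hour of day), the sample history and the contamination setting are assumptions chosen for illustration, not a production design.

```python
# Minimal sketch of anomalous-activity detection: learn one employee's normal
# payment-request pattern, then flag requests that deviate from it.
from sklearn.ensemble import IsolationForest

# Hypothetical history for one employee: [amount in USD, hour of day sent].
history = [
    [1200, 10], [950, 11], [1800, 14], [1500, 9],
    [1100, 15], [1700, 10], [1300, 13], [1600, 16],
]

detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

# A sudden large transfer requested at 2 a.m. should stand out from the history.
new_requests = [[1400, 11], [48500, 2]]
for (amount, hour), flag in zip(new_requests, detector.predict(new_requests)):
    status = "REVIEW" if flag == -1 else "ok"  # predict() returns -1 for anomalies
    print(f"amount=${amount}, hour={hour}:00 -> {status}")
```

A real deployment would use far richer features (payee history, device, location) and route flagged requests to a human reviewer rather than blocking them outright.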

The same AI technology used by bad actors can also be used to create decoy accounts that trap or lure them. Identifying bad actors this way helps authorities apprehend the criminals while building a "blacklist" to help prevent future attacks.

Industry impact of BEC scams

The impact of BEC scams can be devastating for both organizations and individuals. Beyond the direct financial losses, BEC victims may also suffer legal liability, regulatory fines, and reputational damage. Moreover, BEC attacks can expose other security gaps and vulnerabilities that can be exploited by other cybercriminals.

To protect themselves from BEC fraud, organizations and individuals should adopt a comprehensive and proactive approach that includes the following measures:

  • Educate and train employees on how to recognize and report suspicious emails and requests.
  • Implement strong email security policies and controls.
  • Verify the identity and legitimacy of email senders and recipients, including whether the sending domain publishes email authentication records such as DMARC (see the lookup sketch after this list).
  • Establish clear and consistent procedures for email communication and transactions.
  • Monitor email activity and report any suspicious or unusual transactions or requests to the appropriate authorities.
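
As one small, concrete piece of verifying sender legitimacy (referenced in the list above), the sketch below checks whether a sending domain publishes a DMARC policy. It assumes the third-party dnspython package; real email authentication controls (SPF, DKIM and DMARC enforcement at the gateway) go well beyond a single DNS lookup.

```python
# Minimal sketch: look up a domain's published DMARC policy via DNS.
# Requires the dnspython package (pip install dnspython).
from typing import Optional

import dns.resolver


def dmarc_policy(domain: str) -> Optional[str]:
    """Return the domain's published DMARC record, or None if there isn't one."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            return record
    return None


# Example: a domain with no DMARC record is easier to spoof convincingly.
print(dmarc_policy("example.com"))  # e.g. "v=DMARC1; p=reject; ..." or None
```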

BEC is a serious and costly cybercrime that can affect anyone who uses email for business or personal purposes, and AI has increased BEC’s effectiveness. However, by being aware of BEC risks and incorporating preventative measures, including AI, businesses can protect themselves from falling prey to these nefarious schemes.

Mike Vanderbilt

Mike Vanderbilt is a director with Baker Tilly’s data privacy and cybersecurity practice. As a senior privacy, cybersecurity, and operations leader, he has more than 20 years of experience in multiple industries including higher education, software and technology, professional services, government contracting, healthcare and e-commerce.

 
