Cybercrime & Weaponised AI
As artificial intelligence transforms industries, it is also arming cybercriminals with powerful tools for hacking, fraud, ransomware, and deepfakes—posing unprecedented risks to global security
24-08-2025
Lately, it feels like everywhere you look, there’s talk about how AI is making life easier: helping us work faster, smarter, and with less effort. It’s all about boosting efficiency and saving time. But only now is it starting to hit us: in our rush to digitise everything, we may have overlooked a few lurking dangers. Back when technology was still finding its feet, no one worried too much. But now? We’re standing at the very peak of innovation. And with that comes a new set of problems, cybercrime being one of the biggest.
Ultimately, AI is still software. It’s built on code, algorithms, and digital frameworks. Everything about it is computerised. That shiny new AI tool you’re using? It’s powered by the same devices that cybercriminals love to target: your laptop, desktop, servers, you name it.
Cybercrime isn’t new; it encompasses activities such as phishing emails, online scams, and the notorious ransomware attacks. For a while, these felt like rare occurrences. However, over the last two or three years, especially with AI becoming our go-to assistant for almost everything, these threats have become increasingly prevalent. It’s a reminder that while we’re busy making things smarter, we also need to get a little smarter about the risks.
A ‘TRM’ blog titled “The Rise of AI-Enabled Crime: Exploring the evolution, risks, and responses to AI-powered criminal enterprises” articulates this accurately: the very transformative technology that has powered global industries from healthcare to climate modelling, and that has improved efficiency and security in workplaces, is now being leveraged for criminal purposes that pose a critical threat to global security and societal stability. AI is being used to carry out hacks, to conduct frauds (not just financial ones, but verbal and imagery-based ones using deepfake technology), and to mount cyber-attacks at a far greater scale than before. “As AI technology becomes more sophisticated, so will the ways in which criminals leverage it.”
But before we go into detail on what these crimes are and what role AI plays in them, let’s clarify what cybercrime means, what types of crime come under its purview, how it operates, and its different elements. The Council of Europe’s Convention on Cybercrime of 2001, Article 1(a), defines a computer system as ‘any device, or a group of interconnected or related devices, one or more of which, pursuant to a programme, performs automatic processing of data’. The League of Arab States’ Arab Convention on Combating Information Technology Offences of 2010, Article 2(3), defines data as ‘all that may be stored, processed, generated and transferred by means of information technology, such as numbers, letters, symbols, etc.’
There are predominantly two types of cybercrime: cyber-dependent crimes and cyber-enabled crimes. The former are defined as ‘crimes that can only be committed using computers, computer networks or other forms of information and communication technologies (ICTs)’. The latter are defined as ‘traditional crimes facilitated by the internet and digital technologies (i.e. crimes that can be committed without a computer but are enabled by ICTs)’. The key distinction between the two: in cyber-dependent crimes, ICTs are the main target, while in cyber-enabled crimes, ICTs are the means to commit the crime and as such part of the modus operandi.
Categories of Cybercrime:
1. Hacking: the unauthorised access to systems, network and data
2. Denial of Service (DoS) attacks: the use of a computer to conduct a coordinated attack with the intention of overwhelming a server.
3. System Interference, as defined in Article 5 of the Budapest Convention: “intentional, serious hindering without right of the functioning of a computer system by inputting, transmitting, damaging, deleting, deteriorating, altering or suppressing computer data.”
4. Malware: computer code or software introduced into computers and computer systems in many different ways, which can have inconvenient, harmful, or even destructive and irreparable effects.
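The DoS category above can be made concrete from the defender’s side. The following minimal Python sketch (illustrative only, not production code; the class name, limits, and client address are all assumptions) shows a sliding-window rate limiter, one common way servers blunt simple request floods:

```python
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: allow at most `max_requests` per
    client within any `window`-second span."""

    def __init__(self, max_requests=100, window=1.0):
        self.max_requests = max_requests
        self.window = window
        self.hits = defaultdict(deque)  # client id -> request timestamps

    def allow(self, client, now):
        q = self.hits[client]
        # Evict timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # flood: reject without recording
        q.append(now)
        return True

# A client firing 4 requests in 0.3 s against a 3-per-second cap.
limiter = RateLimiter(max_requests=3, window=1.0)
verdicts = [limiter.allow("203.0.113.7", t) for t in (0.0, 0.1, 0.2, 0.3)]
# verdicts == [True, True, True, False]
```

Real defences operate at far larger scale and across distributed infrastructure, but the principle is the same: bound how much of a resource any one client can consume.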
Types of Malicious Programmes:
1. Viruses and worms (infectious malware): infect other files and programs, modifying them to create copies of themselves on the infected device; can be programmed to perform other actions such as deleting files or displaying advertising; viruses require user intervention in order to spread; worms replicate themselves automatically, spreading from one machine to another.
2. Trojans (hidden malware): disguised as ‘benign’ software, they invite the user to run them, creating a gateway for other harmful programs; they are unable to self-replicate and cannot spread on their own, so user intervention is required; they carry out their actions unnoticed, collecting information or controlling the host remotely, compromising the confidentiality and safety of users or hindering their network.
3. Spyware (data-collecting malware): collects information from the device where it is installed and relays it to the person who had planted it; can spy on the user’s behaviour on the internet – data, contacts, usage habits, visited pages, apps run, connection details, applications installed on the device, etc; its ultimate consequences may be serious, as it can escalate to identity theft.
4. Rogue software: usually downloaded and installed from the internet; it deceives users into believing that their device is infected, then offers to install a free trial version of anti-malware and a paid version to remove the alleged infection.
5. Ransomware: programs that encrypt (cypher or encode) important files for the user, making them inaccessible, to then request payment of a large ‘ransom’ in order to receive the password to regain access to the ‘blocked’ or ‘encrypted’ information.
6. Cryptoransomware: a form of ransomware that infects a user’s digital device, encrypts the user’s documents and threatens to delete files and data if the victim does not pay the ransom.
7. Doxware: a form of cryptoransomware that the perpetrators use against victims, by which user data is released (i.e. made public) if ransom is not paid to decrypt the files and data.
A digital footprint is the data left behind by ICT users, which reveals information about what they did or about their person, including age, gender, race, ethnicity, nationality, sexual orientation, thoughts, preferences, habits, hobbies, medical history, concerns, psychological disorders, employment status, affiliations, relationships, geolocation, routines and any other information that is shared or managed by ICTs. A digital footprint can be passive, active, or both. An active digital footprint is created by data actively provided by the user, such as personal information, videos, images, and comments posted on apps, websites, bulletin boards, social media and other online forums. A passive digital footprint, on the other hand, is data obtained by the device or system without the user taking any direct action or being aware of it (e.g., internet browsing history).
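The active/passive distinction can be sketched in code. This toy Python example (all source labels are hypothetical, invented for illustration) sorts footprint records into the two groups described above:

```python
# Hypothetical sketch: classifying digital-footprint records as
# "active" (data the user knowingly shared) or "passive" (data
# collected without direct user action). Source names are illustrative.

ACTIVE_SOURCES = {"social_media_post", "forum_comment", "uploaded_photo"}
PASSIVE_SOURCES = {"browsing_history", "geolocation_log", "device_fingerprint"}

def classify_footprint(records):
    """Split footprint records into active, passive, and unknown groups."""
    footprint = {"active": [], "passive": [], "unknown": []}
    for record in records:
        source = record.get("source")
        if source in ACTIVE_SOURCES:
            footprint["active"].append(record)
        elif source in PASSIVE_SOURCES:
            footprint["passive"].append(record)
        else:
            footprint["unknown"].append(record)
    return footprint

records = [
    {"source": "social_media_post", "data": "holiday photo caption"},
    {"source": "browsing_history", "data": "news site visits"},
]
result = classify_footprint(records)  # one active, one passive record
```

The point of the sketch is that the same categorisation criminals use when profiling a victim can be run by the user themselves as a privacy audit.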
As has been identified over time, cybercrime can permeate multiple borders, which causes significant challenges in determining the location of the crime. Establishing the relevant jurisdiction(s) is therefore of prime importance in allowing law enforcement and criminal justice authorities to act. Such authorities include the National Cyber Crime Unit under the National Crime Agency in the UK, the Criminal Investigative Department under the Financial Crime Unit at the Cyprus Police Headquarters, the Police Cyber Crime Prevention Unit in Sierra Leone, the Technological Crimes Investigations Unit of the National Directorate of the Judicial Investigative Police in Ecuador, INTERPOL’s Cyber Fusion Centre, NATO, the National Cyber Forensics and Training Alliance in the US, and more. This is where the concept of ‘National Sovereignty’ comes into play. It is the basis of the legal framework and of the state authority’s application and granting of rights and protections. It is the basis on which states agree to be part of any international convention and consent to international standards on international cooperation. Its twin concept is ‘Jurisdiction’. In the context of criminal law, jurisdiction translates into the power of national authorities to investigate, prosecute, adjudicate and sentence crime, protecting rights and sanctioning illicit conduct. Jurisdiction is the criminal justice system’s expression of national sovereignty.
There are many examples globally where this principle of national sovereignty has been applied to cyberspace.
1. Malaysia: The Computer Crimes Act of 1997 established the state’s jurisdiction over cybercrime based on nationality, wherein Article 9 of this Act holds that the “provisions of this Act shall, in relation to any person, whatever his nationality or citizenship, have effect outside as well as within Malaysia and where an offence under this Act is committed by any person in any place outside Malaysia, he may be dealt with in respect of such offence as if it was committed at any place within Malaysia.”
2. Kenya: Under Section 66 of the Computer Misuse and Cybercrimes Act of 2018, jurisdiction is established by the nationality of the perpetrator or victim.
3. Tanzania: Article 30 of the Cybercrimes Act of 2015 states that “courts shall have jurisdiction to try any offender under this Act where an act or omission constituting an offence is committed wholly or in part – (a) within the United Republic of Tanzania; (b) on a ship or aircraft registered in the United Republic of Tanzania; (c) by a national of the United Republic of Tanzania; (d) by a national of the United Republic of Tanzania who resides outside United Republic of Tanzania, if the act or omission would equally constitute an offence under a law of that country.”
4. UK: The Court of Appeals in R v. Sheppard and Anor (2010) upheld the application of the UK Public Order Act of 1986 to racially inflammatory material posted on a website hosted by a US server, and the conviction of two UK residents for posting this material.
In October 2021, UNODC’s Global Programme on Cybercrime supported El Salvador’s specialised cybercrime police unit in connecting to INTERPOL’s Operation Tantalio/Guardian Angel, which was investigating a network of online child sexual abuse and exploitation distributing material via social media in Ecuador, Ghana, Guatemala, Indonesia, Mexico, Pakistan and Vietnam. While the rate of cybercrime is increasing, states’ ability to address it is not, especially now with AI taking the lead. The main reason for this deficit in national capacity is limited human, financial and technical resources. The TRM blog diagnoses the problem in more detail:
“Cybercriminals, scammers, and even nation-state actors are increasingly incorporating artificial intelligence (AI) into their operations, making cyberattacks more scalable, deceptive, and effective. Recent reports by the U.S. Treasury Department highlight several alarming trends in how AI is being weaponised, particularly within the financial sector.”
The automation of cyberattacks has reached an alarming level with the integration of AI-driven tools. These technologies are now enabling the large-scale execution of phishing campaigns, producing highly personalised and convincing messages that can easily deceive recipients. Furthermore, malware has become significantly more advanced, using artificial intelligence to adapt dynamically and in real-time, thereby evading detection by conventional cybersecurity systems. A U.S. Treasury report published in March 2024 highlights that generative AI is increasingly being exploited by threat actors to develop sophisticated malware and enhance their cyberattack capabilities—tools that were once accessible only to highly resourced actors. This evolution has also lowered the barrier to entry, allowing even relatively unskilled cybercriminals to conduct potent and damaging attacks with minimal effort.
The rise of deepfakes and synthetic media has introduced a new and dangerous dimension to cybercrime. Criminals are increasingly leveraging AI-generated deepfakes—both audio and video—to impersonate executives, public officials, or other trusted individuals. These sophisticated fabrications are being deployed in a variety of fraud schemes, including Business Email Compromise (BEC), financial extortion, and social engineering scams. In response to this growing threat, the Financial Crimes Enforcement Network (FinCEN) issued an alert in November 2024, noting a sharp increase in deepfake-related suspicious activity reports filed by banks and financial institutions. Additionally, artificial intelligence is being used to create synthetic identities by merging real data from multiple individuals to construct entirely fictitious yet highly believable personas. These synthetic identities are then exploited to fraudulently open bank accounts, apply for credit cards and loans, and establish cryptocurrency wallets, enabling a wide range of financial crimes.
Artificial intelligence is significantly enhancing the sophistication and effectiveness of cyberattacks. In the case of ransomware, AI is used to analyze targeted systems and identify the most valuable or sensitive data to encrypt, thereby maximizing the attackers’ leverage when demanding ransom payments. Additionally, nation-state actors are increasingly incorporating AI into their cyber-espionage strategies. Through machine learning, they are able to bypass traditional security protocols and conduct stealthy, long-term infiltrations of critical and sensitive systems, making detection and prevention far more challenging for cybersecurity teams.
A ‘Cutter’ article mentions the recent defence mechanisms being employed globally to overcome these challenges. It articulates the following: As deepfake technologies become increasingly advanced and accessible, cybersecurity vendors are responding by integrating AI-powered detection and mitigation tools across their platforms. These integrations span multiple dimensions of defence, from real-time monitoring to forensic investigations and behavioural analysis.
1. Multimodal Detection Engines
* Function: Analyze multiple data types simultaneously — video, audio, text, and behavioral cues — to detect inconsistencies that signal manipulated or synthetic media.
* Technology: Utilizes neural networks and cross-referencing algorithms to improve accuracy and reliability of deepfake detection.
* Use Cases: High-stakes scenarios such as identity verification, corporate communication, and legal proceedings.
* Example:
Reality Defender: Offers real-time deepfake detection across formats (video, text, audio, image), targeting enterprises, governments, and platforms. Accessible via API/web interface, with continuous AI model updates to keep pace with evolving threats.
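As a rough illustration of how a multimodal engine might combine per-modality signals, here is a simplified Python sketch. It is not Reality Defender’s or any vendor’s actual method; the weights, threshold, and score values are assumptions:

```python
# Toy score fusion: combine per-modality "synthetic media" scores
# (each in [0, 1], higher = more likely manipulated) into one verdict.

def fuse_modalities(scores, weights=None, threshold=0.5):
    """Weighted average of per-modality manipulation scores.

    scores  -- dict like {"video": 0.9, "audio": 0.7, "text": 0.1}
    weights -- optional per-modality weights (default: equal)
    Returns (fused_score, is_suspect).
    """
    if weights is None:
        weights = {m: 1.0 for m in scores}
    total_weight = sum(weights[m] for m in scores)
    fused = sum(scores[m] * weights[m] for m in scores) / total_weight
    return fused, fused >= threshold

# A clip whose video and audio tracks look manipulated but whose
# transcript reads normally: cross-referencing still flags it.
fused, suspect = fuse_modalities({"video": 0.9, "audio": 0.7, "text": 0.1})
```

The value of cross-referencing is visible even in this toy: a forger who fools one detector rarely fools all of them at once, so the fused score degrades gracefully.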
2. Real-Time Threat Mitigation
* Function: Stops deepfake attacks during live interactions, preventing them from escalating.
* Features:
CAPTCHA-style AI challenges
Anomaly detection during live video/audio sessions
Seamless integration with browsers and video conferencing tools
* Use Cases: Preventing impersonation in Zoom/Teams calls, protecting financial/IP data, and secure customer interactions.
* Examples:
McAfee Deepfake Detector: Uses transformer-based neural networks for browser-based real-time scanning.
TC&C Deepfake Guard: Monitors live sessions, analyzing audio/visual/linguistic cues, and deploying CAPTCHA-style interventions when suspicious activity is flagged.
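The anomaly-detection idea behind such live monitoring can be sketched with a simple z-score test. This toy Python example is not how McAfee’s or TC&C’s products actually work; the blink-interval feature, the sample values, and the threshold are all assumptions chosen for illustration:

```python
import statistics

def is_anomalous(baseline, new_value, z_threshold=3.0):
    """Return True if new_value deviates more than z_threshold
    standard deviations from the baseline samples."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > z_threshold

# Baseline of observed blink intervals (seconds) from earlier in the
# session; some deepfake pipelines reproduce blinking poorly.
baseline = [3.1, 2.9, 3.0, 3.2, 2.8, 3.0]
normal_frame = is_anomalous(baseline, 3.1)    # within the baseline
suspect_frame = is_anomalous(baseline, 12.0)  # far outside it
```

In a live system, a flagged measurement would not end the call; it would trigger the CAPTCHA-style challenge described above, asking the participant to do something a pre-rendered fake cannot.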
3. Forensic & Investigative Tools
* Function: Provide post-incident analysis and digital media validation to determine whether content is synthetic.
* Techniques:
Pixel-level analysis
Audio and voice forensics
File structure & metadata inspection
* Use Cases:
Fraud investigations
Legal proceedings
Verification of digital evidence
* Examples:
Sensity AI: A multilayered cloud-based platform that examines uploaded files for signs of manipulation (e.g., face morphs, lip syncs, face reenactments).
pi-labs (India): Offers tools like Authentify and pi-sense for synthetic media detection and digital evidence authentication — used by BFSI sectors and law enforcement.
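File structure and metadata inspection, the simplest of the techniques listed, can be illustrated with a toy heuristic. The tag names, timestamp encoding, and the list of suspect software strings below are invented for illustration; real platforms such as Sensity AI go much deeper, down to pixel-level analysis:

```python
# Illustrative metadata triage: surface findings that warrant a
# closer forensic look. All tag names and values are hypothetical.

SUSPECT_SOFTWARE = {"faceswap", "deepfacelab", "unknown_gan_toolkit"}

def inspect_metadata(metadata):
    """Return a list of findings that warrant closer review."""
    findings = []
    software = metadata.get("software", "").lower()
    if any(tag in software for tag in SUSPECT_SOFTWARE):
        findings.append(f"suspect software tag: {software}")
    if "created" in metadata and "modified" in metadata:
        if metadata["modified"] < metadata["created"]:
            findings.append("modified timestamp precedes creation")
    if not metadata.get("camera_model"):
        findings.append("no camera model recorded")
    return findings

sample = {"software": "FaceSwap 2.0", "created": 100, "modified": 50}
findings = inspect_metadata(sample)  # three findings for this file
```

Metadata is trivially forgeable, which is why it is only a first-pass filter: a clean report proves nothing, but a dirty one focuses the expensive pixel- and audio-level analysis where it is most likely to pay off.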
4. Behavioral Biometrics & Voice Analysis
* Function: Detect deepfakes by analyzing human behavioral and physiological traits (e.g., voice patterns, facial expressions, micro-movements).
* Techniques:
Voice biometrics: Analysis of pitch, cadence, and speech patterns
Facial biometrics: Real-time assessment of expressions and eye movements
Conversational biometrics: Natural language processing of grammar and vocabulary usage
* Use Cases:
Secure identity verification
Fraud prevention in call centers
Remote onboarding
* Examples:
Nuance Gatekeeper: AI-powered platform replacing traditional authentication with biometric and linguistic analysis — widely used in finance, telecom, and healthcare.
iProov: Uses “dynamic liveness detection” to distinguish live human presence from deepfakes or video replays. It’s employed by governments and security-sensitive industries.
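At its simplest, the voice-biometrics idea reduces to comparing a sample’s acoustic features against an enrolled profile. This Python sketch assumes three made-up features (mean pitch, speech rate, pause ratio) and a naive per-feature tolerance check; commercial systems such as Nuance Gatekeeper use far richer models:

```python
def verify_speaker(enrolled, sample, tolerance=0.10):
    """Accept only if every feature lies within `tolerance`
    (relative deviation) of the enrolled value."""
    return all(abs(e - s) / e <= tolerance
               for e, s in zip(enrolled, sample))

# Feature vectors: [mean pitch (Hz), speech rate (syll/s), pause ratio]
enrolled = [180.0, 4.2, 0.15]   # stored voiceprint of the real speaker
genuine  = [178.0, 4.1, 0.16]   # live sample from the same speaker
spoofed  = [140.0, 6.0, 0.05]   # synthetic voice with different traits

accept_genuine = verify_speaker(enrolled, genuine)
accept_spoofed = verify_speaker(enrolled, spoofed)
```

Modern voice cloning can mimic pitch and cadence closely, which is why production systems layer conversational biometrics and liveness checks on top of acoustic matching rather than relying on any single signal.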
We’re in an era where threats are outpacing solutions at a terrifying speed. It's a relentless avalanche of problems—each more sophisticated, more elusive than the last. These crimes slip through the cracks with such ease, so flawlessly executed, they often go undetected until the damage is done.
It’s the age-old dilemma: to defeat the devil, one must think like the devil. But the truth is, too few people truly grasp the depth of what we’re up against. And as AI accelerates at a pace we can barely comprehend, the gap between what’s possible and what’s protectable grows wider by the day. We’re racing against a machine that never sleeps—and right now, we’re falling behind.