20 June, 2024 10:27 AM IST | Mumbai | Asif Rizvi
The recent surge of deepfake videos has become a cause of concern for prominent personalities including celebrities, politicians and even social media influencers. Both the public and law enforcement are worried about the growing misuse of Artificial Intelligence (AI), especially by cyber fraudsters and scammers.
Experts believe that the potential for abuse grows as artificial intelligence technology develops. Scammers could increasingly use AI to carry out elaborate fraud schemes, posing serious challenges for individuals, companies, and government agencies.
The use of deepfake technology is among the most concerning developments. Deepfakes are AI-generated audio or video recordings that bear a remarkable resemblance to real people. Using this technology, con artists can produce phoney videos of famous personalities or company leaders, frequently with the intention of scamming viewers and making easy money.
A deepfake is typically defined as an extremely lifelike but phoney image, video, or audio clip that purports to show someone saying or doing something they never actually said or did.
An official said, "Artificial Intelligence is making phishing attacks more effective. AI algorithms are used by scammers to create phishing emails that are persuasive, tailored, and hard to differentiate from real emails."
He said that AI creates spear phishing emails that look extremely authentic by analysing internet activity and social media profiles. These deceive recipients into disclosing private information or opening harmful links, paving the way for cyber fraud.
Ritesh Bhatia, founder and director of V4WEB, explained, "In recent years, the misuse of artificial intelligence has revolutionised the methods employed by cybercriminals, enabling them to commit crimes with unprecedented ease, speed, and precision. One particularly insidious tool, 'deepnudes', allows individuals to generate realistic nude images of women from their photos within seconds. This has become a powerful weapon for blackmail, causing significant emotional and psychological harm to victims."
He added, "Additionally, AI-driven voice cloning scams are on the rise, with several individuals transferring large sums of money to fraudsters impersonating their relatives or children with startling accuracy. The proliferation of deepfakes extends beyond the political realm, infiltrating the corporate sector as well. Employees are being deceived into transferring funds or divulging sensitive information, believing they are following the instructions of their superiors. These deepfakes are meticulously crafted, making it nearly impossible to distinguish them from genuine communications."
Cyber crime investigator Bhatia added, "The rapid advancement of AI technology, while offering numerous benefits, also poses severe risks. It is imperative to address these challenges through robust regulations, improved detection technologies, and public awareness to mitigate the potential harms and safeguard individuals and institutions from the nefarious exploits of cybercriminals."
Meanwhile, scammers may use AI to mimic the writing style of business executives and send fraudulent emails to employees requesting wire transfers or sensitive information, eventually leading to a major cyber scam or online fraud.
AI-powered tools can synthesise a clone of an individual's voice by analysing recordings of them speaking. Con artists use these clones to pose as victims over the phone, fooling loved ones or coworkers into parting with private information or cash.
Experts believe that Retrieval-based Voice Conversion (RVC), an AI voice-cloning tool, is suspected of being used by cyber fraudsters to clone voices from speech samples extracted from social media posts, making it easier for them to deceive victims.
They suggest that to combat AI-driven scams and frauds, law enforcement agencies, governments, tech companies, and cyber security firms must invest in advanced detection methods and public awareness campaigns.