Can AI Steal Your Face? The Laws Fighting Back

The question "Can AI steal your face?" is no longer merely theoretical; it is rapidly becoming a reality. The threat involves harvesting biometric information, scraping pictures from the internet, and creating deepfakes. Legislators are racing to write new rules on consent and data protection, but critics warn that lawmaking is out of step with the technology. In this article, we look at how AI can steal your data and your face, and at the legal protections fighting back.

What is Artificial Intelligence (AI)?

Artificial intelligence (AI) is the technology that allows computers to mimic human thought processes and behaviors, such as learning, problem-solving, and decision-making. AI offers society many benefits, including better medical diagnoses, driverless cars, intelligent manufacturing, and cybersecurity. It has been used for years in sectors such as marketing, entertainment, and finance: it is what lets a travel app pick the most efficient route, or a smart assistant like Alexa learn our preferences and habits. Yet even as AI increases productivity, reduces human error, and enables new discoveries, it is also helping criminals run more convincing frauds.

How Are Cybercriminals Using AI to Steal Faces and Run Scams?

Cybercrime is a serious and expanding problem. By 2028, the worldwide cost of cybercrime is expected to reach $13.82 trillion (almost €12.6 trillion), according to Statista.

That surge is largely driven by fraudsters turning AI to their advantage, and AI-powered scams are beginning to drastically reshape the online fraud landscape. In one real-life case, three Canadian men were duped by deepfake videos of Elon Musk and Justin Trudeau; believing the videos were genuine, they invested $373,000 (€342,000) and lost everything. More recently, the bank Santander produced deepfake videos of lesser-known individuals to demonstrate just how realistic these videos can be.

These deepfakes are not amateur attempts; they are highly sophisticated. Nor are they isolated events: they are part of a growing wave of AI-enhanced frauds that seriously harm people and undermine our confidence in what we see and hear.

Voice cloning, sometimes known as audio deepfakes, is another hazard in today's environment. In 2019, the CEO of a UK energy company authorized a €220,000 transfer because he believed he was speaking with the CEO of the parent company.

And this technology is only improving. A recording of just two minutes can be enough to impersonate someone, although the longer the sample, the more realistic the result. Chatbots and AI-generated writing are also becoming frighteningly sophisticated: they can craft phishing emails that mimic a colleague's writing style, or pose as customer support agents.

Then there is manipulation on social media. Criminals can create an army of fake accounts, each appearing to belong to a genuine individual with a distinct personality. By sharing and interacting with malicious messages, these accounts lend the schemes extra legitimacy.

Organized criminals take their hoaxes to a new level of credibility when they combine these strategies. To protect people from biometric data theft, voice cloning, and related threats, governments around the world have created new laws and amended existing ones.

Laws Fighting Back Against AI Facial Theft

Governments throughout the globe are responding to these threats by amending current laws and enacting new ones to better protect biometric data and address harms unique to artificial intelligence.

European Union

AI Act: This extensive regulation categorizes AI systems by risk level. Certain biometric identification techniques and other high-risk uses are prohibited. The act sets stringent rules for law enforcement's use of remote biometric identification and forbids the untargeted scraping of facial images from the internet.

GDPR: The General Data Protection Regulation already treats biometric data as a special category requiring careful handling. People must give explicit consent before it is used, and they have the right to ask for their data to be deleted.

United States

Federal regulations: The US has no single comprehensive federal law on AI or data privacy. Regulatory agencies such as the Federal Trade Commission (FTC) are closely scrutinizing the use of AI to ensure businesses do not use biometric data in unfair or deceptive ways. In addition, the Department of Justice (DOJ) restricts bulk transfers of sensitive data to "countries of concern."

Illinois: The Biometric Information Privacy Act (BIPA) is one of the most stringent laws, mandating informed consent before biometric data is collected.

California: Under the California Consumer Privacy Act (CCPA), consumers have the right to know what biometric information is collected and to request its deletion.

Texas: Under the Capture or Use of Biometric Identifier Act (CUBI), facial geometry and other identifiers cannot be captured without informed consent.

Denmark

Denmark is proposing legislation that would explicitly grant citizens legal rights over their voice, face, and likeness. If enacted, the law would prohibit AI from using a person's biometric information without that person's express consent.

India

Digital Personal Data Protection Act (DPDPA): Enacted in 2023, the DPDPA provides a framework for processing personal data, including biometrics. Organizations must obtain clear, informed consent for a "lawful purpose" before collecting biometric information.

Puttaswamy Judgment: This landmark 2017 Supreme Court ruling established privacy as a fundamental right under the Indian Constitution, strengthening the legal basis for challenging improper uses of facial recognition.

Controlling how AI uses our faces is an ongoing battle, one that balances the fundamental right to privacy against the demands of innovation and security. As the technology evolves, so must the rules and ethical principles governing AI, demanding constant attention from the public, businesses, and legislators.

Conclusion

The question, "Can AI steal your face?" is not science fiction anymore—it's a pressing reality. From deepfakes to voice cloning, AI has blurred the lines between what is authentic and what is fabricated, creating new avenues for fraud, identity theft, and manipulation. While governments across the globe are introducing laws like the EU's AI Act, Illinois' BIPA, and India's DPDPA to safeguard biometric data, legislation often struggles to keep pace with rapidly advancing technology.

The challenge lies in striking a delicate balance—harnessing AI's transformative power for innovation while protecting individuals from its misuse. This requires not only stronger legal frameworks but also active collaboration between policymakers, businesses, and citizens. As AI continues to evolve, so too must our vigilance, ensuring that human dignity, privacy, and trust remain protected in the digital age.