Artificial intelligence (AI) has made great strides in detecting bank card fraud. Most of us have received messages asking us to confirm purchases that were actually made by cybercriminals. Machine learning is used to compile “synthetic personal data” that captures the typical behavior patterns of bank card holders; thanks to it, financial institutions can flag behavior that deviates from the norm in real time. Unfortunately, cybercriminals also use AI to create synthetic personal data of their own, realistic enough to fool the banks' AI.
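
As a minimal illustration of the detection side, here is a sketch of the kind of real-time anomaly flagging described above, using scikit-learn's IsolationForest; the transaction features, numbers, and contamination setting are our illustrative assumptions, not any bank's actual pipeline:

```python
# Minimal sketch: flagging card transactions that deviate from a
# cardholder's usual behavior. Features and data are illustrative
# assumptions, not a real bank's fraud pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" history: [amount_usd, hour_of_day, distance_from_home_km]
normal_history = np.column_stack([
    rng.normal(40, 15, 500),   # typical purchase amounts
    rng.normal(14, 3, 500),    # purchases mostly in the afternoon
    rng.normal(5, 2, 500),     # close to home
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_history)

# A new transaction: large amount, 3 a.m., far from home.
suspicious = np.array([[950.0, 3.0, 4200.0]])
print(model.predict(suspicious))  # -1 means "anomaly": flag for review
```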

This battle of artificial intelligences, pitting fraud against cybersecurity, is being fought not only in the banking sector. Fraudsters fake news stories, videos, and audio recordings. Thus an arms race has begun: AI vs. AI.

Steffen Sorrell of Juniper Research says synthetic identities are an easy route to credit card fraud. According to Juniper Research's latest online payment fraud report, losses to this type of fraud will reach $200 billion by 2024. By that time, the fraud detection market should reach $10 billion, up from $8.5 billion this year.

“Internet fraud takes place in a highly developed ecosystem with a division of labor,” says Josh Johnston, director of AI science at Kount Inc., a Boise-based company that works to prevent identity-based fraud. Johnston says cybercriminals specialize in different types of crime, from manual card skimming to creating synthetic personal data with AI. “Some scammers check stolen card numbers and credentials by contacting charities or digital goods stores. Claiming they need to make sure their order has not been canceled, they easily verify the validity of the stolen data.” Johnston says credit card numbers, complete with the matching name, address, and CVV (card verification value), can be bought on the darknet for less than a dollar.

“A fraudster can buy a whole list of these verified cards and monetize them through many different online schemes,” says Johnston. “Fraudsters are actively using AI for their own purposes. Like regular developers, they exchange tips and software on forums.”

All kinds of AI and optimization methods are involved in creating fake personal data. Small, simple programs generate and register email accounts by combining real names with different strings of digits. Larger programs use machine learning to build synthetic personal data; according to Johnston, combining information about several real people can yield the perfect combination. When a fraud detector checks such artificially created personal data, it often finds that the fraudster has created not only a fake email account but also pages on Facebook and other internet portals.

Thus, the fraud detection skills of cybersecurity programmers are pitted against the fraud skills of the black hats.

Fraudsters apply their skills not only in the banking sector but also in image and speech recognition, where their software tools are used to create fake news, video, and audio. In fact, fraudulent money transfers that use fake audio are growing much faster than online payment fraud, says Nick Maynard of Juniper Research, who expects losses of this kind to grow by 130% by 2024.

“Machine learning is increasingly in demand for preventing fraud,” says Maynard.

This war of AIs resembles the popular game Whac-A-Mole: first one AI gains the upper hand, then the other. Johnston compares the situation to a game of cat and mouse. He measures success and failure with a single variable he calls “friction.” Friction slows down one side or the other until a new form of “lubrication” lets one camp pull ahead.

“Fraudsters respond to friction just like ordinary internet users. When we gain an advantage by creating too much friction for fraudsters, they move on to an easier target that is not protected by fraud detectors. Good fraud protection increases friction for criminals and reduces it for legitimate users. Gains on one side, however, force the other side to change its strategy,” explains Johnston.

“When the internet first appeared, there wasn't much to steal, so scammers mostly validated stolen credit cards online and then used them to buy goods in stores,” Johnston continues. Today, online shopping is available to everyone, including criminals. With the spread of bank cards with built-in security chips, it is no longer possible to walk into a store and spend everything on a stolen card, so this type of fraud has moved online. To detect it, the AI in fraud detection systems must perform a more detailed analysis.

“As of 2020, our most successful anti-fraud methods rely on large amounts of data that model the behavior typical of cybercriminals,” says Johnston. “Attackers can steal all your secrets, but they cannot imitate your tastes, behavior, and history. In the end, a fraudster has to commit fraud to get something of value. With the right data, we can tell the difference between a fraudster and an ordinary customer.”

Fake news/video/audio


AI has already been used to generate fake news automatically. For example, OpenAI's prototype text generation system GPT-2 uses machine learning to translate text, answer questions, and write fake news. When fed the prompt “Russia declared war on the United States after Donald Trump accidentally...”, GPT-2 produced this fake news story:

Russia declared war on the United States after Donald Trump accidentally fired a rocket into the air. Russia stated that it had "determined the flight path of the rocket and will take the necessary measures to ensure the security of the Russian population and the country's strategic nuclear forces." The White House said it was “extremely concerned about Russia's violation of the treaty banning medium-range ballistic missiles.”


For more information on generating fake news with GPT-2, see the OpenAI website.
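
For a sense of how accessible this kind of generation has become, here is a minimal sketch that prompts the publicly released GPT-2 weights through the Hugging Face transformers library; the library and parameters are our choices for illustration, not part of the article:

```python
# Minimal sketch: continuing a prompt with the publicly released GPT-2
# model via the Hugging Face `transformers` library (our choice of tooling;
# the article describes OpenAI's research prototype, not this exact setup).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("Russia has declared war on the United States "
          "after Donald Trump accidentally")
outputs = generator(prompt, max_new_tokens=50, do_sample=True)
print(outputs[0]["generated_text"])  # the model invents a plausible continuation
```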

“Open-source consortia such as OpenAI show us what the future of fraud may look like: automated tools that can scale up to massive attacks,” says David Doermann, a professor at the University at Buffalo. “At the moment fraudsters have the advantage, but we must keep the gap small so we can quickly overtake them. The situation is much like malware, where every new virus a hacker creates is neutralized by cybersecurity programmers. Maybe one day this game will become too expensive for scammers and they will stop, but more likely it will remain a tug of war without a clear winner.”

At the same time, Doermann says, those fighting cybercrime should teach people to treat information on the internet with a degree of skepticism: if something sounds too good (or too bad) to be true, it probably is. “You might think that is an impossible task, but it is not. For example, most people now know not to click on links from unfamiliar sources, and email services know how to detect spam and filter it out before it reaches your inbox,” Doermann explains. Known fakes, and even suspected fakes, can be marked with an appropriate label so that people do not take them seriously. In some cases, fake content can be filtered out entirely without violating the First Amendment.
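
As a toy illustration of the spam filtering Doermann mentions, here is a minimal naive Bayes text classifier built with scikit-learn; the training messages are invented, and real mail services use far larger datasets and many more signals:

```python
# Minimal sketch of the spam-filtering idea: a naive Bayes text classifier
# trained on a handful of made-up example messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "You have won a free prize, click here now",    # spam
    "Cheap pills, limited offer, act now",          # spam
    "Meeting moved to 3pm, see agenda attached",    # ham
    "Here are the quarterly numbers you asked for", # ham
]
train_labels = ["spam", "spam", "ham", "ham"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)

print(clf.predict(["Click here to claim your free prize"]))  # ['spam']
```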

Irakli Beridze, head of the Centre for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI), agrees. “Faking information on the web is just a new branch of the old problem of news manipulation. It has existed for a long time, but only recently has it become so accessible; you no longer have to be a technical genius to create fake content,” says Beridze. “The spread of fake news raises problems that can threaten national security, including elections and the safety of citizens. Beyond that, it undermines diplomacy, democracy, public discourse, and journalism.”

According to Beridze, many organizations are trying to develop software that makes fake content easier to recognize. Beridze and Doermann argue that such software already exists; it just needs to be refined. Meanwhile, both agree that much work remains to be done on the credulity of the average user. People were once taught what spam was and how to recognize it; it is time to do the same with fake news and to develop people's critical thinking.

Last year, at the Hague Hackathon for Peace, Justice and Security, UNICRI posed a challenge on content fabrication: participants had to create a tool for identifying fake videos that could support law enforcement, security agencies, the judiciary, the media, and the general public.

“The winning team proposed a neural network architecture for image classification and a web application that simplifies user interaction,” Beridze said. “This solution was subsequently improved at technical seminars during 2019, and in 2020 we are actively working to put the technology into full use.”
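
The article does not disclose the winning team's actual architecture, but a minimal sketch of an image classifier in this spirit, written in PyTorch with an invented toy network, might look like this:

```python
# Minimal sketch of a "real vs. fake" frame classifier. The architecture
# is an illustrative assumption; the hackathon team's real network is not
# described in this article.
import torch
import torch.nn as nn

class FakeFrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),  # two classes: real frame vs. manipulated frame
        )

    def forward(self, x):
        return self.head(self.features(x))

model = FakeFrameClassifier()
frame = torch.randn(1, 3, 224, 224)  # one RGB video frame
logits = model(frame)
print(logits.softmax(dim=1))         # probabilities for [real, fake]
```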

Beridze warns, however, that there is no quick fix for fabricated news. The ever-faster pace of technological change demands more holistic solutions: tracking both technological advances and the fabricated material itself, anticipating the problems that will arise in the future, and understanding which technologies will be needed to solve them.

“This is a cyclical process that requires cooperation. That is why one of the goals of the UN Centre is to facilitate the exchange of knowledge and to build relationships with stakeholders. Specialists from the public sector, industry, and academia, along with security, intelligence, and counter-terrorism agencies, are helping the authorities keep pace,” says Beridze.

“At the same time, we provide recommendations on developing, implementing, and using measures to combat content fabrication, solutions that are lawful, trustworthy, and do not violate human rights.”

Fake video and audio are among the newest innovations in fraud demonstrating the use of AI for bad purposes. People began to understand what fake news was during the 2016 elections, when the phrase was on everyone's lips. Most political fake videos then were obviously bogus: even with a cleverly superimposed soundtrack synced to lip movements, their content read more like an April Fools' joke. Now, using recognition tools built on machine learning, scammers create fake videos that even the most sophisticated viewer will believe.

Doermann is a former program manager of the MediFor (Media Forensics) program at the U.S. Defense Advanced Research Projects Agency (DARPA). He says DARPA has already developed automated tools for forensic examination of images obtained by government agencies. These tools were once manual, but adding AI made it possible to automate the process.

“We developed AI tools to detect fake content long before it became a problem for society. We were worried about terrorists and sources of misinformation in foreign governments. Our goal was to fully automate the methods experts use to detect fakes. We wanted all the imagery the government collects to pass through our authenticator,” says Doermann.

MediFor is still operational, but it has reached the stage where the results of its fundamental research are being incorporated into the final automated tools. Meanwhile, a new program called Semantic Forensics (SemaFor) has taken up the baton of basic research. SemaFor collects images that have been flagged as fake and applies attribution algorithms to determine where the media content came from. Characterization algorithms then help determine whether the fake material was created for malicious purposes, such as a disinformation campaign, or for entertainment.

The first fake videos indistinguishable from genuine ones are likely to appear during the 2020 U.S. presidential campaign. Fake audio, meanwhile, has already been used successfully by cybercriminals in money transfer fraud. The Wall Street Journal, for example, reported that a phone call imitating the voice of a chief executive officer (CEO) netted a criminal $243,000. It eventually became clear that the call was the work of fraudsters, but by then the money had long since passed through a network of electronic transfers the authorities could not trace.

Ideally, we need a verification tool that can identify and flag fake content in real time. Unfortunately, scammers will most likely build an AI that can deceive us and steal our money in real time, too.

The author, R. Colin Johnson, is a Kyoto Prize Fellow who has worked as a technology journalist for 20 years.

Translation: Diana Sheremyeva
