Google’s latest verification system has been “cracked” again, this time with reinforcement learning

Since its launch, Google’s reCAPTCHA verification system has been cracked repeatedly, forcing Google to upgrade it again and again. The system has now reached v3, which drops the original user interaction entirely in favor of scoring users in the background. But no matter how strong a system is, there will be loopholes: researchers from Canada and France have taken a different approach and used reinforcement learning to “crack” this latest version.


Google’s reCAPTCHA verification system

For anyone who uses Google Chrome, the picture above should look familiar. It shows reCAPTCHA, a verification code system developed by Google that is designed to confirm whether a visitor is a human or a program and to block malicious automated access.

The reCAPTCHA project was created at Carnegie Mellon University and acquired by Google in September 2009. reCAPTCHA v1 displayed text scanned from books that OCR could not accurately recognize and asked visitors to transcribe it, using the result to determine whether the visitor was a program or a human. This version was cracked by Bursztein et al., whose machine-learning-based system segmented and recognized the text with 98% accuracy.

To counter such attacks, Google introduced reCAPTCHA v2, based on audio and images, which uses more advanced analysis to determine whether a user is a human or a bot.

The analysis draws on a variety of signals, including cookies, solving speed, mouse movement, and solving success rate. Even so, researchers have claimed to crack reCAPTCHA; the best-known effort is unCaptcha, developed by four researchers at the University of Maryland in the United States.

Breaking reCAPTCHA with unCaptcha

The unCaptcha project first appeared in April 2017 and at the time defeated reCAPTCHA with an 85% success rate. Google later released an updated reCAPTCHA with better detection of browser automation and spoken-phrase audio challenges. These improvements initially held off the first version of unCaptcha, but the improved system was quickly broken by unCaptcha’s second version.


In fact, the addition of audio-based challenges made reCAPTCHA easier to crack than ever. As the unCaptcha authors put it, “Because we only need to call a free speech recognition API, the recognition accuracy across all CAPTCHAs can reach about 90%.” In January of this year, they also open-sourced their reCAPTCHA cracking code.

“Breaking” reCAPTCHA v3 with reinforcement learning

Of course, Google has not been idle either and has kept iterating on its verification system. In October 2018, it officially released reCAPTCHA v3, whose big change is the removal of the user-facing challenge altogether.

The first two versions of reCAPTCHA exposed text, images, or audio that could serve as training input for neural networks. reCAPTCHA v3 removes the user interface entirely: there is no distorted text to decipher, no street signs to pick out, not even an “I’m not a robot” box to tick.

It analyzes a series of signals and uses machine learning techniques to return a risk assessment score between 0 and 1 (this score represents the trustworthiness of the user, the closer to 1 the more likely it is to be human). Compared to the previous two versions, this scoring is done entirely in the background, with no human interaction at all, making it more difficult to crack.
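To make the scoring model concrete, here is a minimal, illustrative sketch of how a site’s backend might consume that score. The siteverify endpoint and the success/score fields follow Google’s public documentation, while the secret key and the 0.5 threshold are placeholder assumptions.

```python
# Minimal sketch of server-side reCAPTCHA v3 verification (illustrative only;
# field names follow the publicly documented siteverify response, the threshold is arbitrary).
import requests

SECRET_KEY = "your-recaptcha-secret"   # placeholder
SCORE_THRESHOLD = 0.5                  # each site chooses its own cut-off

def looks_human(token: str) -> bool:
    """Send the client-side token to Google and compare the returned score to a threshold."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": SECRET_KEY, "response": token},
        timeout=5,
    ).json()
    # 'score' is a float in [0, 1]; values closer to 1 indicate a likely human.
    return resp.get("success", False) and resp.get("score", 0.0) >= SCORE_THRESHOLD
```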

Where to start cracking?

Such a hard target is bound to attract all kinds of “hackers” eager to try. Recently, researchers from France and Canada claimed to have cracked Google’s reCAPTCHA v3 and published their results in a paper entitled “Hacking Google reCAPTCHA v3 using Reinforcement Learning”. Unlike previous work, they used reinforcement learning and report a success rate of 97.4%.

In fact, the reinforcement learning technique does not target reCAPTCHA v3’s invisible scoring itself, but the mouse-movement analysis first introduced in reCAPTCHA v2. In other words, this research does not actually break reCAPTCHA v3 directly; it uses machine learning to fool a secondary system (i.e., the old “I’m not a robot” checkbox) in order to bypass reCAPTCHA v3.

Wait, hasn’t the “I’m not a robot” interface been removed in v3? In theory, yes; in practice, it often hasn’t.

Akrout, the paper’s first author, explained that with reCAPTCHA v3, each website sets its own score threshold for deciding whether a user is a bot. If a visitor falls below the threshold at a given checkpoint (such as when submitting a comment or login details), the site could immediately declare the visitor a bot, but doing so would be embarrassing if the visitor turned out to be a real person.

Imagine shopping online when the page you are viewing suddenly disappears, replaced by a screen declaring “you’re a robot.” From a user experience point of view, that is a disaster.

As a result, Akrout said, many sites choose a friendlier fallback: if a visitor falls below the score threshold, the site displays the old “I’m not a robot” checkbox page, which detects bots by analyzing behavior, including mouse movements.

This gives users a better sense of why their online shopping or other activity has been interrupted, and it gives them a chance to prove that they are human.

“Most programmers I know add the checkbox because they don’t know how to choose the right moment to ask the v3 system for its judgment.”

It was the existence of this checkbox that led Akrout and his colleagues to discover the possibility of bypassing reCAPTCHA v3.
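The fallback logic itself is simple. The sketch below is a hypothetical illustration of the pattern Akrout describes, where a low v3 score triggers the old v2 checkbox rather than an outright block; all names and the threshold are illustrative placeholders, not a real framework API.

```python
# Hypothetical sketch of the fallback described above: a low v3 score does not
# block the visitor outright, it triggers the old v2 "I'm not a robot" checkbox.
def handle_submission(v3_score: float, threshold: float = 0.5) -> str:
    if v3_score >= threshold:
        return "accept"                     # treated as human, no interruption
    # Below threshold: show the interactive checkbox challenge, whose
    # behavioural checks include mouse-movement analysis.
    return "show_v2_checkbox_challenge"
```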

How to crack?

Akrout and colleagues used reinforcement learning to fool part of the reCAPTCHA v3 system: a software agent tries to find the best possible path and is rewarded for each step in the right direction.

Their system places a grid of squares over the page, and the simulated mouse moves diagonally across the grid toward the “I’m not a robot” button. Success earns positive reinforcement; failure earns negative reinforcement. Over time the agent learns movement patterns that fool the reCAPTCHA system, and the paper reports a success rate of 97.4 percent. Google declined to comment on the paper after it was published.
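For intuition, here is a rough sketch of that kind of grid-world formulation. This is not the authors’ code: the action set, starting position handling, and clamping are assumptions, and in the real attack the terminal reward would come from reCAPTCHA’s own verdict on the simulated movement.

```python
import random

class CheckboxGridWorld:
    """Illustrative grid world: the page is an n x n grid, the cursor starts at a
    random cell, and the episode ends when it reaches the checkbox cell."""

    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # up, down, right, left (assumed)

    def __init__(self, n=100):
        self.n = n
        self.goal = (n - 1, n - 1)                 # cell containing the "I'm not a robot" checkbox
        self.pos = (random.randrange(n), random.randrange(n))

    def step(self, action):
        dx, dy = self.ACTIONS[action]
        x, y = self.pos
        self.pos = (max(0, min(x + dx, self.n - 1)), max(0, min(y + dy, self.n - 1)))
        done = self.pos == self.goal
        # In the real attack the terminal reward reflects reCAPTCHA's verdict:
        # positive if the simulated movement is accepted as human, negative otherwise.
        reward = (1.0 if self._fooled_recaptcha() else -1.0) if done else 0.0
        return self.pos, reward, done

    def _fooled_recaptcha(self):
        # Placeholder: a real system would submit the trajectory and read back the score.
        return True
```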


Is it really possible to break it like this?

This approach did not convince Nan Jiang of Bournemouth University, who was not involved in the study. “Theoretically, any CAPTCHA method that relies solely on checking user behavior could be cracked with a custom machine learning algorithm, such as one that can easily simulate user interactions on a page.

“However, Google’s reCAPTCHA combines other techniques to predict a user’s trustworthiness and then whitelist that user. Once you are whitelisted, whatever you do will pass the test,” he said.

Jason Polakis, an assistant professor of computer science at the University of Illinois who cracked reCAPTCHA v2, noted that reCAPTCHA v3 does considerably more than what the paper describes.

“The attack this paper demonstrates is simply moving from a random starting point on the page to a checkbox,” he said. “That is a very specific and limited subset of how users interact with real pages in practice, which involves filling out forms, interacting with multiple page elements, and far more complex patterns of behavior.”

He also added: “If Google has further expanded its use of more advanced techniques such as browser/device fingerprinting (which we uncovered during our extensive in-depth analysis and cracking of reCAPTCHA v2), launching an attack in practice becomes considerably more complicated.”

Akrout agrees that mouse-movement-based attacks have limitations, but they also reveal a little about how reCAPTCHA v3 works. “If you connect to your Google account through a regular IP, the system thinks you’re human most of the time,” he said. “If you connect through Tor or a proxy server, it usually thinks you’re a bot.”

Knowing this makes it easier to force the reCAPTCHA system to display the “I’m not a robot” button, provided the website being tested already has this fallback enabled.

Akrout said the attack needed to appear neutral to Google: no logins, no connections through proxy servers, and no browser-automation tools like Selenium. “It’s as if I’m asking the system to go straight to the second page just to trigger a lot of motion detection,” he said.

Akrout thinks Google could protect reCAPTCHA fairly easily using this same signal, specifically how long it takes a user to click the button. “The agent spends more time getting to and clicking the checkbox than a human does,” Akrout said. “Without any interaction, an ordinary user normally does not interfere with reCAPTCHA’s work in the background.”
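A tiny, hypothetical sketch of the timing check Akrout hints at; the cut-off value is an illustrative guess, not a documented Google parameter.

```python
# Assumed cut-off for illustration only: sessions where reaching and clicking the
# checkbox takes implausibly long could be treated as likely automation.
MAX_HUMAN_CLICK_SECONDS = 10.0

def suspicious_click_timing(seconds_until_click: float) -> bool:
    """Flag sessions whose time-to-click exceeds the assumed human range."""
    return seconds_until_click > MAX_HUMAN_CLICK_SECONDS
```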

Shujun Li, a professor of cybersecurity at the University of Kent, had previously designed his own system to crack early versions of reCAPTCHA, but was not involved in the project. He said the work seemed technically feasible, but also believed that Google could easily update its systems to avoid such attacks.

“It’s not clear how quickly this attack method could be retrained to keep up with changes to Google’s system,” he said. “A potentially more robust approach would be to collect real human users’ responses to reCAPTCHA and build a machine learning model to simulate this class of responses. Such models are easy to retrain and are guaranteed to remain useful unless reCAPTCHA becomes unusable for ordinary human users.”

Li says there are indeed many other ways to crack these systems. While this particular attack is limited, the fact that reCAPTCHA will continue to fall prey to AI systems is not surprising.

“Cracking CAPTCHAs is nothing new,” Li said. “Recent advances in AI have greatly increased the success rate of automated attacks. In principle, CAPTCHA technology has proven incapable of resisting advanced attacks.” The researchers may not have fully broken the latest version of reCAPTCHA, but this is a start.

Paper: Hacking Google reCAPTCHA v3 using Reinforcement Learning


Paper link: https://arxiv.org/pdf/1903.01003.pdf

Abstract: This paper proposes a reinforcement learning method that can fool Google reCAPTCHA v3. We treat reCAPTCHA v3 as a grid world where the agent learns how to move the mouse and click the reCAPTCHA button to get a high score.

We study the performance of the agent as the cell size of the grid varies and show that performance degrades significantly when the agent must take larger steps toward the goal. Finally, we use a divide-and-conquer strategy to break the reCAPTCHA system at arbitrary grid resolutions. Our proposed method achieves a 97.4% win rate on a 100 × 100 grid and a 96.7% win rate at a 1000 × 1000 screen resolution.

Experimental results

The researchers trained a reinforcement learning agent on a grid of a fixed size and then used the trained policy to select the best action in the reCAPTCHA environment. The experimental results were obtained after 1,000 training epochs.

An episode counts as a successful breach of reCAPTCHA if the agent obtains a score of 0.9. The policy network consists of two fully connected layers, and its parameters are trained with a learning rate of 10^(-3) and a batch size of 2000.
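The following PyTorch sketch roughly matches that stated setup: two fully connected layers trained with a policy-gradient (REINFORCE-style) update at a learning rate of 10^(-3). The hidden width, state encoding, and the omission of the 2000-episode batching are simplifying assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Two fully connected layers mapping a state to action probabilities."""
    def __init__(self, n_states, n_actions, hidden=128):   # hidden width is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state):
        return torch.softmax(self.net(state), dim=-1)       # action probabilities

policy = PolicyNet(n_states=2, n_actions=4)                  # state = (x, y) cell coordinates (assumed)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)   # learning rate stated in the article

def reinforce_update(log_probs, rewards):
    """One policy-gradient step: scale the summed log-probabilities by the episode return."""
    episode_return = torch.tensor(sum(rewards))
    loss = -(torch.stack(log_probs).sum() * episode_return)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```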

The figure below shows the results obtained by the agent on a 100 × 100 grid. The method successfully broke the reCAPTCHA test with a 97.4% win rate.

Next, the researchers tested the method on larger grids. As the grid grows, the state space expands rapidly, making it infeasible to retrain the reinforcement learning agent directly. This is another difficulty the study addresses: how can the reCAPTCHA system be broken without retraining the agent for every grid resolution?

To this end, the researchers propose a divide-and-conquer approach that breaks the reCAPTCHA system at arbitrary grid sizes without retraining the reinforcement learning agent. The central idea is to split the grid into sub-grids and apply the trained agent within each sub-grid, yielding an effective policy for larger screens (see Figure 2). Figure 3 shows the method’s effectiveness, with win rates above 90% on grids of various sizes. A small sketch of the idea follows the figures below.


Figure 2: Illustration of the divide-and-conquer approach: the agent operates on the purple diagonal grid world. The red grid world has not been explored.

Figure 3: Reinforcement learning agent win rates at different grid resolutions.
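As a rough illustration of the divide-and-conquer idea (under assumptions, not the authors’ implementation): a policy trained on a small k × k grid can be reused on a much larger screen by chaining sub-grid episodes, each starting where the previous one ended. The stand-in policy here simply walks to the far corner of its window.

```python
def diagonal_policy(window):
    """Stand-in for the trained agent: inside a window x window sub-grid it
    walks to the far corner (the local exit cell)."""
    return (window - 1, window - 1)

def run_on_large_grid(policy, big_n, small_k=100):
    """Traverse a big_n x big_n grid by chaining sub-grid episodes of size small_k.
    Assumes, like the diagonal stand-in, that the policy advances both coordinates equally."""
    pos = (0, 0)
    while pos != (big_n - 1, big_n - 1):
        # Clip the window at the screen edge so the sub-grid never overshoots.
        window = min(small_k, big_n - pos[0], big_n - pos[1])
        exit_local = policy(window)                           # run one sub-grid episode
        pos = (pos[0] + exit_local[0], pos[1] + exit_local[1])
    return pos

print(run_on_large_grid(diagonal_policy, big_n=1000))         # reaches (999, 999) without retraining
```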

