Start-Up Claims Its AI Can Reliably Break CAPTCHAs

This week may mark the beginning of the end for CAPTCHAs, those intentionally hard-to-read images that challenge you to prove you are a human and not a bot. An artificial-intelligence start-up claims to have developed technology that can accurately read CAPTCHAs 90 percent of the time or more.

CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. It requires that you, whom we currently assume to be human, correctly read and type the numbers or letters shown in a distorted image.

On Sunday, San Francisco-based start-up Vicarious said that its algorithms can "reliably solve modern CAPTCHAs," including ones from Google, Yahoo, PayPal, Captcha.com and others. If the claim is accurate, the sudden vulnerability of this anti-bot test could become a significant problem for countless logon-protected sites.

As High as 90 Percent

Vicarious said that a CAPTCHA scheme is considered effectively broken once software can solve it with a precision of at least 1 percent; the company said its own success rate can reach 90 percent on Google's reCAPTCHA, the most widely used version, and that it reads individual letters with 95 percent accuracy. A recent Microsoft Research paper, by contrast, reported that none of the algorithms it reviewed could reliably solve CAPTCHAs even part of the time. Whatever the practical risk of break-ins, Vicarious said its test results show that CAPTCHAs are no longer valid as Turing tests.
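To see why even a bar as low as 1 percent precision matters, here is a rough back-of-the-envelope sketch in Python. It simply compounds the claimed 95 percent per-letter accuracy across a whole CAPTCHA; the 6-to-8-letter lengths and the assumption of independent per-letter errors are illustrative choices, not figures from Vicarious.

    # A hypothetical illustration, not Vicarious' method: compound the claimed
    # 95 percent per-letter accuracy over a whole CAPTCHA, assuming each letter
    # is read independently, and compare it to the 1 percent "broken" threshold.
    PER_LETTER_ACCURACY = 0.95   # figure claimed by Vicarious
    BREAK_THRESHOLD = 0.01       # precision at which a scheme counts as broken

    for length in (6, 7, 8):     # assumed CAPTCHA lengths, for illustration only
        whole_string_success = PER_LETTER_ACCURACY ** length
        print(f"{length} letters: ~{whole_string_success:.0%} of CAPTCHAs solved "
              f"(threshold to count as broken: {BREAK_THRESHOLD:.0%})")

Even under these simple assumptions, the solve rate lands around 66 to 74 percent, orders of magnitude above the 1 percent bar, which is the sense in which Vicarious says the test is broken.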

A Turing test, named for computer pioneer Alan Turing, is a test intended to tell the difference between a human and a computer program.

Vicarious co-founder D. Scott Phoenix said in a statement that modern artificial-intelligence systems like IBM's famed Watson and deep neural networks "rely on brute force," using massive computing power to work on massive data sets. By contrast, he said, Vicarious'...
