Facebook improves how blind can 'see' images using AI

TECH

2017-12-20 11:27 | Last updated 2017-12-22 15:21

When Matt King first got on Facebook eight years ago, the blind engineer had to weigh whether it was worth spending an entire Saturday morning checking whether a friend of his was actually in his friend list. Such were the tools at the time for the visually impaired — almost nonexistent.

In this photo taken Monday, Dec. 18, 2017, engineer Matt King, who is blind, demonstrates facial recognition technology via a teleconference at Facebook headquarters in Menlo Park, Calif.  (AP Photo/Eric Risberg)

Today, thanks to text-to-audio software, it just takes a few seconds for him to accomplish the same task. And because of a new face recognition service the social network is rolling out Tuesday, he can now learn which friends are in photos, even those who haven't been tagged by another user.

The facial recognition technology, which uses artificially intelligent algorithms, doesn't appear to have changed much since Facebook began using it in 2010 to suggest the identities of people in photos. But after incorporating feedback from billions of user interactions, Facebook felt confident enough to push its use into new territory.

In this photo taken Monday, Dec. 18, 2017, Jeff Wieland, right, Facebook director of accessibility, watches as engineer Matt King, who is blind, demonstrates facial recognition technology via a teleconference at Facebook headquarters in Menlo Park, Calif.  (AP Photo/Eric Risberg)

"What we're doing with AI is making it possible for anybody to enjoy the experience," says 52-year-old King, who lost his sight in college due to a degenerative eye disease and now works at Facebook as an accessibility specialist. In addition to the improved facial recognition, Facebook has in recent years also automated descriptions of what's happening in a photo. (Those remain relatively primitive, as they're limited to only about 100 or so concepts and roughly a dozen action verbs.)

For the sighted, the new facial recognition settings will also help crack down on imposters. Starting Tuesday, Facebook will notify you if someone has uploaded your face as their profile picture. And just in time for alcohol-laden holiday parties, you can also be notified if someone in your friend network has posted a compromising picture of you without explicitly tagging you.

In this photo taken Monday, Dec. 18, 2017, new facial recognition technology is demonstrated via a teleconference at Facebook headquarters in Menlo Park, Calif.  (AP Photo/Eric Risberg)

That check is not perfect. If you don't want a drunken karaoke picture of yourself online, for example, you can't veto photos posted by your friends, although you can ask them to take the pictures down. Serious violations or disputes can be flagged to Facebook staff, who will review them against the company's community standards.

Facebook gives the person posting a photo the final say in order to handle situations like public speaking events, where the speaker shouldn't have control over what's shared.

In this photo taken Monday, Dec. 18, 2017, a mural of a tree grove covers a wall inside a campus building at Facebook headquarters in Menlo Park, Calif.  (AP Photo/Eric Risberg)

For the visually impaired, the company is also working on identifying text in manipulated photos like memes, though its technique isn't quite ready for primetime. Even small accuracy problems can ruin the punchline, according to Facebook accessibility director Jeff Wieland.

For Janni Lehrer-Stein, a volunteer who advises Facebook, the new facial recognition capability will help her enjoy more of her daughter's recent graduation photos. Lehrer-Stein had lost her sight by the time she was 46, and has long used text-to-audio software on Facebook. Knowing who is in each photo makes a big difference, says the sixty-something mother of three.

The technology gives her enough information to "engage" in special family moments captured in photos, she says. "Even though I can't see them."

HARTFORD (AP) — The Connecticut Senate pressed ahead Wednesday with one of the first major legislative proposals in the U.S. to rein in bias in artificial intelligence decision-making and protect people from harm, including manufactured videos or deepfakes.

The vote was held despite concerns the bill might stifle innovation, become a burden for small businesses and make the state an outlier.

The bill passed 24-12 after a lengthy debate. It is the result of two years of task force meetings in Connecticut and a year's worth of collaboration among a bipartisan group of legislators from other states who are trying to prevent a patchwork of laws across the country because Congress has yet to act.

“I think that this is a very important bill for the state of Connecticut. It’s very important I think also for the country as a first step to get a bill like this,” said Democratic Sen. James Maroney, the key author of the bill. “Even if it were not to come and get passed into law this year, we worked together as states.”

Lawmakers from Connecticut, Colorado, Texas, Alaska, Georgia and Virginia who have been working together on the issue have found themselves in the middle of a national debate between civil rights-oriented groups and the industry over the core components of the legislation. Several of the legislators, including Maroney, participated in a news conference last week to emphasize the need for legislation and highlight how they have worked with industry, academia and advocates to create proposed regulations for safe and trustworthy AI.

But Senate Minority Leader Stephen Harding said he felt like Connecticut senators were being rushed to vote on the most complicated piece of legislation of the session, which is scheduled to adjourn May 8. The Republican said he feared the bill was “full of unintended consequences” that could prove detrimental to businesses and residents in the state.

“I think our constituents are owed more thought, more consideration to this before we push that button and say this is now going to become law,” he said.

Besides pushback from Republican legislators, some key Democrats in Connecticut, including Gov. Ned Lamont, have voiced concern the bill may negatively impact an emerging industry. Lamont, a former cable TV entrepreneur, “remains concerned that this is a fast-moving space, and that we need to make sure we do this right and don’t stymie innovation,” his spokesperson Julia Bergman said in a statement.

Among other things, the bill includes protections for consumers, tenants and employees by attempting to target risks of AI discrimination based on race, age, religion, disability and other protected classes. Besides making it a crime to spread so-called deepfake pornography and deceptive AI-generated media in political campaigns, the bill requires digital watermarks on AI-generated images for transparency.

Additionally, certain AI users will be required to develop policies and programs to eliminate risks of AI discrimination.

The legislation also creates a new online AI Academy where Connecticut residents can take classes in AI and ensures AI training is part of state workforce development initiatives and other state training programs. There are some concerns the bill doesn't go far enough, with calls by advocates to restore a requirement that companies must disclose more information to consumers before they can use AI to make decisions about them.

The bill now awaits action in the House of Representatives.

FILE - OpenAI's ChatGPT app is displayed on an iPhone in New York, May 18, 2023. The Connecticut Senate pressed ahead Wednesday, April 24, 2024 with one of the first major legislative proposals in the U.S. to rein in bias in AI decision-making and protect people from harm, including manufactured videos or deepfakes. (AP Photo/Richard Drew, File)

Democratic state Sen. James Maroney of Connecticut explains a far-reaching bill that attempts to regulate artificial intelligence during a debate in the state Senate in Hartford, Conn. on Wednesday, April 24, 2024. The bill marks one of the first major legislative proposals in the country to rein in bias in AI decision-making and protect people from harm. (AP Photo/Susan Haigh)
