The tech industry's accessibility product launches this week | Engadget

Every third Thursday in May, the world celebrates Global Accessibility Awareness Day, or GAAD. And, as has become typical in the past few years, major tech companies are using this week as an opportunity to share their latest accessibility products. From Apple and Google to Webex and Adobe, the biggest players in the industry have launched new features to make their products easier to use. Here’s a roundup of this week’s GAAD news.

Apple releases and updates

First up: Apple. The company had a huge stack of updates to share, which makes sense, since it usually releases the bulk of its accessibility news around this time each year. For 2023, Apple is introducing Assistive Access, an accessibility setting that, when turned on, changes the iPhone and iPad home screen to a simplified layout with fewer distractions and icons. You can choose between a row-based or grid-based layout, the latter of which results in a 2×3 arrangement of large icons. You can decide which apps appear, and most of Apple’s first-party apps can be used here.

The icons themselves are larger than usual and feature high-contrast labels that make them more readable. When you tap an app, a back button appears at the bottom for easier navigation. Assistive Access also includes a new calling app that combines the features of Phone and FaceTime into one personalized experience. Messages, Camera, Photos, and Music have also been modified for the simpler interface, and all feature high-contrast buttons, large text labels, and tools that, according to Apple, “help trusted supporters personalize the experience for the individual they support.” The goal is to offer a system that is less distracting or confusing for those who might find the typical iOS interface overwhelming.

Apple also announced Live Speech this week, which works on iPhone, iPad, and Mac. It allows users to type what they want to say and have the device read it out loud. It works not only for in-person conversations but for phone calls and FaceTime as well. You’ll also be able to create shortcuts for phrases you use often, such as “Hey, can I have a tall vanilla latte?” or “Excuse me, where is the bathroom?” The company also introduced Personal Voice, which lets you create a digital voice that sounds like your own. This may be useful for those at risk of losing their ability to speak due to conditions that affect their voice. The setup process involves “reading along with randomized text prompts for about 15 minutes on your iPhone or iPad.”

For those with visual impairments, Apple is adding a new Point and Speak feature to the Magnifier’s detection mode. This will use the iPhone or iPad’s camera, LiDAR scanner, and on-device machine learning to understand where a person has placed their finger and scan the target area for words, before reading them to the user. For example, if you hold up your phone and point to different parts of your microwave or washer controls, the system will determine what the labels are—such as “add 30 seconds,” “defrost,” or “start.”

The company made a slew of other small announcements this week, including updates that allow Macs to pair directly with Made-for-iPhone hearing aids, as well as suggestions for correcting words when editing text by voice.

Google’s new accessibility tools

Meanwhile, Google is introducing a new Visual Question and Answer (or VQA) tool in the Lookout app, which uses artificial intelligence to answer follow-up questions about photos. Eve Andersson, Google’s senior director of Products for All and its accessibility lead, told Engadget in an interview that VQA is the result of a collaboration between the Inclusion and DeepMind teams.


To use VQA, you’ll open Lookout and launch Photo Mode to scan an image. After the app tells you what’s in the scene, you can ask follow-up questions for more details. For example, if Lookout says the photo depicts a family having a picnic, you can ask what time of day it is or whether there are trees around them. This lets the user decide how much information they want from the image, rather than being limited to the initial description.

It’s often difficult to know how much detail to include in an image description, since you want to provide enough to be useful but not so much that it confuses the user. “What is the right amount of detail to give our users at Lookout?” Andersson said. “You don’t actually know what they want.” She added that AI can help contextualize why someone is asking for a description or more information, and provide the appropriate level of detail.

When it launches in the fall, VQA could offer a way for users to decide when to ask for more and when they’ve learned enough. Of course, since it’s powered by AI, the generated answers may not be accurate, so there’s no guarantee the tool will work perfectly, but it’s an interesting approach that puts power in the hands of users.

Google is also expanding Live Captions to work in French, Italian, and German later this year, as well as making wheelchair-friendly labels for places in Maps available to more people around the world.

Microsoft, Samsung, Adobe and more

Plenty of companies had news to share this week, including Adobe, which is rolling out a feature that uses artificial intelligence to automate the process of tagging PDFs so they’re friendlier to screen readers. This uses Adobe Sensei AI and will also indicate the correct reading order. Since this can really speed up the process of tagging PDFs, people and organizations will likely use the tool to work through backlogs of old documents and make them more accessible. Adobe is also launching the PDF Accessibility Checker “to enable large organizations to quickly and efficiently assess the accessibility of existing PDF files at scale.”

Microsoft also had a few small updates to share, specifically around the Xbox. It has added new accessibility settings to the Xbox app on PC, including options to disable background images and disable animations, so users can minimize potentially annoying, confusing, or triggering components. The company has also expanded its support pages and added accessibility filters to its webstore to make it easier to find optimized games.

Meanwhile, Samsung announced this week that it’s adding two new levels of ambient sound settings to the Galaxy Buds 2 Pro, bringing the total number of options to five. This gives those who use the earbuds to hear their surroundings more control over how loud that sound is. They’ll also be able to select different settings for each ear, as well as choose clarity levels and create custom profiles for their hearing.

We also learned that Cisco, the company behind the Webex video conferencing software, is teaming up with speech recognition company Voiceitt to add transcription that better supports people with non-standard speech. This builds on Webex’s existing live translation feature and uses Voiceitt’s artificial intelligence to familiarize itself with a person’s speech patterns in order to better understand what they want to communicate. It then transcribes what is said, and captions appear in the chat bar during calls.

Finally, we also saw Mozilla announce that Firefox 113 will be more accessible thanks to an improved screen reader experience, while Netflix shared a reel showcasing some of its latest assistive features and developments over the past year. In its announcement, Netflix said that although it has “made strides in accessibility, [it knows] there is always more work to be done.”

That sentiment applies not just to Netflix, nor to the tech industry alone, but to everyone. While it’s nice to see so many companies using this week as an opportunity to release and highlight accessibility-focused features, it’s important to remember that inclusive design shouldn’t and can’t be a once-a-year effort. I was also happy to see that, despite the current fervor around generative AI, most companies didn’t seem to shoehorn the buzzword into every assistive feature or announcement this week without good reason. For example, Andersson said that “we typically think about the user’s needs” and take a problem-first approach rather than starting with a technology and looking for somewhere to apply it.

While it’s possible that announcements timed to GAAD are part of the public relations and marketing game, in the end, some of the tools launched this week can genuinely improve the lives of people with disabilities or different needs. I call that a net win.

All products recommended by Engadget are selected by our editorial team, independently of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission. All prices are correct at the time of publication.
