With Apple’s artificial intelligence, the iPhone can give a voice to those who are losing it

Ahead of Global Accessibility Awareness Day on May 18, Apple unveiled several new software features for cognitive, visual, auditory, and motor accessibility. The features will arrive with updates to the operating systems of the company’s various products, presumably after the Worldwide Developers Conference scheduled for June 5, or perhaps with iOS 17 next September. Starting May 18, SignTime will also be available in Italy, both in physical and online Apple Stores and through Apple Support, with sign language interpreters on request.

These are all further steps in the long journey the Cupertino-based company has pursued since its inception to ensure that technology excludes no one, and instead serves as a powerful, effective, and even affordable tool that allows everyone to live and express themselves to the fullest.

“Accessibility is an integral part of everything we do at Apple,” said Sarah Herrlinger, senior director of Global Accessibility Policy and Initiatives. “These groundbreaking features were designed with feedback from members of the disability community every step of the way, to support a diverse range of users and help people connect in new ways.”

Three system apps redesigned for maximum accessibility: Music, Photos, Phone

See, hear, understand

Apple will introduce streamlined versions of its core apps as part of a feature called Assistive Access, designed for users with cognitive disabilities. The feature is meant to distill apps and experiences down to their essential features in order to lighten the cognitive load. There is, for example, a combined version of Phone and FaceTime, as well as modified versions of the Messages, Camera, Photos, and Music apps, with high-contrast buttons, large text labels, and additional accessibility tools.

Also coming is a new detection mode in Magnifier for blind and low-vision users, designed to help them interact with physical objects that carry text labels. With Point and Speak, pointing the camera at something like a microwave keypad makes the iPhone or iPad read the text aloud as the user moves a finger over each number or setting on the device. It can be combined with other Magnifier features, such as People Detection, Door Detection, and image descriptions, to help users navigate their physical environment.

Among the features coming to the Mac, deaf or hard-of-hearing users will be able to pair Made for iPhone hearing devices with their Mac. The company is also working on an easier way to adjust text size in Finder, Messages, Mail, Calendar, and Notes on the Mac.

Plus, users will be able to pause GIFs in Safari and Messages, customize how fast Siri speaks, and use phonetic suggestions in Voice Control when editing text. All this builds on Apple’s existing accessibility features for Mac and iPhone, such as Live Captions, the VoiceOver screen reader, Door Detection, and more.

Personal Voice, on the left, and Live Speech, on the right

Speak

The innovations likely to attract the most attention, however, concern the voice. With Live Speech, users can type what they want to say and have their iPhone, iPad, or Mac speak it aloud during phone and FaceTime calls or in face-to-face conversations. Frequently used phrases can be saved, to jump quickly into conversations with family, friends, and colleagues. Live Speech was designed for the millions of people around the world who are unable to speak, or who have lost their ability to speak over time.
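
To make the mechanism concrete, here is a minimal Swift sketch of typed text being spoken aloud with Apple’s public AVSpeechSynthesizer API. Live Speech itself is a system-level feature, so this only illustrates the general approach, not Apple’s implementation; the voice choice and sample phrase are assumptions.

```swift
import AVFoundation

// Minimal sketch of the underlying mechanism: turning typed text into
// synthesized speech with Apple's public AVSpeechSynthesizer API.
// Keep a strong reference to the synthesizer, or playback can stop early.
let synthesizer = AVSpeechSynthesizer()

func speak(_ typedText: String) {
    let utterance = AVSpeechUtterance(string: typedText)
    // Assumption: an English system voice; Live Speech launches in English first.
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

// A saved frequent phrase, spoken on demand.
speak("I'll be there in five minutes.")
```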

So far, this is essentially a very refined evolution of the current VoiceOver. But Personal Voice is a completely new feature: it lets users create a synthetic voice based on their own by recording 15 minutes of audio on an iPhone or iPad. It is designed for people at risk of losing their ability to speak, such as those with early-stage ALS (amyotrophic lateral sclerosis) or other conditions that progressively affect speech. Personal Voice is integrated with Live Speech, so it can also be used for phone calls, via FaceTime, and with other compatible third-party apps. Be warned, though: it will initially be available only in English, with support for other languages coming at a later date.
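
For the third-party side, the sketch below shows how an app might request access to a user’s Personal Voice and speak with it. The article predates the developer documentation, so the API names here (requestPersonalVoiceAuthorization, the isPersonalVoice voice trait) are based on what Apple later exposed in the iOS 17 SDK and should be treated as assumptions.

```swift
import AVFoundation

// Sketch: speaking with a user's Personal Voice from a third-party app.
// Assumes the iOS 17 AVFoundation additions; availability may differ.
let synthesizer = AVSpeechSynthesizer()

func speakWithPersonalVoice(_ text: String) {
    // The user must explicitly grant access; Personal Voice is opt-in per app.
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }

        // Personal Voices appear alongside system voices, flagged by a trait.
        let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }

        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = personalVoice  // nil falls back to the default voice
        synthesizer.speak(utterance)
    }
}
```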

The voice is created in a way that avoids, or at least minimizes, the risk of scams and tasteless pranks: users must read aloud a series of prompts that differ each time, existing audio or video recordings cannot be used, and all the machine-learning processing is performed on the device. The data therefore remains private and is not shared with Apple. The finished voice can, however, be downloaded to multiple devices signed in with the same Apple ID.

