
An Apple store in South Korea on March 31. Photo: SeongJoon Cho/Bloomberg via Getty Images
Apple unveiled new features this week designed for people with cognitive, vision and speech impairments.
Driving the news: The forthcoming "Personal Voice" feature lets users who are nonspeaking, or at risk of losing the ability to speak, create a synthesized voice that sounds like them, the company said.
- They can use that synthesized voice to connect with loved ones, per Apple.
- The features are slated to launch later this year on iPhones, iPads and Macs.
How it works: For Personal Voice, users will read along with a randomized set of text prompts to record 15 minutes of audio.
- The speech accessibility feature will use on-device machine learning to create a voice that sounds like the user.
Meanwhile, another new feature called "Live Speech" will let nonspeaking users type what they want to say and have it spoken aloud during calls and conversations (a rough sketch of the idea follows the list below).
- For users who are blind or have low vision, a new detection mode in Magnifier will offer "Point and Speak," which will let them point at text and have the device read it aloud.
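Apple hasn't published developer details for Live Speech, but the basic type-to-speak interaction resembles what AVFoundation's existing AVSpeechSynthesizer already offers. Below is a minimal sketch of that idea only, not Apple's implementation; the function name and the fallback to a generic system voice are assumptions.

```swift
import AVFoundation

// Minimal type-to-speak: speak whatever text the user has typed.
// Live Speech would presumably route this through the user's chosen
// voice (including a Personal Voice); this sketch just uses a
// system voice as a stand-in.
let synthesizer = AVSpeechSynthesizer()

func speakTyped(_ text: String) {  // hypothetical helper name
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US") // assumed default voice
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

speakTyped("Running five minutes late, see you soon.")
```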
What they're saying: "These groundbreaking features were designed with feedback from members of disability communities every step of the way, to support a diverse set of users and help people connect in new ways," said Sarah Herrlinger, Apple’s senior director of global accessibility policy and initiatives.
Go deeper: The voice note boom