
New iPhone feature can clone your voice in just 15 minutes

According to a recent announcement from Apple, iPhone and iPad owners will soon be able to hear their devices speak in their own voice.

A forthcoming feature called “Personal Voice” will have users read randomly generated text prompts aloud to record 15 minutes of audio, from which the device creates a copy of their voice.


There will also be a brand-new feature called “Live Speech” that lets users type a phrase, and save frequently used ones, so the device can speak it aloud during phone and FaceTime calls or in-person conversations.

Apple says the voice will be built using machine learning, a form of AI, on the device itself rather than in the cloud, keeping the data more secure and private.

The feature may sound odd at first, but it is part of the company’s latest accessibility initiative. Apple cited conditions such as ALS, in which people can gradually lose the ability to speak.




Apple’s CEO Tim Cook said: “At Apple, we have always believed that the best technology is technology built for everyone.”


The new “Personal Voice” feature, anticipated as part of iOS 17, will work for in-person conversations, phone calls, FaceTime, and audio calls on iPhones and iPads.

According to Apple, Personal Voice will produce a synthetic voice that sounds like the user and can be used to communicate with family and friends. The feature is aimed at users with conditions that could eventually impair their ability to speak.

Users create their Personal Voice by recording 15 minutes of audio on their device. According to Apple, the feature will maximise privacy by running its machine learning locally.

It is part of a wider package of accessibility upgrades for iOS devices, which also includes a new Assistive Access feature designed to make iOS devices simpler to use for people with cognitive disabilities and those who care for them.

Apple also unveiled a new Point and Speak option in Detection Mode, complementing its existing machine-learning-powered Magnifier feature. The new functionality combines camera input, LiDAR data, and machine learning to announce the text a user points at.

Apple typically introduces software at WWDC in beta, meaning the features are initially available only to developers and members of the public who opt in. Those features often remain in beta through the summer, before launching publicly when new iPhones hit the market in the fall.

The first day of Apple’s WWDC 2023 is June 5. Alongside other software and hardware developments, the company is expected to introduce its first virtual reality headset.
