For almost a decade now, since the first iPhone popularized the smartphone and basically saved the computer industry from stagnation, people have been wondering what the next major device will be, the one that offers the same paradigm shift in computing.
Some thought it would be tablets, but those turned out to be evolutionary products instead of revolutionary ones: they fulfilled many of the tasks our smartphones and laptops already could handle. Next up was wearable technology. Wearable tech launched into the limelight with the now-failed Google Glass line of eyewear, and it hasn't fared much better now that the smartwatch has taken up the mantle.
Apple Watch sales have been lackluster, and by all reports the Apple Watch is the most popular smartwatch. Then came VR, which legitimately looks to be a bigger hitter than wearables as far as new device technology goes. But VR still requires bulky headsets, which limits its usefulness to industry, medicine, and gaming (the last of which is an area where it will be truly revolutionary).
So if none of those will be the next big device that changes the way people compute, one as prolific as the laptops on our desks and the smartphones in our pockets, what will? It will most likely be smart speakers with an artificially intelligent digital assistant living inside them. These devices have all the qualities necessary to radically alter the way we interact with the computer and the Internet at home. And that's exactly why Amazon has been working on its Echo for years and Google is launching its Home smart speaker later this year.
Smart speakers like the Echo and Home let users perform tasks they would usually need to tap away at a computer for by simply using their voice. The AIs that power smart speakers, especially the one shown off in Google's Home, have gotten so advanced that they can follow multiple queries over the course of a conversation; they're no longer limited to answering one plainly spoken question at a time.
Siri is about to get a lot better as well. According to Tech Insider, Apple has acquired VocalIQ, a UK-based company whose technology makes Google Now's and Cortana's voice recognition look practically remedial. VocalIQ's technology can not only better understand what you're saying, it can also understand and remember context, for example, whether you're a vegetarian.
It can also handle multi-layered queries: “For example, imagine asking a computer to ‘Find a nearby Chinese restaurant with open parking and WiFi that’s kid-friendly.’ That’d trip up most assistants, but VocalIQ could handle it. The result? VocalIQ’s success rate was over 90%, while Google Now, Siri, and Cortana were only successful about 20% of the time,” said the report.
It added: “VocalIQ remembers context forever, just like a human can. That’s a massive breakthrough. Let’s go back to the Chinese restaurant example. What if you change your mind an hour later? Simply saying something like ‘Find me a Mexican restaurant instead,’ will bring you new results, while still taking into account the other parameters like parking and WiFi you mentioned before. Hound, Siri, and any other assistant would make you start the search session over again. But Vocal IQ remembers. That’s more human-like than anything available today.”
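The multi-turn behavior the report describes, keeping earlier constraints alive while a new utterance revises just one of them, boils down to persistent slot state across a conversation. Here's a minimal sketch of the idea in Python; the class, slot names, and queries are all illustrative, not VocalIQ's actual implementation:

```python
# Toy model of "context carryover": a dialogue state that remembers earlier
# constraints (parking, WiFi, kid-friendly) when the user revises only one
# slot ("Mexican instead of Chinese"). Purely illustrative names throughout.

class DialogueState:
    def __init__(self):
        self.slots = {}

    def update(self, **new_slots):
        # Revised slots overwrite old values; everything else is remembered.
        self.slots.update(new_slots)
        return dict(self.slots)

state = DialogueState()

# "Find a nearby Chinese restaurant with open parking and WiFi
#  that's kid-friendly."
first = state.update(cuisine="Chinese", parking=True, wifi=True,
                     kid_friendly=True)

# An hour later: "Find me a Mexican restaurant instead."
second = state.update(cuisine="Mexican")

# The earlier constraints survive the revision.
assert second == {"cuisine": "Mexican", "parking": True,
                  "wifi": True, "kid_friendly": True}
```

An assistant that restarts the search session, as the report says Hound, Siri, and the rest do, is one that throws away `state.slots` between queries instead of carrying them forward.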
Think of the Scarlett Johansson-voiced "Samantha" AI assistant from the 2013 film Her or JARVIS from the Iron Man movies. Assistants like those, which can carry out complicated or time-consuming tasks on voice command alone, and which you can speak to as you would another human being, using natural language, will indeed represent a paradigm shift in computing.
So the question is: where is Apple in all of this? Well, if the latest rumors are to be believed, Apple has been preparing for this paradigm shift for years, and the Siri we have in our iPhones right now is nothing compared to the Siri it has been working on behind the scenes, the one that will power its Amazon Echo competitor. Here's everything we know about it so far…
It will be open to third-party developers
According to The Information, which broke the news of the unannounced device, Apple isn't going to hamstring the product by keeping it off-limits to developers.
“Apple is upping its game in the field of intelligent assistants. After years of internal debate and discussion about how to do so, the company is preparing to open up Siri to apps made by others. And it is working on an Amazon Echo-like device with a speaker and microphone that people can use to turn on music, get news headlines or set a timer.
Opening up its Siri voice assistant to outside app developers is the more immediate step. Apple is preparing to release a software developer kit, or SDK, for app developers who want their apps to be accessible through Siri, according to a person with direct knowledge of the effort.”
The Information reports that this SDK could be announced in as few as two weeks, when WWDC begins.
Apple’s Siri Speaker might have facial recognition tech
A report from CNET says Apple’s Siri Speaker could have a camera or cameras in addition to a microphone to detect users in a room. “The device would be ‘self aware’ and detect who is in the room using facial recognition technology. That would let the device automatically pull up a person’s preferences, such as the music and lighting they like, the sources said.”
CNET goes on to say the hardware could be released by year’s end, but a 2017 launch is more likely. The site’s sources cautioned, however, that Apple could kill the device before it ever launches.
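The "self-aware" behavior CNET describes, recognize who is in the room, then load that person's settings, amounts to a simple lookup keyed on identity once the face has been matched. A toy sketch follows; the function, names, and data are entirely hypothetical, and the actual camera and face-matching pipeline is out of scope:

```python
# Toy sketch of the rumored behavior: once the device identifies who is in
# the room, it pulls up that person's saved preferences. All names and data
# here are hypothetical; face detection/matching itself is not modeled.

PREFERENCES = {
    "alice": {"music": "jazz", "lighting": "warm"},
    "bob":   {"music": "classic rock", "lighting": "bright"},
}

DEFAULTS = {"music": "off", "lighting": "neutral"}

def on_person_detected(person_id):
    """Return the stored preferences for a recognized person,
    or neutral defaults for an unknown face."""
    return PREFERENCES.get(person_id, DEFAULTS)

# A recognized face brings up that user's environment...
alice_prefs = on_person_detected("alice")
# ...while an unknown visitor falls back to the defaults.
guest_prefs = on_person_detected("unknown-face")
```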
The new Siri AI could be powered by tech from a company Apple bought in 2015
Last year Apple acquired a UK-based company called VocalIQ, which was reportedly working on a digital assistant app so good it put not just Siri to shame, but Google Now, Microsoft's Cortana, and Amazon's Alexa too. As a matter of fact, it was so good that Apple bought the company before it ever released the product. Now Tech Insider is saying that VocalIQ's tech will be the backbone of the new Siri AI, and it's perfect for a device with no screen:
“Because VocalIQ understands context so well, it essentially eliminates the need to look at a screen for confirmation that it’s doing what you want it to do. That’s useful on the phone, but could be even better for other ambitious projects like the car or smart speaker system Apple is reportedly building. (VocalIQ was being pitched as a voice-controlled AI platform for cars before Apple bought the company.) In fact, VocalIQ only considers itself a success when the user is able to complete a task without looking at a screen. Siri, Google Now, and Cortana often ask you to confirm tasks by tapping on the screen.”