New Apple HomePod Set To Tell Whether You Are ‘Happy’ ‘Sad’ Or Just ‘Sarcastic’
As the networked speaker market heats up, and as Google, Amazon and premium brands such as Harman Kardon and Denon HEOS strip market share away from both Sonos and Apple, the big iPhone maker has moved to develop a new HomePod speaker that will recognise whether you are a sad or happy individual, or simply a sarcastic person.
The initial Apple HomePod, which uses the Siri voice software, was a failure, with consumers describing it as “too expensive” and limited compared with what other manufacturers offered.
While the patent does not mention Apple’s smart speaker by name, there are enough hints in the application to identify it as a major HomePod upgrade.
It makes references to “cylindrical housing structures with a cylindrical surface” and “fabric that overlaps the sidewall”.
There is also mention of “light from a visual output device that passes to a display on the middle region of the sidewall of the housing.”
A big difference from the first HomePod is a reference to an “array of light-emitting diodes” that can display moving visual information, including text.
It specifically says that the LEDs will sit in the sidewall behind the fabric. There will be at least 100 LEDs to display moving visual information, with the text, the patent says, “comprises a song title”.
This has observers guessing that Apple has something else coming that will work with the new speaker, which looks like it will have a display screen wrapped around its cylindrical curve.
Sony has a smart speaker where LEDs are used to display the time through the fabric of the speaker sides. It looks like Apple has something similar in mind as the LEDs will “display alphanumeric characters that change depending on time of day.”
The patent also says it could display weather, sports scores and news.
ChannelNews understands that the new device will recognise gestures like waving or clapping.
The patent says it could contain “a sensor that detects three-dimensional hand gestures”. This could be an optical proximity sensor or a capacitive proximity sensor.
It’s not clear how this might work, but it could mean you could end music playback by holding your hand up in a stop gesture, for instance.
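To illustrate the idea only (the gesture names and actions below are hypothetical and do not come from the patent), a minimal gesture-to-action mapping might look like:

```python
# Hypothetical sketch: mapping detected hand gestures to playback actions.
# Gesture names and actions are illustrative assumptions, not from Apple's patent.

GESTURE_ACTIONS = {
    "open_palm_stop": "pause_playback",  # hold your hand up in a stop gesture
    "wave": "next_track",
    "clap": "toggle_mute",
}

def handle_gesture(gesture: str) -> str:
    """Return the playback action for a recognised gesture, or do nothing."""
    return GESTURE_ACTIONS.get(gesture, "no_action")

print(handle_gesture("open_palm_stop"))  # pause_playback
```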
Like both Amazon Alexa and Google Assistant devices, the new HomePod is being designed to turn on a TV, with the patent specifying it could transmit instructions to a television, as well as to heating and lighting controllers.
It will also have environmental sensors built in to detect carbon dioxide.
The patent also indicates that the device will have facial recognition capability, which means that a camera has to be built in.
If this is the case, this still only brings the device up to what Alexa and Google can already do with voice- and face-activated video screens from Lenovo, JBL and the all-new Amazon Echo Show.
Also revealed is an all-new emoji display that shows through the speaker fabric: it could, for instance, show a smiley emoji when you’re feeling sad.
While the word HomePod is not used, Apple refers to the new device as ‘Device 10’.
The description given in the patent claims that the new Device 10 may analyse a user’s voice (e.g., when a user is supplying voice commands such as database queries and/or other commands to device 10). If vocal stress is detected in captured voice information, device 10 may adjust content being presented to the user.
For example, if voice loudness or stress patterns indicate that a user is stressed, the colour and/or brightness of a lighting pattern on a visual output device in device 10 may be adjusted accordingly (e.g., to reflect elevated stress or to try to alleviate stress by creating a calming environment).
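As a rough sketch of the behaviour described above (the stress thresholds, colours and brightness values are illustrative assumptions, not figures from the patent):

```python
# Hypothetical sketch: adjusting an LED lighting pattern from a vocal-stress
# score between 0.0 (calm) and 1.0 (highly stressed). All thresholds and
# colour/brightness values are illustrative assumptions.

def lighting_for_stress(stress: float) -> dict:
    """Pick a lighting pattern intended to calm a stressed user."""
    if stress > 0.7:
        # High stress: dim, warm, slowly pulsing light for a calming environment
        return {"colour": "amber", "brightness": 0.3, "pulse_hz": 0.2}
    if stress > 0.4:
        return {"colour": "soft_blue", "brightness": 0.5, "pulse_hz": 0.5}
    return {"colour": "white", "brightness": 0.8, "pulse_hz": 0.0}

print(lighting_for_stress(0.9)["colour"])  # amber
```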
In arrangements in which device 10 is displaying an avatar representing a digital assistant, the avatar (e.g., a humanistic avatar, a simplified graphical representation of a digital assistant such as an emoji-based avatar, etc.) may be adapted depending on the user’s mood.
If, for example, sadness is detected in the user’s voice, the avatar may change accordingly (e.g., to a sad emoji to represent user sadness or to a happy emoji to counteract the detected sadness). The avatar can also be changed depending on the nature of content currently being presented to a user. If, for example, a user asks a digital assistant for information on purchasing a birthday gift, the digital assistant may use a happy emoji to present results.
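The avatar behaviour described above could be sketched as follows; the function name, the `counteract` flag and the content label are hypothetical, since the patent only describes the two possible responses (mirroring the sadness or countering it):

```python
# Hypothetical sketch: choosing a digital-assistant avatar from detected mood
# and the content being presented. The patent describes both mirroring a sad
# mood (sad emoji) and counteracting it (happy emoji); the "counteract" flag
# is an illustrative design choice, not part of the patent.

def choose_avatar(mood: str, content: str = "", counteract: bool = True) -> str:
    if content == "birthday_gift_results":
        return "happy_emoji"  # upbeat results get a happy avatar
    if mood == "sad":
        return "happy_emoji" if counteract else "sad_emoji"
    if mood == "happy":
        return "happy_emoji"
    return "neutral_emoji"

print(choose_avatar("sad"))  # happy_emoji
```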
The process is being described as subtext recognition.