Apple’s Watch and iPhone hold clues to its future AR glasses


    While playing around with the new Watch Series 7 that Apple loaned me, I stumbled on a surprising feature. You can now interact with the device in a whole new way—using hand gestures.

    Apple presents this as an “accessibility” feature for people who have trouble using the Watch’s buttons, for which it will surely be useful. But it’s hard to miss the future possibilities in using your hands—as opposed to your fingertips or voice—to communicate with your Watch. Imagine walking up to your front door and unlocking it by double-tapping your thumb and finger together.

    However, controlling your Watch with hand gestures gets even more interesting in the context of the next major wearable computing device Apple is likely to release: its long-awaited augmented reality (AR) glasses.

    Big Tech companies have long believed that the next big personal computing device after the smartphone will be some form of eye wearable that integrates digital content—including a graphical user interface—with the features of the world we see in front of us. This, the thinking goes, will create more immersive digital experiences and obviate the need to crane our necks down toward the little screens of phones, tablets, and smartwatches.

    This accessibility feature is just one of a number of technologies in current Apple products that could be crucial to its future AR glasses. Features like frictionless user interfaces, a voice assistant that’s smart enough to truly be helpful, and spatial audio that creates a feeling of immersion are all part of products like the iPhone, AirPods, and Watch today—and they provide clues as to how Apple is laying the groundwork for its next big consumer tech product.

    Gesturing to the air

    One reason AR glasses aren’t here already is that tech companies are still trying to figure out the best ways for users to navigate and control a user interface that lives on their face.

    Hand gestures will likely be an important input mode. Microsoft’s HoloLens already uses hand gestures as one of three main input modes (along with eye gaze and voice commands), relying on hand-tracking cameras on the front of the device. Facebook, which has been quite noisy about its development of AR glasses, is working on a wristband that detects hand gestures from electrical signals sent down the arm from the brain.

    Unlike Facebook, Apple already has a fully developed and extremely popular wrist-worn sensor device in the Apple Watch. Its decision this year to add hand gestures as a new accessibility option could eventually play a major role in how the company’s AR glasses are operated.

    The motion sensors in the Apple Watch Series 7 can detect four kinds of hand gestures: a finger-and-thumb touch and release (Apple calls this a “pinch”), a double-pinch, a fist clench and release, and a fist double-clench. These gestures, found among the “AssistiveTouch” features in the Accessibility section of Settings, can be used to navigate action menus and make selections, confirm Apple Pay payments, and more. But the Watch’s sensors could be tuned to detect a wider set of gestures in the future.
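
    Apple doesn’t expose a public API for these AssistiveTouch gestures, but the underlying idea of spotting patterns in wrist motion can be sketched with Core Motion, the framework third-party Watch apps already use to read the accelerometer and gyroscope. The detector below, including its name and its crude acceleration threshold, is purely illustrative; Apple has said the real feature combines accelerometer, gyroscope, and heart-rate sensor data with on-device machine learning.

```swift
import Foundation
import CoreMotion

// Illustrative sketch only: watches wrist motion with Core Motion and flags a
// hypothetical "pinch-like" spike in acceleration. Not Apple's actual method.
final class HypotheticalPinchDetector {
    private let motion = CMMotionManager()
    private var lastSpike: Date?

    func start(onPinch: @escaping () -> Void) {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 50.0   // 50 Hz sampling (assumed rate)
        motion.startDeviceMotionUpdates(to: .main) { data, _ in
            guard let accel = data?.userAcceleration else { return }
            let magnitude = sqrt(accel.x * accel.x + accel.y * accel.y + accel.z * accel.z)
            // Threshold chosen arbitrarily for this sketch; a real system would
            // run a trained model over several sensor streams instead.
            if magnitude > 0.8 {
                let now = Date()
                if let last = self.lastSpike, now.timeIntervalSince(last) < 0.5 {
                    return   // debounce repeated spikes from one gesture
                }
                self.lastSpike = now
                onPinch()
            }
        }
    }

    func stop() { motion.stopDeviceMotionUpdates() }
}
```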

    Hand gestures might be particularly useful for controlling a device with no touchscreen at all. A person wearing AR glasses might see elements of the UI (icons, menus, etc.) floating in front of them and use hand gestures—detected by their Apple Watch or some other wrist wearable—to select or navigate them.

    It’s very possible that the AR glasses will also use eye tracking to follow the user’s gaze across the interface, perhaps selecting items on which the gaze rests for a few seconds. So far, eye tracking hasn’t shown up in any Apple products, but the company has filed a number of patents related to the technology.
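
    To make that idea concrete, here is a minimal sketch of dwell-based selection, assuming some other layer of the system reports which on-screen element the gaze is currently resting on. The DwellSelector type, the string identifiers, and the 1.5-second threshold are all hypothetical, not anything Apple has shipped or described.

```swift
import Foundation

// A minimal, platform-agnostic sketch of "dwell selection": if the user's gaze
// stays on the same UI element long enough, treat that as a selection.
final class DwellSelector {
    private var currentTarget: String?        // identifier of the element being looked at
    private var gazeStart: Date?
    private let dwellThreshold: TimeInterval

    init(dwellThreshold: TimeInterval = 1.5) {
        self.dwellThreshold = dwellThreshold
    }

    /// Call every frame with the element currently under the user's gaze
    /// (or nil if the gaze is on empty space). Returns the element to select
    /// if the dwell threshold has just been crossed.
    func update(gazeTarget: String?, now: Date = Date()) -> String? {
        guard let target = gazeTarget else {
            currentTarget = nil
            gazeStart = nil
            return nil
        }
        if target != currentTarget {
            currentTarget = target       // gaze moved to a new element; restart the timer
            gazeStart = now
            return nil
        }
        if let start = gazeStart, now.timeIntervalSince(start) >= dwellThreshold {
            gazeStart = nil              // fire once, then require the gaze to move away
            return target
        }
        return nil
    }
}
```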

    Counting on Siri

    Siri will likely be extremely important in Apple’s AR glasses. The voice assistant could not only be a key way of communicating with the glasses, but could also serve as the AI brain of the device.

    It’s a big job because AR glasses will put more sensors, cameras, and microphones closer to your body than any other personal tech device Apple has ever created. Siri will probably collect all that data, along with signals from your emails, texts, and content choices, to proactively offer useful information (transit or flight information, perhaps) and graphics (like traffic or weather) at just the right time. And Siri will likely act as a concierge that guides you through the kinds of immersive, spatial computing experiences that AR makes possible for the first time.

    Siri will need to improve to rise to the task, and Apple is already pushing the assistant to do more things within the context of some of its existing products.

    A recent example is the just-announced Apple Music Voice plan, a new subscription tier that’s half the price of the standard tier but requires subscribers to rely on voice, and only voice, to call up songs and playlists. This tier, which is likely aimed at people who want to tell Siri to fire up playlists on smart speakers around the house, doesn’t offer a normal search bar for finding music at all. Pushing users to rely solely on their voice may help Apple build up more voice-command data to improve Siri’s natural language processing or its music domain knowledge. (Currently, Apple uses recordings of what people say to Siri to improve the service, but only if users consent and opt in.)

    Siri is becoming more ubiquitous in other ways. Apple’s new third-generation AirPods offer “always-on” Siri support, which means you can call on the assistant at any time without waking it up with a button push. Then you can use voice commands to play music, make calls, get directions, or check your schedule.

    Apple has already taken a stab at proactive assistance with its Siri watch face for Apple Watch, which arrived a few years ago with watchOS 4. In this case Siri can collect “signals” from Apple apps like Mail and Calendar running on any of the user’s Apple devices—desktop, mobile, or wearable—then present reminders or other relevant information on the watch face. Currently, the usefulness of the content is limited by the fact that Apple can collect signals only from its own apps. A future AR glasses product from Apple would almost certainly display this kind of information in front of the wearer’s eyes, but would likely access far more sensor and user data to do it in more personal and timely ways.

    Spatial audio for spatial computing

    Augmented or mixed reality is often called “spatial computing” because digital imagery appears to be interspersed within the physical space around the user (picture a Pokémon hiding behind a real-world bush in the AR game Pokémon Go). But visuals aren’t everything. Those digital images also make sounds, and the sounds need to seem like they’re coming from the location of the digital object for the experience to be realistic.

    Apple is already bringing this type of audio to its products. The third-generation AirPods support Apple’s new Dolby Atmos-powered Spatial Audio, which can create the effect of sounds reaching the listener’s ears from all directions. In the AirPods, this will be useful for listening to spatial audio mixes of music from Apple Music, or for watching movies produced with surround-sound audio, as in a movie theater.

    But Apple also points out in its promotional materials that the AirPods’ spatial audio support will make “group FaceTime calls sound more true to life than ever.” By this, the company means that the placement of the voices of FaceTime call participants will vary according to their position on the screen. Spatial audio’s impact on FaceTime calls on phones or tablets might be subtle. But when such calls are experienced in AR, where the participants may be represented as avatars or holograms sitting around your kitchen table, the placement of the voices will be crucial to the believability of the virtual experience.
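
    Developers can already experiment with this kind of positional audio on Apple platforms through AVFoundation’s environment node. The sketch below simply anchors a mono voice recording (a placeholder file named “caller.caf”) to a point in space around the listener; it is illustrative only and is not a description of how FaceTime’s spatial audio is actually implemented.

```swift
import AVFoundation

// A minimal sketch of positional audio: route a mono voice through an
// AVAudioEnvironmentNode and place it at a point in 3D space around the listener.
let engine = AVAudioEngine()
let environment = AVAudioEnvironmentNode()
let voicePlayer = AVAudioPlayerNode()

engine.attach(environment)
engine.attach(voicePlayer)

// The source must be mono for the environment node to spatialize it.
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)
engine.connect(voicePlayer, to: environment, format: monoFormat)
engine.connect(environment, to: engine.mainMixerNode, format: nil)

// Listener sits at the origin; place the "caller" a meter to the left and
// slightly in front, as if seated across a table.
environment.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)
voicePlayer.position = AVAudio3DPoint(x: -1.0, y: 0, z: -0.5)
voicePlayer.renderingAlgorithm = .HRTFHQ   // binaural rendering for headphones

// "caller.caf" is a placeholder asset name for this sketch.
if let url = Bundle.main.url(forResource: "caller", withExtension: "caf"),
   let file = try? AVAudioFile(forReading: url) {
    do {
        try engine.start()
        voicePlayer.scheduleFile(file, at: nil, completionHandler: nil)
        voicePlayer.play()
    } catch {
        print("Audio engine failed to start: \(error)")
    }
}
```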

    To make sure all of this works when the time comes, Apple may see value in getting the technology out into the market in the context of FaceTime well before the eventual release of the glasses. This isn’t so different from Apple’s decision to launch its ARKit framework to developers long before AR escapes the screens of phones and tablets.

    Beyond these subtler hints, Apple has made some far more open and obvious moves toward AR. The company says its ARKit development framework is the biggest AR platform in the world. Right now, ARKit experiences can run only on iPads and iPhones, but they will become much more compelling when they jump to AR glasses, as Apple surely knows. The company also added a LiDAR depth camera to its high-end iPads and iPhones to enhance photographs and improve AR experiences. The same kind of camera could sit on the front of Apple’s AR glasses, measuring the depth of the scene ahead of the wearer so that digital content can be situated correctly.
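
    The depth-sensing half of this is already visible to developers. The short sketch below shows ARKit’s documented way for an iPhone or iPad app to opt into LiDAR-based scene reconstruction and per-pixel depth; whether future glasses would expose the same API is, of course, unknown.

```swift
import ARKit

// Build a world-tracking configuration that uses LiDAR where available.
func makeLiDARConfiguration() -> ARWorldTrackingConfiguration {
    let config = ARWorldTrackingConfiguration()

    // Mesh reconstruction of the surrounding scene (LiDAR-equipped devices only).
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
    }

    // Per-pixel depth maps, useful for occluding virtual content behind real objects.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        config.frameSemantics.insert(.sceneDepth)
    }

    return config
}

// Usage: run the configuration on an ARSession owned by an ARView or ARSCNView.
// let session = ARSession()
// session.run(makeLiDARConfiguration())
```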

    It seems that many of the foundational technologies needed in AR glasses are already showing up in other Apple products. Now the question is when Apple can overcome the remaining technical challenges and bring all the pieces together in a design that people will want to wear—a pair of glasses that might become as commonplace as the AirPods we see on the street every day.




