visionOS 2 invests in core technologies
Some observations after an hour with visionOS 2:
- Even in the first 20 minutes, the eye/hand tracking is noticeably more accurate. I made fewer misclicks & was less frustrated. That imprecision has been a major factor in why working on Apple Vision Pro is harder than on iPad.
- The 3D object placement/persistence is noticeably better, too. On v1, walking across the room from a 3D model would make it shift in place a few inches; now it's less than half that, maybe a centimeter or two visually. Snapping models to surfaces with the little plunk noise is superior to guessing where the boundaries of objects are. I still want to snap windows onto walls, which seems like it shouldn't be too far away.
- New home/control gestures seem super useful, even if they take some time to become intuitive. I can't believe Control Center has an even worse design and hasn't become faster to navigate this year, especially given the work in iOS 18.
- Spatialize Photos blew me away. I like that Apple is not trying to AI-generate parallax where you see behind subjects in photos; they’re making the photos appear 3D. I had assumed it visualized 2-3 layers, but it’s infinite depth: heads are round in space. Photos was already one of Vision’s strengths, but it now has to be seen to be believed.
- A few scattered updates: Safari profiles, Apple Music Sing, more writing tools in Freeform, Math Notes in Notes. But no new apps—not even Weather, Find My, or Journal in compatibility mode—and Calendar, Voice Memos, the iWork apps, etc. haven't become native. What are we doing! I want to use the same Apple apps across my devices. iPad compatibility apps running in dark mode is certainly a relief, though.
- There's a lovely layer of polish across the OS. The animations for editing the Home Screen, with the 3D jiggling & the motion curves on swiping between pages, are gorgeous. There are subtle visual effects added to the onboarding, too.
visionOS 2, like watchOS 2, is the OS that should have shipped originally; it is not a new take on the OS, it simply fills in gaps in the original. Notably, it doesn't change the value proposition of the device, or what I do with it, or how much I use it, whatsoever. There are so few new productivity features, and no iteration on the app/windowing model it shipped with.
It’s notable that Apple is investing in the core technologies that make spatial computing work long term: room scanning, 3D content placement, 3D content creation, Spatial Audio, video calls without cameras, more seamless interaction. None of these are short-term sales boosters for this overpriced device; they’re all long-term bets on what must be built up for an AR (glasses) future, the essentials for combining spatial computing (Apple Vision Pro) with ambient computing (Humane) to make the Google Glass we deserve down the road. This OS grows my confidence in the platform long-term, even if it doesn’t feel like a generous gift of features for current users.