5 strategic moves that make Meta a key player in the next computing platform

René Kasperek
10 min read · Aug 19, 2024


Quest 3 (left) and Apple Vision Pro (right) | Based on images courtesy of Meta and Apple, first seen at Road to VR

XR remains a niche technology but has recently garnered more attention, thanks to the generational shift in XR headsets. The new devices, particularly Apple's Vision Pro and Meta's Quest 3, can incorporate the space around us, enabling not only ever-better virtual reality experiences but, above all, new kinds of augmented reality and mixed reality experiences.

The collective term for VR, AR, and MR is XR, or Extended Reality. Additionally, manufacturers are promoting the term “Spatial Computing.” According to them, XR technology is meant to usher in the era of the third platform in personal computing. But who will come out on top?

Desktop computers, smartphones — and now what?

Since the introduction of Apple’s Vision Pro, which Apple even refers to as a “spatial computer,” comparisons have been drawn with other XR devices, such as the Meta Quest 3. Apple’s headset came out on top in many reviews. But the question remains: Is new hardware alone the crucial turning point for the future of computing, or is there more to it?

The rapid developments in artificial intelligence have opened up new possibilities, and the real key lies in shaping the operating system. Platform providers realized that more factors are needed to put a company in pole position for the next computing platform. That's why both Apple and Meta initially focused on AI — with "Apple Intelligence" and "Meta AI" — while the long-awaited "iPhone moment" for XR has yet to materialize, despite Vision Pro and Quest 3.

To understand the bigger picture, we need to go back to the beginning and consider all the components that will be crucial for a paradigm shift in computing:

A computing platform consists of several key elements: processors (e.g., CPU, GPU), operating systems (e.g., iOS, Android, Windows), connectivity (e.g., Internet, social media, cloud), output (e.g., display, sound), input (e.g., mouse, touch screen, or camera), and, not least, as many application possibilities as possible — no wonder the App Store has become an integral part of every operating system.

The first personal computing platform emerged with the desktop PC, dominated by Microsoft with Windows. The key innovations of the time were the visual operating system (OS), the mouse, the HTTP protocol, and the browser. These innovations made computing easy, affordable, connected, and personal. The PC made many tasks easier and became a standard, both at home and in the office.

The second computing platform emerged with portable devices, especially smartphones. Mass adoption was triggered by the introduction of Apple’s iPhone — the “iPhone moment.” A paradigm shift occurs when a new platform introduces a fundamentally different way of interacting with technology and applications. Smartphones did this by focusing on mobility, touch interfaces, personalized app design, individualized services, new forms of content consumption, and continuous connectivity. Applications were no longer purchased on physical media but personalized through centralized marketplaces or online stores.

This second computing platform, even more so than desktop PCs, consists largely of closed ecosystems — and is dominated and driven by Google and Apple today. Conceptually, however, the second platform is just an evolution of the first, as it essentially introduced location-independent, account-based personal computing. Both platforms still coexist today, but the second has made a much greater impact: while in the year 2000 roughly 500 million households were estimated to have a PC used by several people, in 2022 there were about 4.7 billion individual smartphone users. (Source: Statista)

The next paradigm shift could be even more profound.

The paradigm shift associated with the third wave of computing has significantly more potential to replace old devices or add devices that were previously not considered computers. Today’s computing hardware mostly revolves around a 2D display. However, when things like refrigerators or cars become “smart” through integrated AI systems, devices will also become actors in computing, so the entire platform will include not only personal computing but also “things computing.” More importantly, the trend of application personalization will continue.

Imagine waking up in the morning to your favorite song playing softly through the speakers, a holographic display with your personalized news feed appearing on the wall, and your smart coffee machine preparing exactly the coffee you love on a Monday morning. Your virtual assistant in the car then greets you with a friendly “Good morning, there’s light traffic today. We have some time left over. Would you like to go over Friday’s results on the way to the office?”

What sounds like a scene from a science fiction movie will likely become reality in the era of the third computing platform. AI and connectivity will drive an ever-growing automated network and are the key components of the next computing platform.

The electronic devices and machines around us are beginning to learn our preferences. Personalization will continue to be pushed to the extreme to deliver the best possible output from applications. Today’s displays will no longer be the only way for input, output, or interaction because all the information wouldn’t fit on small screens, and fiddling with a multitude of apps and password-protected accounts is already stressful. The solution: Operating systems will gain spatial awareness, and devices will “know” their own state thanks to many integrated sensors.

Machine learning, specifically deep learning, is the basis of modern natural language processing; transformer architectures are the models behind AI applications like ChatGPT. These enable natural language interactions and the training of highly personalized agents. Such agents can interact with interfaces, control access to suitable applications on our behalf, and deliver the desired results. They will be available whenever we wear devices like smart glasses or other smart gadgets.
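As a toy illustration of the agent idea described above (all app names here are hypothetical), such an agent must at minimum map a natural-language request to the right application on the user's behalf. A real agent would use an LLM for intent detection; a simple keyword match stands in for it in this sketch:

```python
# Toy sketch of a personal agent routing natural-language requests
# to applications. A production agent would use an LLM for intent
# detection; here a keyword lookup stands in, and every app name
# is a hypothetical placeholder.

def route_request(utterance: str) -> str:
    """Pick the app best suited to handle a spoken request."""
    intents = {
        "coffee": "smart_coffee_machine",
        "traffic": "navigation_assistant",
        "news": "personalized_news_feed",
        "groceries": "automated_shopping",
    }
    text = utterance.lower()
    for keyword, app in intents.items():
        if keyword in text:
            return app
    return "general_assistant"  # fall back to the default agent

print(route_request("What's the traffic like this morning?"))
# -> navigation_assistant
```

The point of the sketch is the division of labor: the user speaks naturally, and the agent decides which application should act, without the user ever opening an app.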

Chips, operating systems, apps — everything will be infused with AI-optimized components, such as cameras that analyze every image, segment every frame, and understand every part of the surroundings. Programs can use automated communication with other devices and the data they collect. Thanks to wearable XR glasses, this information can be displayed in dynamic 3D at the point of interest in the user's field of view — for example, inside your refrigerator. This enables more natural and faster interaction, with everything computed in real time in the cloud. Automated shopping and always-available health assistants will become everyday computing applications.
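The refrigerator example can be sketched in a few lines. This is purely illustrative: the detections and the knowledge base are invented for the demo, standing in for a real segmentation model and real device data, and the overlay step would be handled by an XR rendering layer.

```python
# Toy sketch: attach contextual information to objects that a camera
# pipeline has recognized, so an XR layer could render it at each
# object's 3D position. Labels, positions, and facts are invented.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str                          # what the segmentation model recognized
    position: tuple[float, float, float]  # (x, y, z) anchor in the room, meters

def annotate(detections, knowledge):
    """Pair each detected object with info worth overlaying in 3D."""
    overlays = []
    for d in detections:
        info = knowledge.get(d.label)
        if info:
            overlays.append((d.position, f"{d.label}: {info}"))
    return overlays

# Hypothetical output of a camera pipeline looking into a fridge:
detections = [Detection("milk", (0.4, 1.2, 0.3)),
              Detection("eggs", (0.5, 1.0, 0.3))]
# Hypothetical data gathered from connected devices and services:
knowledge = {"milk": "expires tomorrow", "eggs": "2 left, reorder?"}

for position, text in annotate(detections, knowledge):
    print(position, text)
```

The interesting part is the join: perception (what the camera sees) meets device data (what the fridge knows), and the result is anchored in space rather than shown in an app.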

Spatial computing plays a crucial role in this, and XR headsets will be far more effective for controlling our environment. But smartphones, smartwatches, headphones, and glasses will also be important helpers and part of the computing platform. That's why device manufacturers are doing everything they can to keep us in their ecosystems. If you own an Apple Watch, you're likely to own an iPhone as well. The smartphone will remain the central control unit for now, but once computing power is sufficiently miniaturized at the edge, the form factor will change, and there will no longer be a need for a social media feed on a palm-sized screen.

There are already harbingers in the form of a new, albeit still immature, class of devices like the Rabbit R1 or the Humane AI Pin. Computing will be performed spatially, and multimodal LLMs will even help you create great Excel spreadsheets without the need for a keyboard and screen.

5 Strategic Moves Meta Is Making to Dominate

With this vision of the future in mind, let’s take a closer look at why Meta is well-positioned. We ask whether Meta is on the verge of elevating an entire computing platform to a new level and, most importantly, freeing itself from its dependencies on competitors.

1. Aggressive Hardware Investments:

Meta's commitment to becoming a dominant player in the next computing platform is clearly demonstrated by its aggressive investments in hardware. At Siggraph 2024, the most important conference for computer graphics, Mark Zuckerberg met Nvidia's Jensen Huang and made some interesting remarks about how things might evolve. Their publicly displayed camaraderie was noteworthy — likely backed by Meta's massive hardware spending, reportedly targeting compute equivalent to roughly 600,000 Nvidia H100 GPUs (at around $20,000 each). Meta now commands enormous AI training capacity.

The Meta Quest series, especially Quest 3, represents a significant leap in XR technology, showcasing true immersive Mixed Reality. And these aren’t the only smart glasses Meta has in its portfolio. The company is also attempting to pack as much technology and intelligence as possible into the sleekest form factor: the Meta Ray-Ban Smart Glasses, which can take photos and videos and include a voice-controlled AI assistant. Meta’s headsets combine affordability with advanced features that broaden consumers’ expectations for wearables.

Meta's strategy, however, goes beyond headsets. Reality Labs, its XR division, regularly presents breakthroughs in computer vision and interface hardware, albeit at the cost of massive investments over many years. Advances in data processing and algorithms are driven mainly by novel AI chip designs, an area where Meta is also active. This ensures that Meta isn't just a participant in the XR space but a market leader shaping the future of hardware and the emerging computing platform.

2. Building a New Ecosystem:

Meta’s ambitions in software development are no less ambitious than in hardware. By developing its own operating system, Horizon OS, Meta is effectively creating a new ecosystem that’s independent of Google’s Android or Apple’s iOS and even aims to operate XR glasses from other manufacturers. This move is critical because it gives Meta full control over how its devices interact with software, allowing for deeper integration of XR and AI technologies. By rejecting Google’s offer to integrate Android XR, Meta strengthens Horizon’s position. Control over the operating system also means control over the application marketplace.

However, Horizon OS isn't just another operating system. It's being developed from the ground up for the spatial computing needs of the future, with features like real-time 3D rendering and advanced machine learning for spatial awareness. The Meta Quest home screen already leaves other manufacturers far behind in usability.

In the future, operating systems will work across devices. The combination of IoT, social media, digital twins, and virtual 3D spaces will result in a highly connected and personalized metaverse. Meta’s hardware and software ecosystem aims to bring this future closer to reality.

3. AI Integration:

Artificial intelligence has been at the core of Meta's future strategy since Meta Connect 2023. By deeply integrating AI models into its products, Meta not only improves the user experience but also paves the way for AI to become a central component of the next computing platform. Business models like AI-as-a-Service enable developers to create and monetize smarter, more responsive applications, while Meta's focus on avatars and virtual personas promises to make online interactions more immersive and personalized.

4. Leveraging a Massive User Base:

Meta’s existing user base, primarily drawn from its suite of social apps like Facebook, Instagram, and WhatsApp, is one of its greatest advantages. Messenger services are becoming a central component of AI interfaces. This is one of the most interesting coincidences that, in my opinion, even Meta (then Facebook) couldn’t have foreseen when it bought WhatsApp.

With billions of users already engaged in its ecosystem, Meta has an easy opportunity to attract an audience to its new computing platform by adding AI features. These will provide a tested user experience when spatial computing is introduced. The vast user base offers a large testing ground for new features and ensures that Meta’s XR and AI-driven applications can be adopted on a large scale from the outset.

5. Empowering Developers and Open Source:

Meta understands that the success of the third computing platform depends on the strength of its developer ecosystem. Therefore, the company has provided developers with tools that enable rapid prototyping, which will ultimately lead to many new and previously unseen forms of mixed-reality applications.

In recent years, Meta has increasingly taken developers seriously. The tools have improved and matured. The adoption of standards such as OpenXR and compatibility with Unity's AR Foundation not only simplified the development process for spatially anchored applications but also ensured that the resulting applications are optimized and standardized. A drastic but consistent step is the discontinuation of Meta's own augmented reality development platform, Spark AR. This platform specialized in AR apps for smartphones, a platform Meta now seems to be paying less attention to. If the Quest platform achieves commercial success, we may see Spark AR re-emerge there in an adapted form.

Also noteworthy is the open-sourcing of AI models like Llama 3.1 and tools like PyTorch, which enable the training of personalized or business-oriented AIs. By fostering a vibrant developer community, Meta ensures that its platform will be rich in innovative applications and experiences; it is building not just a platform but a thriving community that will drive adoption of the third computing platform and secure its long-term success.

Conclusion: Meta aims to take on Google and Apple — and Microsoft?

The most striking sign that the transition to the third computing platform has been completed will be when people start communicating with their personalized AI agents using natural language to handle everyday tasks, or when they use gestures and virtual 3D spaces to interact with applications or each other. By that point, input and output interfaces will have fundamentally changed once again.

Through strategic investments in hardware, the development of a new operating system, a massive and engaged user base, deep AI integration, and a robust developer ecosystem, Meta is laying the groundwork for a future where computing is more immersive, personalized, application-driven, and interconnected than ever before. This could put the company in a position to compete fiercely with Apple and Google, as personal computing is reshaped once again.

It will also be interesting to see if Microsoft seizes the opportunity to re-enter the platform competition after its failure in the smartphone market and develop its own spatial AI operating system — or whether it will once again be late to the party. With the OpenAI partnership and its own experience from HoloLens development, success could theoretically be possible. But that’s a different topic altogether…


René Kasperek

I am an XR experience creator with a passion for coding, education and consulting. I work across education, arts & culture, entertainment and architecture.