The History of XR and The Metaverse
Pre-Digital Imagination (Pre-1800s)
Long before the first computer, humanity dreamed of alternate realities and immersive illusions. The seeds of what we now call virtual reality were planted in philosophy, art, and stagecraft centuries ago:
- Plato’s Allegory of the Cave (c. 380 BCE) – One of the first metaphors for simulated experience: people chained in a cave mistake shadows on the wall for reality, suggesting perception can be an illusion.
- Renaissance Perspective Art (1400s–1600s) – Artists like Brunelleschi and da Vinci used geometry to create 3D illusions on flat surfaces, making viewers feel they were looking through a window into another world.
- Theatrical Illusion and Stagecraft (1500s–1700s) – Baroque theaters used moving set pieces, trap doors, and forced perspective to simulate fantastical environments for live audiences.
- Magic Lantern Shows (1600s onward) – Early projection devices used light, glass slides, and narrative to create immersive multimedia experiences, often traveling town to town.
These techniques didn’t involve headsets or pixels—but they shared the same goal: to transport the viewer elsewhere.
First Wave: Origins of Media and Proto-Virtual Reality (1800s–1950s)
Before 3D graphics or immersive headsets, early media technologies laid the foundation for XR. The idea of transmitting sensory experience across distance—whether visual, auditory, or imaginative—began with inventions such as:
- Telegraph (1830s) – Enabled instant communication over long distances.
- Telephone (1876) – Added real-time voice to distant interaction.
- Panoramas and Stereoscopes (19th century) – Created early illusions of depth and immersion.
- Kinetoscope (1890s) – Edison’s motion picture viewer let users peer into moving scenes, a one-person cinematic experience.
The dream of virtual experience wasn’t new. Science fiction from the early 20th century, such as Pygmalion’s Spectacles (1935), imagined headsets that could deliver stories with sight, sound, touch, and smell. This short story by Stanley G. Weinbaum is often credited as the earliest detailed concept of a fully immersive virtual world, including the sensory experiences we now associate with VR.
By the mid-1900s, inventors were already experimenting with sensory immersion:
- Flight simulators (1920s) – Used mechanical rigs to replicate flight conditions for training.
- Morton Heilig’s Sensorama (1950s) – A multi-sensory machine combining visuals, vibration, sound, and smell, often credited as the first VR experience.
At the same time, networks began evolving. The development of early computers and packet-switching technologies led to the formation of the internet, which would become the backbone of all future virtual environments. Without the internet, virtual reality would have remained siloed. Instead, it became part of a vast, connectable system—laying the groundwork for the idea of a shared, immersive, digital universe.
Second Wave: Digital Immersion and 3D Breakthroughs (1960s–early 2020s)
The Rise of 3D Graphics
3D computer graphics changed the landscape of possibility for movies, product mockups, gaming, 3D printing, and more. 3D computer modeling is the creation of a three-dimensional digital representation of an object using software. The computer renders (calculates) the shadows, lighting, and angles of shapes, while also allowing real-time interaction with the 3D object from multiple angles.
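To make that rendering step concrete, here is a minimal sketch (in TypeScript, with illustrative names and values rather than any particular engine's API) of perspective projection, the core calculation that maps a point in 3D space onto a 2D screen. Lighting, shading, and real-time rotation all build on top of this mapping.

```typescript
// A minimal perspective projection: camera-space 3D point -> 2D pixel.
// Real engines do this with 4x4 matrices plus clipping and rasterization.

type Vec3 = { x: number; y: number; z: number };
type Vec2 = { x: number; y: number };

// Assumes the camera looks down the -z axis and the point sits in front
// of it (z < 0). fovY is the vertical field of view in radians.
function project(p: Vec3, width: number, height: number, fovY: number): Vec2 {
  const f = 1 / Math.tan(fovY / 2); // focal-length factor from the FOV
  const aspect = width / height;
  const ndcX = (f / aspect) * (p.x / -p.z); // normalized device coords, -1..1
  const ndcY = f * (p.y / -p.z);
  return {
    x: (ndcX + 1) * 0.5 * width,  // map -1..1 to pixel columns
    y: (1 - ndcY) * 0.5 * height, // flip y: screen y grows downward
  };
}

// A point one unit left and one up, three units ahead of the camera,
// lands up and to the left of center on a 640x480 viewport.
console.log(project({ x: -1, y: 1, z: -3 }, 640, 480, Math.PI / 3));
```

Farther objects produce smaller screen offsets because of the divide by -z, which is exactly the shrinking-with-distance effect that makes a flat image read as a 3D scene.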
One of the earliest uses of CGI in film came with Futureworld (1976), the first major feature film to use 3D computer-generated imagery, in the form of an animated 3D hand and face.
In the late ’70s, 3D computer graphics began reaching consumers. 3D Art Graphics, a set of graphics effects programmed by Kazumasa Mitazawa for the Apple II, was released in 1978 and marked the earliest known example of consumer-accessible 3D rendering.
The first true 3D video game is debated. Descent (1994) rendered fully 3D environments yet retained 2D elements, such as its cockpit HUD; others consider Quake (1996) the first true 3D first-person shooter for its fully 3D-rendered environments and models.
Pixar’s Toy Story (1995) was the first fully 3D computer-animated feature film. It was followed by titles like A Bug’s Life (1998), Shrek (2001), Jimmy Neutron: Boy Genius (2001), Ice Age (2002), and Finding Nemo (2003). Pixar quickly became a household name for 3D animation.
In the same year (1995), Dutch studio NeoGeo began developing Blender, a 3D modeling program that was later released as free and open-source software and remains so today. Blender is a popular alternative to paid packages like Autodesk Maya, known for its accessibility among independent creators.
The future of 3D lies in AR/VR applications—3D models that adapt to real-world lighting or respond to user movement.
The History of Extended Reality (XR)
Extended Reality (XR) is an umbrella term covering virtual reality (VR), augmented reality (AR), and mixed reality (MR). Though it feels futuristic, XR’s origins go back nearly a century.
- 1960s: Ivan Sutherland created the Sword of Damocles, a primitive VR headset that displayed wireframe graphics and tracked head movement.
The 1990s: From Fiction to Framework
- 1992: Sci-fi author Neal Stephenson coined the term “Metaverse” in his novel Snow Crash.
- 1992: The Virtual Fixtures system, an early AR training tool developed at the U.S. Air Force’s Armstrong Laboratory, overlaid information onto the real world using head-mounted displays.
- 1995: Nintendo launched the Virtual Boy, the first head-mounted console using stereoscopic 3D. Despite $25 million in promotion, it sold only 770,000 units and was discontinued within a year. The monochromatic display and lack of portability contributed to its failure, though its stereoscopic tech later reappeared in the successful 3DS handheld.
- Mid-1990s: VRML (Virtual Reality Modeling Language) emerged as a standard for building 3D web environments, but bandwidth limitations kept it from wide adoption.
- 1997: The first-ever Virtual Worlds Conference convened in San Francisco.
The internet boom was a parallel and critical piece. As browsers and broadband spread in the late 1990s and early 2000s, virtual experiences became globally accessible. The idea of digital identity, avatars, and shared spaces became plausible not just because of computing power—but because of a connected network of users.
2000s–2010s: The Social Shift
- 2003: Second Life launched, letting users create avatars, trade virtual goods, and host virtual events.
- 2004: World of Warcraft debuted, popularizing massively multiplayer online worlds.
- 2006: The Nintendo Wii popularized motion-controlled gameplay, though Sony’s EyeToy on PlayStation 2 preceded it by three years.
EyeToy (2003–2005): The Forgotten AR Pioneer
The EyeToy for PlayStation 2 was an early AR camera accessory that placed players on screen and recognized their movement for interaction. Games like EyeToy: Play (2003), its sequels Play 2 and Play 3, and EyeToy: AntiGrav (2004) created full-body control without traditional gamepads. While less remembered than the Wii, the EyeToy remains a landmark in early home-based AR gaming.
Oculus and the XR Boom
- 2012: Oculus was founded by Palmer Luckey, Brendan Iribe, Michael Antonov, and Nate Mitchell, launching a Kickstarter campaign for the Oculus Rift headset.
- 2014: Facebook acquired Oculus, betting big on VR.
- 2016: Pokémon Go demonstrated the mass appeal of AR by blending GPS, smartphone cameras, and real-world exploration.
The XR boom also saw companies like Sony (PlayStation VR), HTC (Vive), and Snapchat (AR filters) enter the market. Apple and Google introduced mobile AR developer kits (ARKit and ARCore).
The Third Wave: XR After Meta’s Metaverse Push (2021–2023)
When Facebook renamed itself Meta in October 2021, it didn’t just rebrand a company—it reignited a second wave of metaverse enthusiasm. The first wave began in the 1990s, when science fiction coined the term and early virtual worlds like Active Worlds, experimental 3D spaces, and (later) Second Life captured imaginations alongside the rise of the internet and open-world video games. Discussions of cyberspace, avatar identity, and shared digital environments emerged alongside the gaming boom and social media sites of the late ’90s and early 2000s. While those early efforts were limited by bandwidth and hardware, they planted the conceptual seeds. Meta’s rebrand revived those ideas on a global corporate stage. What had once been niche tech jargon became a global headline—and suddenly, every brand, developer, and futurist had an opinion on what the internet’s next chapter should look like. The pivot triggered an industry-wide rush: some hopeful, some cynical, all scrambling to define and claim a piece of this new virtual frontier.
Meta led with bold promises. Horizon Worlds opened in beta. Quest 2 sales surged. Face and eye tracking teased a future of photoreal avatars. A social VR space where users could build and monetize their own worlds sounded utopian, but quickly ran into the same problems as every platform before it: moderation, harassment, and monetization headaches. The 47.5% creator fee became a meme for corporate greed in a so-called open world. Still, Meta’s billions guaranteed attention—and imitation. Yet as enthusiasm spread, so did tension between vision and execution: many creators struggled with opaque algorithms, high platform fees, and limited monetization pathways that made XR creator economies difficult to sustain.
Within weeks, major companies unveiled their own “metaverse” strategies. Nike acquired RTFKT to sell virtual sneakers. Disney quietly filed patents for virtual theme parks. Forever 21 launched a Roblox experience where users could run their own fashion stores. Niantic pitched an AR metaverse grounded in physical space. Microsoft acquired Activision Blizzard, citing the metaverse as a major motivation. Everyone wanted in—but no one agreed on what “in” meant. Web3-based platforms like Decentraland and The Sandbox promised digital ownership and decentralized architecture, but struggled with user retention and were often swept up in the broader crypto market’s volatility.
XR hardware entered a new boom-bust cycle. HTC introduced the lightweight Vive Flow. Apple’s long-rumored headset failed to materialize, but stayed in headlines. Snap pushed wearable AR with Spectacles and sign language lenses. Meanwhile, new entrants like Shiftall, Somnium, and Nreal tested hardware at every price point. Enterprise-focused devices like Magic Leap 2 and Lenovo’s ThinkReality headset leaned into productivity over play. Haptic gear—from TactGlove to VR treadmills—promised deeper immersion, though mostly for developers and enthusiasts.
The software side bloomed in parallel. Beat Saber celebrated four years and broke sales records. Resident Evil 4 in VR proved traditional games could thrive in a headset. Cities: VR, Among Us VR, and NFL Pro Era brought new genres to headsets. Meanwhile, fitness apps like Supernatural and FitXR found devoted users, and tools like Arkio, Adobe’s Substance 3D, and Varjo’s Reality Cloud pushed XR beyond gaming into productivity and design. The promise of social and creator-centered platforms evolved too: Rec Room launched creation tools for Unity users, Engage XR hosted virtual universities, VRChat began testing mobile compatibility, and independent XR artists pushed new formats through browser-based WebXR galleries, underground XR theater, and immersive installations at both major and grassroots festivals.
But cracks appeared beneath the surface. Meta’s Reality Labs lost billions per quarter. Snap laid off a fifth of its workforce. AltspaceVR shut down. Microsoft disbanded the team behind its Mixed Reality Toolkit (MRTK). While the hype swelled, so did uncertainty. XR was no longer a side project—it was expected to perform, and many teams weren’t ready. Even Meta, the movement’s accidental leader, faced scrutiny for its unclear roadmap and relentless push despite user confusion.
And yet, real-world XR integration moved ahead. In hospitals, Osso VR and VisAR trained surgeons. At home, Snap AR filters let users try makeup and nail polish. AR wayfinding tools emerged in malls and airports. Meta opened its first retail store. The Vatican partnered with XR studios for virtual art galleries. Niantic mapped parks for immersive placemaking games. Amazon began embedding product links into AR games. From classrooms to sports arenas to concert venues, XR stopped being just an experiment—it started becoming infrastructure.

This included broader educational adoption beyond universities: K–12 schools used XR for science labs and language immersion, while job training programs explored virtual simulations for factory work and emergency response. Accessibility-focused projects used XR for virtual mobility training, immersive sign language teaching, and spatial audio guidance for blind users—reminding the industry that immersion also meant inclusion.

Meanwhile, eye tracking emerged as both a UX innovation and a privacy concern: enabling adaptive interfaces, analytics, and emotion detection while raising ethical questions about biometric data collection. Neural interface research—from Meta’s wrist-based EMG experiments to early BCI devices like NextMind and Neuralink—hinted at a future where intention could directly control virtual interaction, though most remained experimental.
By mid-2023, the term “metaverse” had already become polarizing, if not passé. Often underemphasized in mainstream coverage—despite consistent recognition in developer circles and week-to-week XR reporting—was NVIDIA. Through its Omniverse platform, NVIDIA positioned itself as the backbone of immersive computing: enabling digital twins, AI-enhanced simulations, and real-time collaboration. Its RTX-powered rendering and foundational role in both XR and AI toolchains made it central to progress in industrial and creative sectors. NVIDIA’s Omniverse enabled simulation environments for everything from climate modeling to architecture prototyping, used by firms like BMW and Ericsson for smart factory planning and digital twin experimentation. While Meta dominated headlines, NVIDIA quietly supplied the infrastructure many teams used to build the persistent, interoperable spaces the metaverse promised.

Outside the U.S., other ecosystems thrived as well: South Korea launched a national metaverse strategy centered on public services, Japan invested in XR creator tools, and China issued draft regulations outlining how its version of the metaverse should evolve—revealing how immersive futures were being shaped by distinct regional values and governance models.

Photogrammetry tools like Epic’s RealityScan, Polycam, and Luma AI made it easier than ever to bring real-world objects into virtual spaces. These tools weren’t just technical innovations—they became crucial for XR’s role in digital preservation, helping archive cultural heritage sites, endangered architecture, and even physical protest installations. Meanwhile, open standards like OpenXR, glTF, and WebGPU enabled more cross-platform compatibility and interoperability, helping reduce friction for developers and users alike. The 3D web slowly advanced toward a shared language, even as app stores and SDKs remained fragmented and often difficult to navigate for indie creators.

And even as many headlines focused on big tech companies, independent creators using platforms like VRChat, Rec Room, and WebXR kept XR culture alive through community worlds, avatar modding, and storytelling festivals at places like Tribeca and Venice. While corporate XR saw waves of consolidation and shutdowns, indie developers remained resilient—pivoting to open tools, browser-native platforms, and crowdfunded distribution models. Despite platform shutdowns like AltspaceVR, XR users found new spaces to gather, migrate, and preserve what they’d built.

Gesture-based interaction and hand tracking also became more central, as companies like Meta, Ultraleap, and Apple emphasized controller-free input as a more intuitive gateway into spatial computing. WebXR kept immersive experiences browser-accessible, even as proprietary platforms shut down (a minimal sketch of this browser entry point appears below). Community projects like VRChat festivals, archival scans of lost worlds, and experimental virtual museums helped preserve digital memory in ways that challenged the idea of XR as ephemeral. These moments highlighted XR’s fragility—platform closures often erased years of community-building and user-generated content, raising questions about permanence, portability, and digital memory.

Generative AI—especially with the rise of ChatGPT, Midjourney, and Sora—captured public imagination. Tools that could instantly generate text, images, music, code, and video began to feel more tangible and accessible than headsets and avatars.
While XR still promised immersion, AI offered instant creation—and platforms like Runway, Pika, and ElevenLabs expanded the creative toolkit even further. Venture funding flowed into AI startups, often redirecting investor attention away from immersive tech. XR coverage waned, even as deeper integrations between AI and XR quietly began to take shape. Advances in edge computing and 5G opened doors for cloud-streamed XR, reducing headset hardware demands and allowing for richer environments to be delivered remotely.
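The browser-accessible thread mentioned above is easy to make concrete. What follows is a minimal TypeScript sketch of the WebXR Device API, assuming a page with a hypothetical “enter-vr” button; the optional features requested are illustrative, and the inline types stand in for the full typings available via the @types/webxr package. The point is how little a page needs in order to offer an immersive session without an app store.

```typescript
// Minimal structural types for the slice of WebXR used here.
interface XRSessionLike {
  addEventListener(type: "end", listener: () => void): void;
}
interface XRSystemLike {
  isSessionSupported(mode: string): Promise<boolean>;
  requestSession(
    mode: string,
    init?: { optionalFeatures?: string[] }
  ): Promise<XRSessionLike>;
}

async function enterVR(): Promise<void> {
  // navigator.xr is absent in non-WebXR browsers, so feature-detect first.
  const xr = (navigator as Navigator & { xr?: XRSystemLike }).xr;
  if (!xr || !(await xr.isSessionSupported("immersive-vr"))) {
    console.log("Immersive VR is not available in this browser.");
    return;
  }
  // Browsers only grant sessions in response to a user gesture (the click).
  const session = await xr.requestSession("immersive-vr", {
    optionalFeatures: ["local-floor", "hand-tracking"], // illustrative extras
  });
  session.addEventListener("end", () => console.log("XR session ended."));
  // A real app would now create a WebGL layer and start the session's
  // frame loop to render each eye's view to the headset.
}

document.getElementById("enter-vr")?.addEventListener("click", () => {
  enterVR().catch(console.error);
});
```

Because entry points like this live at a URL rather than in a store, worlds built on WebXR could outlast the proprietary platforms shutting down around them.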
Meta’s Reality Labs continued to bleed money, and internal morale fluctuated. Executives departed, and reports emerged about unclear direction within Horizon. At the same time, Microsoft laid off teams in its XR division, Snap scaled back Spectacles plans, and Niantic shuttered several projects—including its Harry Potter AR game—while continuing to promote Lightship as a more grounded AR platform. Apple’s Vision Pro announcement in mid-2023 reignited some XR excitement, but even that was framed through the lens of “spatial computing”—a softer, more Apple-esque distancing from the metaverse buzzword.
The cultural zeitgeist had moved. XR didn’t disappear, but it lost the spotlight. Some companies even scrubbed references to the “metaverse” from marketing entirely, reframing their work as AI-driven or spatial tech instead. Meanwhile, XR remained active in schools, maker spaces, and art collectives—far from the headlines, but not from impact. It now shared space with AI, automation, and ethical questions about machine creativity. Developers still built XR tools. Users still logged into Rec Room, still danced in Beat Saber, still trained in virtual surgery simulators. But the metaverse no longer felt like a revolution on the edge of happening. It felt like a foundation quietly being laid—waiting for better hardware, broader access, and a more focused sense of purpose.
Meta’s rebrand marked a shift. It led to industry experimentation, renewed creative focus, and a wave of both enthusiasm and skepticism. Developers also grappled with friction from a fragmented ecosystem—SDK lock-ins, inconsistent app stores, and limited platform interoperability—while open standards like glTF and WebXR aimed to smooth the path forward. XR progressed faster than it had in years—but struggled to keep pace with shifting public expectations. Motion sickness, headset discomfort, and high price points continued to limit adoption, even as software advanced. While developers refined their worlds and hardware gradually improved, generative AI quickly captured the spotlight. It offered fast, accessible creation: text, images, music, and code generated from a single prompt. Unlike XR, you didn’t need a headset to be impressed.
As public focus shifted, many companies repositioned. Meta started embedding AI into avatars and productivity tools, echoing work already underway in NVIDIA’s ACE (Avatar Cloud Engine), which used AI to simulate realistic speech and behavior in virtual humans. Microsoft leaned harder into GPT integration than into Mesh. Snap doubled down on AR ads. At the same time, wearable tech at the fringes of XR gained traction: Humane’s AI Pin, Meta’s Ray-Ban Stories, and smart rings from companies like Ultrahuman suggested a blurring between XR, ambient computing, and always-on AI assistants. And even Apple, with the long-anticipated Vision Pro, avoided the term “metaverse” altogether. Instead, it introduced “spatial computing”—a new narrative for an old dream.
The metaverse didn’t die, but it hit the “Trough of Disillusionment”—a stage in the Gartner Hype Cycle where early hype fades and reality sets in. Hardware friction, market uncertainty, and unmet expectations slowed progress. Yet what’s left is more grounded for long-term growth: researchers refining avatars, surgeons practicing in VR, students exploring virtual campuses, and artists continuing to build imaginative worlds that don’t go viral—but don’t vanish either.
The real metaverse is still emerging—not in centralized marketing decks or keynote slides, but through slow, uneven adoption: surgeons using VR, teachers leading virtual classes, artists prototyping in Tilt Brush and Gravity Sketch, and startups still experimenting with niche hardware like the Lynx R-1 or Tilt Five. Even failed experiments like Diver-X’s HalfDive revealed a hunger for new interaction models. The groundwork is still being laid—slowly, quietly, and unevenly. Whether the world joins in remains to be seen—but AI may not be a distraction from the metaverse’s future so much as a necessary ingredient. Tools like generative media, intelligent avatars, and latent space simulations could lower the barrier to creating vast, personalized worlds. AI-powered procedural environments, real-time character behavior, and ambient worldbuilding could make world creation accessible to non-coders and accelerate development across both entertainment and enterprise sectors. As XR matures, AI might be the thing that finally makes the metaverse feel real.
