Stanford University

Stanford researchers are using artificial intelligence to create better virtual reality experiences

Working at the intersection of hardware and software engineering, researchers are developing new techniques for improving 3D displays for virtual and augmented reality technologies.

Virtual and augmented reality headsets are designed to place wearers directly into other environments, worlds and experiences. While the technology is already popular among consumers for its immersive quality, there could be a future where the holographic displays look even more like real life. In pursuit of these better displays, the Stanford Computational Imaging Lab has combined its expertise in optics and artificial intelligence. Its most recent advances in this area are detailed in a paper published Nov. 12 in Science Advances and in work that will be presented at SIGGRAPH Asia 2021 in December.


Photograph of a holographic display prototype. (Image credit: Stanford Computational Imaging Lab)

At its core, this research confronts the fact that current augmented and virtual reality displays only show 2D images to each of the viewer’s eyes, instead of 3D – or holographic – images like we see in the real world.

“They are not perceptually realistic,” explained Gordon Wetzstein, associate professor of electrical engineering and leader of the Stanford Computational Imaging Lab. Wetzstein and his colleagues are working to come up with solutions to bridge this gap between simulation and reality while creating displays that are more visually appealing and easier on the eyes.

The research published in Science Advances details a technique for reducing a speckling distortion often seen in regular laser-based holographic displays, while the SIGGRAPH Asia paper proposes a technique to more realistically represent the physics that would apply to the 3D scene if it existed in the real world.

Bridging simulation and reality

For decades, the image quality of holographic displays has been limited. As Wetzstein explains it, researchers have been faced with the challenge of getting a holographic display to look as good as an LCD display.

One problem is that it is difficult to control the shape of light waves at the resolution of a hologram. The other major challenge hindering the creation of high-quality holographic displays is overcoming the gap between what is going on in the simulation versus what the same scene would look like in a real environment.

Previously, scientists have attempted to create algorithms to address both of these problems. Wetzstein and his colleagues also developed algorithms but did so using neural networks, a form of artificial intelligence that attempts to mimic the way the human brain learns information. They call this “neural holography.”

Video showing how the researchers’ neural holography model compares to current state-of-the-art algorithms when applied to 3D scenes. (Stanford Computational Imaging Lab)

“Artificial intelligence has revolutionized pretty much all aspects of engineering and beyond,” said Wetzstein. “But in this specific area of holographic displays or computer-generated holography, people have only just started to explore AI techniques.”

Yifan Peng, a postdoctoral research fellow in the Stanford Computational Imaging Lab, is using his interdisciplinary background in both optics and computer science to help design the optical engine to go into the holographic displays.

“Only recently, with the emerging machine intelligence innovations, have we had access to the powerful tools and capabilities to make use of the advances in computer technology,” said Peng, who is co-lead author of the Science Advances paper and a co-author of the SIGGRAPH paper.

To create their neural holographic display, the researchers trained a neural network to mimic the real-world physics of what was happening in the display, achieving real-time image generation. They then paired this with a “camera-in-the-loop” calibration strategy that provides near-instantaneous feedback to inform adjustments and improvements. With an algorithm and calibration technique that run in real time alongside the displayed image, the researchers were able to create more realistic-looking visuals with better color, contrast and clarity.
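The paper's trained networks are not reproduced here, but the computer-generated holography problem they address can be sketched with the classical iterative baseline such methods improve upon: Gerchberg-Saxton phase retrieval under an idealized far-field (single-FFT) propagation model. The propagation model, target pattern, and iteration count below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iters=100, seed=0):
    """Iteratively solve for a phase-only hologram whose far-field
    intensity approximates target_amp**2 (a classical CGH baseline,
    not the neural method described in the article)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(n_iters):
        # Forward propagate the unit-amplitude SLM field (FFT = far field)
        image_field = np.fft.fft2(np.exp(1j * phase))
        # Enforce the target amplitude at the image plane, keep its phase
        image_field = target_amp * np.exp(1j * np.angle(image_field))
        # Back-propagate and keep only the phase (the SLM is phase-only)
        phase = np.angle(np.fft.ifft2(image_field))
    return phase

# Hypothetical target: a bright square on a dark background
target = np.zeros((64, 64))
target[16:48, 16:48] = 1.0

phase = gerchberg_saxton(target, n_iters=100)
recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
corr = np.corrcoef(recon.ravel(), target.ravel())[0, 1]
```

Camera-in-the-loop optimization replaces the idealized forward model above with feedback from a physical camera, and neural holography replaces it with a learned model of the display's actual physics, so that the displayed image, not just the simulated one, matches the target.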

The new SIGGRAPH Asia paper highlights the lab's first application of its neural holography system to 3D scenes. The system produces high-quality, realistic representations of scenes that contain visual depth, even when parts of the scenes are intentionally depicted as far away or out of focus.

The Science Advances work uses the same camera-in-the-loop optimization strategy, paired with an artificial intelligence-inspired algorithm, to provide an improved system for holographic displays that use partially coherent light sources – LEDs and SLEDs. These light sources are attractive for their cost, size and energy requirements and they also have the potential to avoid the speckled appearance of images produced by systems that rely on coherent light sources, like lasers. But the same characteristics that help partially coherent source systems avoid speckling tend to result in blurred images with a lack of contrast. By building an algorithm specific to the physics of partially coherent light sources, the researchers have produced the first high-quality and speckle-free holographic 2D and 3D images using LEDs and SLEDs.
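The speckle tradeoff described here can be illustrated numerically. Fully developed speckle from a coherent source has unit contrast (intensity standard deviation equal to the mean), while summing N mutually incoherent speckle patterns, a crude stand-in for a partially coherent LED or SLED, lowers the contrast roughly as 1/sqrt(N). This toy model is an illustration of the general physics, not code from the papers:

```python
import numpy as np

def speckle_contrast(n_modes, size=256, seed=0):
    """Speckle contrast (std/mean of intensity) when n_modes mutually
    incoherent, fully developed speckle patterns add in intensity."""
    rng = np.random.default_rng(seed)
    intensity = np.zeros((size, size))
    for _ in range(n_modes):
        # Random-phase scatterer -> far-field fully developed speckle
        phases = rng.uniform(0.0, 2.0 * np.pi, (size, size))
        field = np.fft.fft2(np.exp(1j * phases))
        intensity += np.abs(field) ** 2
    return intensity.std() / intensity.mean()

coherent = speckle_contrast(1)   # laser-like source: contrast ~ 1
partial = speckle_contrast(25)   # LED-like source: contrast ~ 1/sqrt(25)
```

The drop in contrast is exactly the smoothing that makes LEDs and SLEDs attractive, and also what blurs the image unless, as in the paper, the algorithm is built around the partially coherent physics.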

Transformative potential

Wetzstein and Peng believe this coupling of emerging artificial intelligence techniques along with virtual and augmented reality will become increasingly ubiquitous in a number of industries in the coming years.

“I’m a big believer in the future of wearable computing systems and AR and VR in general, I think they’re going to have a transformative impact on people’s lives,” said Wetzstein. It might not be for the next few years, he said, but Wetzstein believes that augmented reality is the “big future.”

Though virtual reality is primarily associated with gaming right now, it and augmented reality have potential uses in a variety of fields, including medicine. Medical students can use augmented reality for training as well as for overlaying medical data from CT scans and MRIs directly onto patients.

“These types of technologies are already in use for thousands of surgeries per year,” said Wetzstein. “We envision that head-worn displays that are smaller, lighter weight and just more visually comfortable are a big part of the future of surgery planning.”

“It is very exciting to see how the computation can improve the display quality with the same hardware setup,” said Jonghyun Kim, a visiting scholar from Nvidia and co-author of both papers. “Better computation can make a better display, which can be a game changer for the display industry.”

Stanford graduate student Suyeon Choi is co-lead author of both papers, and Stanford graduate student Manu Gopakumar is co-lead author of the SIGGRAPH paper. This work was funded by Ford, Sony, Intel, the National Science Foundation, the Army Research Office, a Kwanjeong Scholarship, a Korea Government Scholarship and a Stanford Graduate Fellowship.

Peter Rubin and Jaina Grey

The WIRED Guide to Virtual Reality

All hail the headset. Or, alternatively, all ignore the headset, because it’s gonna be a dismal failure anyway.

That’s pretty much the conversation around virtual reality (VR), a technology by which computer-aided stimuli create the immersive illusion of being somewhere else—and a topic on which middle ground is about as scarce as affordable housing in Silicon Valley.

VR is either going to upend our lives in a way nothing has since the smartphone, or it’s the technological equivalent of trying to make “fetch” happen. The poles of that debate were established in 2012, when VR first reemerged from obscurity at a videogame trade show; they’ve persisted through Facebook’s $3 billion acquisition of headset maker Oculus in 2014, through years of refinement and improvement, and well into the first-and-a-half generation of consumer hardware.

The truth is likely somewhere in between. But either way, virtual reality represents an extraordinary shift in the way humans experience the digital realm. Computing has always been a mediated experience: People pass information back and forth through screens and keyboards. VR promises to do away with that pesky middle layer altogether. As does VR's cousin augmented reality (AR), which is sometimes called mixed reality (MR)—not to mention that VR, AR, and MR can all be lumped into the umbrella term XR, for "extended reality."

VR depends on headsets, while AR is (for now, at least) more commonly experienced through your phone. Got all that? Don't worry, we're generally just going to stick with VR for the purposes of this guide. By enveloping you in an artificial world, or bringing virtual objects into your real-world environment, "spatial computing" allows you to interact more intuitively with those objects and information.

Now VR is finally beginning to come of age, having survived the troublesome stages of the famous "hype cycle"—the Peak of Inflated Expectations, even the so-called Trough of Disillusionment. But it's doing so at a time when people are warier about technology than they've ever been. Privacy breaches, internet addiction, toxic online behavior: These ills are all at the forefront of the cultural conversation, and they all have the potential to be amplified many times over by VR and AR. As with the technology itself, "potential" is only one road of many. But, since VR and AR are poised to make significant leaps in the next two years (for real this time!), there's no better time to engage with their promise and their pitfalls.


The current life cycle of virtual reality may have begun when the earliest prototypes of the Oculus Rift showed up at the E3 videogame trade show in 2012, but it’s been licking at the edges of our collective consciousness for more than a century. The idea of immersing ourselves in 3D environments dates all the way back to the stereoscopes that captivated people's imaginations in the 19th century. If you present an almost identical image to each eye, your brain will combine them and find depth in their discrepancies; it's the same mechanism View-Masters used to become a childhood staple.

When actual VR took root in our minds as an all-encompassing simulacrum is a little fuzzier. As with most technological breakthroughs, the vision likely began with science fiction—specifically Stanley G. Weinbaum’s 1935 short story “Pygmalion’s Spectacles,” in which a scientist devises a pair of glasses that can "make it so that you are in the story, you speak to the shadows, and the shadows reply, and instead of being on a screen, the story is all about you, and you are in it."

Moving beyond stereoscopes and toward those magical glasses took a little more time, however. In the late 1960s, a University of Utah computer science professor named Ivan Sutherland—who had invented Sketchpad, the predecessor of the first graphic computer interface, as an MIT student—created a contraption called the Sword of Damocles.

The name was fitting: The Sword of Damocles was so large it had to be suspended from the ceiling. Nonetheless, it was the first "head-mounted display"; users who had its twin screens attached to their head could look around the room and see a virtual 3D cube hovering in midair. (Because you could also see your real-world surroundings, this was more like AR than VR, but it remains the inspiration for both technologies.)

Sutherland and his colleague David Evans eventually joined the private sector, adapting their work to flight simulator products. The Air Force and NASA were both actively researching head-mounted displays as well, leading to massive helmets that could envelop pilots and astronauts in the illusion of 360-degree space. Inside the helmets, pilots could see a digital simulation of the world outside their plane, with their instruments superimposed in 3D over the display; when they moved their heads the display would shift, reflecting whatever part of the world they were "looking" at.

None of this technology had a true name, though—at least not until the 1980s, when a twenty-something college dropout named Jaron Lanier dubbed it "virtual reality." (The phrase was first used by French playwright Antonin Artaud in a 1933 essay.) The company Lanier cofounded, VPL Research, created the first official products that could deliver VR: the EyePhone (yup), the DataGlove, and the DataSuit. They delivered a compelling, if graphically primitive, experience, but they were slow, uncomfortable, and—at more than $350,000 for a full setup for two people, including the computer to run it all—prohibitively expensive.

Yet, led by VPL’s promise and fueled by sci-fi writers, VR captured the popular imagination in the first half of the 1990s. If you didn't read Neal Stephenson's 1992 novel Snow Crash, you may have seen the movie Lawnmower Man that same year—a divine piece of schlock that featured VPL's gear (and was so far removed from the Stephen King short story it purported to adapt that King sued to have his name removed from the poster). It wasn't just colonizing genre movies or speculative fiction: VR figured prominently in syndicated live-action kiddie fare like VR Troopers, and even popped up in episodes of Murder, She Wrote and Mad About You.


In the real world, virtual reality was promised to gamers everywhere. In arcades and malls, Virtuality pods let people play short VR games (remember Dactyl Nightmare?); in living rooms, Nintendo called its 3D videogame system "Virtual Boy," conveniently ignoring the fact that the headsets delivered headaches rather than actual VR. (The Virtual Boy was discontinued six months after release.) VR proved unable to deliver on its promise, and its cultural presence eventually dried up. Research continued in academia and private-sector labs, but VR simply ceased to exist as a viable consumer technology.

Then the smartphone came along.

Phones featured compact high-resolution displays; they contained tiny gyroscopes and accelerometers; they boasted mobile processors that could handle 3D graphics. And all of a sudden, the hardware limitations that stood in the way of VR weren't a problem anymore.

In 2012, id Software cofounder and virtual-reality aficionado John Carmack came to the E3 videogame trade show with a special surprise: He had borrowed a prototype of a headset created by a 19-year-old VR enthusiast named Palmer Luckey and hacked it to run a VR version of the game Doom. Its face was covered with duct tape, and a strap ripped from a pair of Oakley ski goggles was all that held it to your head, but it worked. When people put on the headset, they found themselves surrounded by the 3D graphics they'd normally see on a TV or monitor. They weren't just playing Doom—they were inside it.

Things happened fast after that. Luckey's company, Oculus, raised more than $2 million on Kickstarter to produce the headset, which he called the Oculus Rift. In 2014, Facebook purchased Oculus for nearly $3 billion. ("Oculus has the chance to create the most social platform ever, and change the way we work, play and communicate," Mark Zuckerberg said at the time.)

In 2016, the first wave of dedicated consumer VR headsets arrived, though all three were effectively peripherals rather than full systems: The Oculus Rift and the HTC Vive each connected to high-powered PCs, and the PlayStation VR system ran off a PlayStation 4 game console. In 2018, the first "stand-alone" headsets hit the market. They don't connect to a computer or depend on your smartphone to supply the display and processing; they're self-contained, all-in-one devices that make VR truly easy to use for the first time ever.

In 2020 the world of VR is going to be defined by these stand-alone headsets. The tethered-to-a-desktop headsets are still a high-end option for die-hards looking for the highest-fidelity experiences possible, but an untethered stand-alone headset delivers on the promise of deeply immersive VR in a way previous tethered versions just haven’t—at least not without serious cash spent on hardware and accessories. The first next-gen stand-alone headsets are starting to hit store shelves already. Oculus released its version, the Oculus Quest, back in May 2019, and HTC is poised to release a modular competitor, the Vive Cosmos Play, later this year.


What all this is for is a question that doesn't have a single answer. The easiest but least satisfying response is that it's for everything. Beyond games and other interactive entertainment, VR shows promising applications for pain relief and PTSD, for education and design, for both telecommuting and office work. Thanks to "embodied presence"—you occupy an avatar in virtual space—social VR is not just more immersive than any digitally mediated communication we've ever experienced, but more affecting as well. The experiences we have virtually, from our reactions to our surroundings to the quality of our interactions, are stored and retrieved in our brains like any other experiential memory.

Yet, for all the billions of dollars poured into the field, nothing has yet emerged as the iPhone of VR: the product that combines compelling technology with an intuitive, desirable form. And while augmented and mixed reality are still a few years behind VR, it stands to reason that these related technologies won't remain distinct for long, instead merging into a single device that can deliver immersive, shut-out-the-world VR experiences—and then become transparent to let you interact with the world again.

That may end up coming from Apple; the Cupertino company is reportedly at work on a headset that could launch as early as 2020. Meanwhile, the incredibly well-funded and even more incredibly secretive company Magic Leap has recently emerged from years of guarded development to launch the first developer-only version of its own AR headset; the company has said its device will be able to deliver traditional VR as well as hologram-driven mixed reality.

But even with that sort of device, we're at the beginning of a long, uncertain road—not because of what the technology can do, but because of how people could misuse it. The internet is great; how people treat each other on the internet, not so much. Apply that logic to VR, where being embodied as an avatar means you have personal boundaries that can be violated, and where spatialized audio and haptic feedback lets you hear and feel what other people are saying and doing to you, and you're looking at a potential for harassment and toxic behavior that's exponentially more visceral and traumatizing than anything on conventional social media.

And then there's the question of authentication. The internet has given us phishing and catfishing, deep fakes, and fake news. Transpose any one of those into an all-encompassing experiential medium, and it's not hard to imagine what a bad actor (or geopolitical entity) could accomplish.

Those are the darkest timelines, for sure—and despite what the creators of Black Mirror seem to think, there's no guarantee things will swing that way. But if we've learned anything from how our lawmakers think about technology, it's that they don't think about it hard enough, and they don't think about it soon enough. So it's better to have these conversations now before we find ourselves trying to answer questions no one saw coming.

Besides, the way things are going, there's going to be a lot of good coming at us in the next few years. Let's try to keep it that way.

Updated March 2020: We've added some commentary about the state of VR in 2020 to reflect changes in the landscape.


  • The Untold Story of Magic Leap, the World’s Most Secretive Startup. When the first wave of high-end VR headsets landed in 2016, they realized a decades-long dream—but there was another technology already on the horizon.
  • The Inside Story of Oculus Rift and How Virtual Reality Became Reality. When the Oculus Rift first showed up at a videogame trade show in 2012, it was meant to be a Kickstarter project for a few VR die-hards. Turns out reality had other plans.
  • Coming Attractions: The Rise of VR Porn. Like many new technologies over the years, VR found an early foothold in the adult-film industry. But the results may upend everything you thought you knew about porn.
  • The Display of the Future Might Be in Your Contact Lens. AR is moving from our smartphones to eyeglasses and now contact lenses. This new company is at the frontier.
  • What a Real Wedding in a Virtual Space Says About the Future. They met in VR. They grew close in VR. They got married in VR, surrounded by their friends from around the world.
  • As Social VR Grows, Users Are the Ones Building Its Worlds. VR's growth hinges on the creativity of the people wearing the headset as much as it does on the technology powering it.
  • Facebook's Bizarre VR App Is Exactly Why Zuck Bought Oculus. When Facebook announced its social VR app, Spaces, it gave people their first look at why the company paid $3 billion to acquire the headset maker.

Enjoyed this deep dive? Check out more WIRED Guides.




  • Review Article
  • Open Access
  • Published: 25 October 2021

Augmented reality and virtual reality displays: emerging technologies and future perspectives

Jianghao Xiong, En-Lin Hsiang, Ziqian He, Tao Zhan & Shin-Tson Wu

Light: Science & Applications volume 10, Article number: 216 (2021)

With rapid advances in high-speed communication and computation, augmented reality (AR) and virtual reality (VR) are emerging as next-generation display platforms for deeper human-digital interactions. Nonetheless, simultaneously matching the exceptional performance of human vision and keeping the near-eye display module compact and lightweight imposes unprecedented challenges on optical engineering. Fortunately, recent progress in holographic optical elements (HOEs) and lithography-enabled devices provides innovative ways to tackle obstacles in AR and VR that are otherwise difficult to overcome with traditional optics. In this review, we begin by introducing the basic structures of AR and VR headsets and then describe the operating principles of various HOEs and lithography-enabled devices. Their properties are analyzed in detail, including the strong wavelength and angular selectivity and multiplexing ability of volume HOEs, the polarization dependency and active switching of liquid-crystal HOEs, the fabrication and properties of micro-LEDs (light-emitting diodes), and the large design freedom of metasurfaces. Afterwards, we discuss how these devices help enhance AR and VR performance, with detailed descriptions and analysis of some state-of-the-art architectures. Finally, we offer a perspective on potential developments and research directions for these photonic devices in future AR and VR displays.


Recent advances in high-speed communication and miniature mobile computing platforms have escalated a strong demand for deeper human-digital interactions beyond traditional flat panel displays. Augmented reality (AR) and virtual reality (VR) headsets 1,2 are emerging as next-generation interactive displays with the ability to provide vivid three-dimensional (3D) visual experiences. Their useful applications include education, healthcare, engineering, and gaming, just to name a few 3,4,5. VR embraces a totally immersive experience, while AR promotes interaction among the user, digital content, and the real world, displaying virtual images while retaining see-through capability. In terms of display performance, AR and VR face several common challenges in satisfying demanding human vision requirements, including field of view (FoV), eyebox, angular resolution, dynamic range, and correct depth cues. Another pressing demand, although not directly related to optical performance, is ergonomics. To provide a user-friendly wearing experience, AR and VR devices should be lightweight and ideally have a compact, glasses-like form factor. These requirements, nonetheless, often entail tradeoffs with one another, which makes the design of high-performance AR/VR glasses/headsets particularly challenging.

In the 1990s, AR/VR experienced its first boom, which quickly subsided due to the lack of eligible hardware and digital content 6. Over the past decade, the concept of immersive displays was revisited and received a new round of excitement. Emerging technologies like holography and lithography have greatly reshaped AR/VR display systems. In this article, we first review the basic requirements of AR/VR displays and their associated challenges. Then, we briefly describe the properties of two emerging technologies: holographic optical elements (HOEs) and lithography-based devices (Fig. 1). Next, we introduce VR and AR systems separately because of their different device structures and requirements. For the immersive VR system, we discuss the major challenges and how these emerging technologies help mitigate them. For the see-through AR system, we first review the present status of light engines and then introduce several architectures for optical combiners. Performance summaries of microdisplay light engines and optical combiners are provided as a comprehensive overview of current AR display systems.

Figure 1

The left side illustrates HOEs and lithography-based devices. The right side shows the challenges in VR and the architectures in AR, and how the emerging technologies can be applied.

Key parameters of AR and VR displays

AR and VR displays face several common challenges in satisfying demanding human vision requirements, such as FoV, eyebox, angular resolution, dynamic range, and correct depth cues. These requirements often exhibit tradeoffs with one another. Before diving into detailed relations, it is beneficial to review the basic definitions of the above-mentioned display parameters.

Definition of parameters

Take a VR system (Fig. 2a) as an example. The light emitted from the display module is projected to a FoV, which can be translated to the size of the image perceived by the viewer. For reference, human vision's horizontal FoV can be as large as 160° for monocular vision and 120° for overlapped binocular vision 6. The intersection area of ray bundles forms the exit pupil, which is usually correlated with another parameter called the eyebox. The eyebox defines the region within which the whole image FoV can be viewed without vignetting. It therefore generally manifests a 3D geometry 7, whose volume is strongly dependent on the exit pupil size. A larger eyebox offers more tolerance to accommodate users' diverse interpupillary distances (IPD) and wiggling of the headset when in use. Angular resolution is defined by dividing the total resolution of the display panel by the FoV, and measures the sharpness of a perceived image. For reference, a human visual acuity of 20/20 amounts to 1 arcmin angular resolution, or 60 pixels per degree (PPD), which is considered a common goal for AR and VR displays. Another important feature of a 3D display is the depth cue. A depth cue can be induced by displaying two separate images to the left eye and the right eye, which forms the vergence cue. But the fixed depth of the displayed image often mismatches the actual depth of the intended 3D image, which leads to incorrect accommodation cues. This mismatch causes the so-called vergence-accommodation conflict (VAC), which will be discussed in detail later. One important observation is that the VAC issue may be more serious in AR than in VR, because the image in an AR display is directly superimposed onto the real world, which has correct depth cues. The image contrast depends on the display panel and stray light. To achieve a high dynamic range, the display panel should exhibit high brightness, a low dark level, and more than 10 bits of gray levels. Nowadays, the display brightness of a typical VR headset is about 150–200 cd/m² (or nits).
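A quick worked example of the angular-resolution definition (the 2160-pixel panel width is an assumed illustration, not a figure from the text):

```python
def pixels_per_degree(panel_pixels, fov_deg):
    """Angular resolution: panel pixels spread across the field of view."""
    return panel_pixels / fov_deg

# 20/20 visual acuity resolves 1 arcmin; 60 arcmin per degree -> 60 PPD
retinal_ppd = 60

# Hypothetical headset: a 2160-pixel-wide panel per eye over a 100-degree FoV
ppd = pixels_per_degree(2160, 100)   # 21.6 PPD, well short of the 60 PPD goal
pixels_needed = retinal_ppd * 100    # 6000 pixels ("6K") per eye for 60 PPD
```

The last line is the same arithmetic behind the 6K-per-eye requirement discussed in the tradeoff section below.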

Figure 2

a Schematic of a VR display defining FoV, exit pupil, eyebox, angular resolution, and accommodation cue mismatch. b Sketch of an AR display illustrating ACR

Figure 2b depicts a generic structure of an AR display. The definitions of the above parameters remain the same. One major difference is the influence of ambient light on the image contrast. For a see-through AR display, the ambient contrast ratio (ACR) 8 is commonly used to quantify the image contrast:

ACR = (L_on + L_am · T) / (L_off + L_am · T)

where L_on (L_off) represents the on (off)-state luminance (unit: nit), L_am is the ambient luminance, and T is the see-through transmittance. In general, ambient light is measured in illuminance (lux). For convenience of comparison, we convert illuminance to luminance by dividing by a factor of π, assuming the emission profile is Lambertian. In a normal living room, the illuminance is about 100 lux (i.e., L_am ≈ 30 nits), while in a typical office lighting condition, L_am ≈ 150 nits. Outdoors, L_am ≈ 300 nits on an overcast day and L_am ≈ 3000 nits on a sunny day. For AR displays, the minimum ACR should be 3:1 for recognizable images, 5:1 for adequate readability, and ≥10:1 for outstanding readability. As a simple estimate that ignores optical losses, achieving ACR = 10:1 on a sunny day (~3000 nits) requires the display to deliver a brightness of at least 30,000 nits. This imposes big challenges in finding a high-brightness microdisplay and designing a low-loss optical combiner.
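As a sanity check on that estimate, the ACR expression can be inverted for the required on-state luminance. The idealized numbers here (lossless combiner, zero off-state leakage) are assumptions for illustration:

```python
def ambient_contrast_ratio(l_on, l_off, l_am, t):
    """ACR = (L_on + L_am*T) / (L_off + L_am*T), all luminances in nits."""
    return (l_on + l_am * t) / (l_off + l_am * t)

def brightness_for_acr(target_acr, l_off, l_am, t):
    """Solve the ACR definition for the required on-state luminance L_on."""
    return target_acr * (l_off + l_am * t) - l_am * t

# Sunny day (~3000 nits ambient), ideal combiner (T = 1), no dark-state leakage
needed = brightness_for_acr(10, l_off=0.0, l_am=3000.0, t=1.0)  # 27000 nits
```

Under these ideal assumptions the requirement is 27,000 nits; real off-state leakage and optical losses push it to the 30,000-nit ballpark quoted above and beyond.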

Tradeoffs and potential solutions

Next, let us briefly review the tradeoffs mentioned earlier. To begin with, a larger FoV leads to a lower angular resolution for a given display resolution. In theory, overcoming this tradeoff only requires a high-resolution display source, along with high-quality optics to support the corresponding modulation transfer function (MTF). Attaining 60 PPD across a 100° FoV requires a 6K resolution for each eye. This may be realizable in VR headsets because a large display panel, say 2–3 inches, can still accommodate a high resolution at acceptable manufacturing cost. However, for a glasses-like wearable AR display, the conflict between small display size and high resolution becomes obvious, as further shrinking the pixel size of a microdisplay is challenging.

To circumvent this issue, the concept of the foveated display has been proposed 9 , 10 , 11 , 12 , 13 . The idea is based on the fact that the human eye only has high visual acuity in the central fovea region, which covers about 10° of the FoV. If the high-resolution image is projected only to the fovea while the peripheral image remains at low resolution, then a microdisplay with 2K resolution can satisfy the need. Regarding the implementation of a foveated display, a straightforward way is to optically combine two display sources 9 , 10 , 11 : one for the foveal and one for the peripheral FoV. This approach can be regarded as spatial multiplexing of displays. Alternatively, time multiplexing can be adopted by temporally changing the optical path to produce different magnification factors for the corresponding FoV 12 . Finally, another approach without multiplexing is to use a specially designed lens with intentional distortion to achieve a non-uniform resolution density 13 . Aside from the implementation of foveation, another great challenge is to dynamically steer the foveated region as the viewer's eye moves. This task is strongly related to pupil steering, which will be discussed in detail later.

A larger eyebox or FoV usually decreases the image brightness, which often lowers the ACR. This is exactly the case for a waveguide AR system with exit pupil expansion (EPE) operating under strong ambient light. To improve the ACR, one approach is to dynamically adjust the transmittance with a tunable dimmer 14 , 15 . Another solution is to directly boost the image brightness with a high-luminance microdisplay and efficient combiner optics. Details of this topic will be discussed in the light engine section.

Another tradeoff between FoV and eyebox in geometric optical systems results from the conservation of etendue (or optical invariant). Increasing the system etendue requires larger optics, which in turn compromises the form factor. Finally, to address the VAC issue, the display system needs to generate a proper accommodation cue, which often requires modulation of the image depth or wavefront, neither of which can be easily achieved in a traditional geometric optical system. While remarkable progress has been made in adopting freeform surfaces 16 , 17 , 18 , further advancing AR and VR systems requires additional novel optics with a higher degree of freedom in structure design and light modulation. Moreover, the employed optics should be thin and lightweight. To mitigate the above-mentioned challenges, diffractive optics is a strong contender. Unlike geometric optics, which relies on curved surfaces to refract or reflect light, diffractive optics only requires a thin layer of several micrometers to establish efficient light diffraction. Two major types of diffractive optics are HOEs based on wavefront recording and manually designed devices like surface relief gratings (SRGs) based on lithography. While SRGs offer large design freedom in the local grating geometry, a recent publication 19 indicates that the combination of HOEs and freeform optics can also offer great potential for arbitrary wavefront generation. Furthermore, advances in lithography have also enabled optical metasurfaces beyond diffractive and refractive optics, as well as miniature display panels like micro-LEDs (light-emitting diodes). These devices hold the potential to boost the performance of current AR/VR displays while keeping a lightweight and compact form factor.

Formation and properties of HOEs

An HOE generally refers to a recorded hologram that reproduces the original light wavefront. The concept of holography was proposed by Dennis Gabor 20 ; it refers to the process of recording a wavefront in a medium (hologram) and later reconstructing it with a reference beam. Early holography used intensity-sensitive recording materials like silver halide emulsion, dichromated gelatin, and photopolymer 21 . Among them, photopolymer stands out due to its easy fabrication and ability to capture high-fidelity patterns 22 , 23 . It has therefore found extensive applications like holographic data storage 23 and displays 24 , 25 . Photopolymer HOEs (PPHOEs) have a relatively small refractive index modulation and therefore exhibit strong selectivity in wavelength and incident angle. Another feature of PPHOEs is that several holograms can be recorded into one photopolymer film by consecutive exposures. Later, liquid-crystal holographic optical elements (LCHOEs) based on photoalignment polarization holography were also developed 25 , 26 . Due to the inherent anisotropy of liquid crystals, LCHOEs are extremely sensitive to the polarization state of the input light. This feature, combined with the polarization modulation ability of liquid crystal devices, offers a new possibility for dynamic wavefront modulation in display systems.

The formation of a PPHOE is illustrated in Fig. 3a . When exposed to an interference field with high- and low-intensity fringes, monomers tend to move toward the bright fringes due to the higher local monomer-consumption rate. As a result, the density and refractive index are slightly larger in the bright regions. Note that the index modulation δ n here is defined as the difference between the maximum and minimum refractive indices, which may be twice the value in other definitions 27 . The index modulation δ n is typically in the range of 0–0.06. To understand the optical properties of PPHOEs, we simulate a transmissive grating and a reflective grating using rigorous coupled-wave analysis (RCWA) 28 , 29 and plot the results in Fig. 3b . Details of the grating configurations can be found in Table S1 . The reason for simulating only gratings is that, for a general HOE, the local region can be treated as a grating; observations of gratings therefore offer general insight into HOEs. For a transmissive grating, the angular bandwidth (efficiency > 80%) is around 5° ( λ  = 550 nm), while the spectral band is relatively broad, with a bandwidth of around 175 nm (7° incidence). For a reflective grating, the spectral band is narrow, with a bandwidth of around 10 nm. The angular bandwidth varies with the wavelength, ranging from 2° to 20°. The strong selectivity of PPHOEs in wavelength and incident angle is directly related to their small δ n , which can be adjusted by controlling the exposure dosage.
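As a rough analytical cross-check of these RCWA results, Kogelnik's two-wave coupled-wave theory gives closed-form peak efficiencies for thick volume gratings at Bragg incidence. This is a simplified sketch with illustrative parameter values, not the RCWA model used for Fig. 3b:

```python
import math

def transmission_efficiency(delta_n, thickness_um, wavelength_um, theta_rad=0.0):
    # Kogelnik result for an unslanted transmission grating at Bragg incidence:
    # eta = sin^2(pi * dn * d / (lambda * cos(theta)))
    nu = math.pi * delta_n * thickness_um / (wavelength_um * math.cos(theta_rad))
    return math.sin(nu) ** 2

def reflection_efficiency(delta_n, thickness_um, wavelength_um):
    # Reflection grating at Bragg incidence: eta = tanh^2(pi * dn * d / lambda)
    nu = math.pi * delta_n * thickness_um / wavelength_um
    return math.tanh(nu) ** 2

# A small delta_n still reaches full efficiency with a thick enough film:
print(transmission_efficiency(0.0275, 10, 0.55))  # ~1.0 (dn*d/lambda = 0.5)
```

The selectivity trend follows from the same model: detuning from the Bragg condition enters as a dephasing term proportional to thickness, so the thick gratings required by a small δn are strongly wavelength- and angle-selective.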

figure 3

a Schematic of the formation of PPHOE. Simulated efficiency plots for b1 transmissive and b2 reflective PPHOEs. c Working principle of multiplexed PPHOE. d Formation and molecular configurations of LCHOEs. Simulated efficiency plots for e1 transmissive and e2 reflective LCHOEs. f Illustration of polarization dependency of LCHOEs

A distinctive feature of PPHOE is the ability to multiplex several holograms into one film sample. If the exposure dosage of a recording process is controlled so that the monomers are not completely depleted in the first exposure, the remaining monomers can continue to form another hologram in the following recording process. Because the total amount of monomer is fixed, there is usually an efficiency tradeoff between multiplexed holograms. The final film sample would exhibit the wavefront modulation functions of multiple holograms (Fig. 3c ).

Liquid crystals have also been used to form HOEs. LCHOEs can generally be categorized into volume-recording type and surface-alignment type. Volume-recording type LCHOEs are either based on early polarization holography recordings with azo-polymer 30 , 31 , or holographic polymer-dispersed liquid crystals (HPDLCs) 32 , 33 formed by liquid-crystal-doped photopolymer. Surface-alignment type LCHOEs are based on photoalignment polarization holography (PAPH) 34 . The first step is to record the desired polarization pattern in a thin photoalignment layer, and the second step is to use it to align the bulk liquid crystal 25 , 35 . Due to the simple fabrication process, high efficiency, and low scattering from the liquid crystal's self-assembly nature, surface-alignment type LCHOEs based on PAPH have recently attracted increasing interest in applications like near-eye displays. Here, we shall focus on this surface-alignment type and refer to it simply as LCHOE hereafter.

The formation of LCHOEs is illustrated in Fig. 3d . The information of the wavefront and the local diffraction pattern is recorded in a thin photoalignment layer. The bulk liquid crystal deposited on the photoalignment layer forms a transmissive or a reflective LCHOE, depending on whether it is a nematic liquid crystal or a cholesteric liquid crystal (CLC). In a transmissive LCHOE, the bulk nematic liquid crystal molecules generally follow the pattern of the bottom alignment layer. The smallest allowable pattern period is governed by the liquid crystal distortion free energy model, which predicts that the pattern period should generally be larger than the sample thickness 36 , 37 . This results in a maximum diffraction angle under 20°. On the other hand, in a reflective LCHOE 38 , 39 , the bulk CLC molecules form a stable helical structure, which is tilted to match the k -vector of the bottom pattern. The structure exhibits a very low distortion free energy 40 , 41 and can accommodate a pattern period small enough to diffract light into the total internal reflection (TIR) regime of a glass substrate.

The diffraction property of LCHOEs is shown in Fig. 3e . The maximum refractive index modulation of LCHOE is equal to the liquid crystal birefringence (Δ n ), which may vary from 0.04 to 0.5, depending on the molecular conjugation 42 , 43 . The birefringence used in our simulation is Δ n  = 0.15. Compared to PPHOEs, the angular and spectral bandwidths are significantly larger for both transmissive and reflective LCHOEs. For a transmissive LCHOE, its angular bandwidth is around 20° ( λ  = 550 nm), while the spectral bandwidth is around 300 nm (7° incidence). For a reflective LCHOE, its spectral bandwidth is around 80 nm and angular bandwidth could vary from 15° to 50°, depending on the wavelength.

The anisotropic nature of liquid crystals leads to the LCHOE's unique polarization-dependent response to incident light. As depicted in Fig. 3f , for a transmissive LCHOE the accumulated phase is opposite for the conjugated left-handed circular polarization (LCP) and right-handed circular polarization (RCP) states, leading to reversed diffraction directions. For a reflective LCHOE, the polarization dependency is similar to that of a normal CLC: for circularly polarized light with the same handedness as the helical structure of the CLC, the diffraction is strong; for the opposite circular polarization, the diffraction is negligible.
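The opposite phases for LCP and RCP can be verified with a small Jones-calculus sketch: a local half-wave retarder whose optic axis sits at angle φ imparts a geometric phase of ±2φ, with opposite signs for the two circular polarizations. The code below is illustrative (handedness labels depend on convention):

```python
import cmath
import math

def half_wave_plate(phi):
    # Jones matrix of a half-wave plate with its fast axis at angle phi
    c, s = math.cos(2 * phi), math.sin(2 * phi)
    return ((c, s), (s, -c))

def apply(jones, v):
    return (jones[0][0] * v[0] + jones[0][1] * v[1],
            jones[1][0] * v[0] + jones[1][1] * v[1])

def inner(a, b):
    # Hermitian inner product <a|b>
    return a[0].conjugate() * b[0] + a[1].conjugate() * b[1]

SQ2 = math.sqrt(2)
lcp = (1 / SQ2, 1j / SQ2)   # one circular polarization state
rcp = (1 / SQ2, -1j / SQ2)  # the orthogonal circular state

phi = 0.3  # local LC director angle (radians)
# Each circular state converts to its conjugate with phase +/- 2*phi:
phase_a = cmath.phase(inner(rcp, apply(half_wave_plate(phi), lcp)))  # ~ +0.6
phase_b = cmath.phase(inner(lcp, apply(half_wave_plate(phi), rcp)))  # ~ -0.6
print(phase_a, phase_b)
```

A linearly varying director angle φ(x) therefore writes opposite phase gradients onto the two circular polarizations, producing the reversed diffraction directions described above.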

Another distinctive property of liquid crystal is its dynamic response to an external voltage. The LC reorientation can be controlled with a relatively low voltage (<10 V rms ) and the response time is on the order of milliseconds, depending mainly on the LC viscosity and layer thickness. Methods to dynamically control LCHOEs can be categorized as active addressing and passive addressing, which can be achieved by either directly switching the LCHOE or modulating the polarization state with an active waveplate. Detailed addressing methods will be described in the VAC section.

Lithography-enabled devices

Lithography technologies are used to create arbitrary patterns on wafers, laying the foundation of the modern integrated circuit industry 44 . Photolithography is suitable for mass production, while electron/ion beam lithography is usually used to create photomasks for photolithography or to write structures with nanometer-scale feature sizes. Recent advances in lithography have enabled engineered structures like optical metasurfaces 45 , SRGs 46 , as well as micro-LED displays 47 . Metasurfaces exhibit remarkable design freedom by varying the shape of meta-atoms, which can be utilized to achieve novel functions like achromatic focusing 48 and beam steering 49 . Similarly, SRGs also offer large design freedom by manipulating the geometry of local grating regions to realize the desired optical properties. On the other hand, micro-LEDs exhibit several unique features, such as ultrahigh peak brightness, small aperture ratio, excellent stability, and nanosecond response time. As a result, micro-LED is a promising candidate for AR and VR systems, offering a high ACR and a high frame rate for suppressing motion blur. In the following sections, we will briefly review the fabrication and properties of micro-LEDs and optical modulators like metasurfaces and SRGs.

Fabrication and properties of micro-LEDs

LEDs with a chip size larger than 300 μm have been widely used in solid-state lighting and public information displays. Recently, micro-LEDs with chip sizes <5 μm have been demonstrated 50 . The first micro-LED disc with a diameter of about 12 µm was demonstrated in 2000 51 . After that, a single-color (blue or green) LED microdisplay was demonstrated in 2012 52 . The high peak brightness, fast response time, true dark state, and long lifetime of micro-LEDs are attractive for display applications. Therefore, many companies have since released micro-LED prototypes or products, ranging from large-size TVs to small-size microdisplays for AR/VR applications 53 , 54 . Here, we focus on micro-LEDs for near-eye display applications. Regarding fabrication, through the metal-organic chemical vapor deposition (MOCVD) method, an AlGaInP epitaxial layer is grown on a GaAs substrate for red LEDs, and GaN epitaxial layers are grown on sapphire substrates for green and blue LEDs. Next, a photolithography process is applied to define the mesas and deposit electrodes. To drive the LED array, the fabricated micro-LEDs are transferred to a CMOS (complementary metal-oxide-semiconductor) driver board. For a small (<2 inches) microdisplay used in AR or VR, the precision of the pick-and-place transfer process can hardly meet the high resolution density (>1000 pixels per inch) requirement. Thus, the main approach to assembling LED chips with driving circuits is flip-chip bonding 50 , 55 , 56 , 57 , as Fig. 4a depicts. In flip-chip bonding, the mesas and electrode pads should be defined and deposited before the transfer process, while metal bonding balls should be preprocessed on the CMOS substrate. After that, a thermal-compression method is used to bond the two wafers together. However, due to the thermal mismatch between the LED chip and the driving board, as the pixel size decreases, the misalignment between the LED chip and the metal bonding ball on the CMOS substrate becomes serious. In addition, the common n-GaN layer may cause optical crosstalk between pixels, which degrades the image quality. To overcome these issues, the LED epitaxial layer can first be metal-bonded to the silicon driver board, followed by a photolithography process to define the LED mesas and electrodes. Without the need for an alignment process, the pixel size can be reduced to <5 µm 50 .

figure 4

a Illustration of flip-chip bonding technology. b Simulated IQE-LED size relations for red and blue LEDs based on ABC model. c Comparison of EQE of different LED sizes with and without KOH and ALD side wall treatment. d Angular emission profiles of LEDs with different sizes. Metasurfaces based on e resonance-tuning, f non-resonance tuning and g combination of both. h Replication master and i replicated SRG based on nanoimprint lithography. Reproduced from a ref. 55 with permission from AIP Publishing, b ref. 61 with permission from PNAS, c ref. 66 with permission from IOP Publishing, d ref. 67 with permission from AIP Publishing, e ref. 69 with permission from OSA Publishing f ref. 48 with permission from AAAS g ref. 70 with permission from AAAS and h , i ref. 85 with permission from OSA Publishing

In addition to the manufacturing process, the electrical and optical characteristics of an LED also depend on the chip size. Generally, due to Shockley-Read-Hall (SRH) non-radiative recombination on the sidewalls of the active area, a smaller LED chip size results in a lower internal quantum efficiency (IQE), and the peak-IQE driving point moves toward a higher current density due to the increased ratio of sidewall surface to active volume 58 , 59 , 60 . In addition, compared to GaN-based green and blue LEDs, AlGaInP-based red LEDs, with their larger surface recombination and carrier diffusion length, suffer a more severe efficiency drop 61 , 62 . Figure 4b shows the simulated IQE drop in relation to the LED chip size for blue and red LEDs based on the ABC model 63 . To alleviate the efficiency drop caused by sidewall defects, depositing passivation materials by atomic layer deposition (ALD) or plasma-enhanced chemical vapor deposition (PECVD) has proven helpful for both GaN- and AlGaInP-based LEDs 64 , 65 . In addition, applying KOH (potassium hydroxide) treatment after ALD can further reduce the EQE drop of micro-LEDs 66 (Fig. 4c ). Small-size LEDs also exhibit some advantages, such as a higher light extraction efficiency (LEE). Compared to a 100-µm LED, the LEE of a 2-µm LED increases from 12.2% to 25.1% 67 . Moreover, the radiation pattern of a micro-LED is more directional than that of a large-size LED (Fig. 4d ), which helps to improve the lens collection efficiency in AR/VR display systems.
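The size-dependent IQE behavior can be sketched with the ABC model itself; the coefficient values below are illustrative orders of magnitude, with a larger SRH coefficient A standing in for the increased sidewall recombination of a smaller chip:

```python
import math

def iqe(n, a, b=1e-10, c=1e-29):
    # ABC model (n in cm^-3): IQE = B*n^2 / (A*n + B*n^2 + C*n^3)
    return b * n**2 / (a * n + b * n**2 + c * n**3)

def peak_density(a, c=1e-29):
    # Setting d(IQE)/dn = 0 gives the peak at n = sqrt(A/C)
    return math.sqrt(a / c)

# A larger chip (less sidewall SRH) vs. a smaller chip (more sidewall SRH):
a_large, a_small = 1e6, 1e7  # s^-1, illustrative
print(iqe(peak_density(a_large), a_large))  # higher peak IQE at lower density...
print(iqe(peak_density(a_small), a_small))  # ...lower peak IQE at higher density
```

The closed form at the peak, IQE_max = B/(B + 2√(AC)), makes both trends explicit: increasing A lowers the peak IQE and shifts the peak carrier density upward as √A.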

Metasurfaces and SRGs

Thanks to the advances in lithography technology, low-loss dielectric metasurfaces working in the visible band have recently emerged as a platform for wavefront shaping 45 , 48 , 68 . They consist of an array of subwavelength-spaced structures with individually engineered wavelength-dependent polarization/phase/amplitude responses. In general, the light modulation mechanisms can be classified into resonant tuning 69 (Fig. 4e ), non-resonant tuning 48 (Fig. 4f ), and a combination of both 70 (Fig. 4g ). In comparison with non-resonant tuning (based on the geometric phase and/or dynamic propagation phase), resonant tuning (such as Fabry–Pérot resonance, Mie resonance, etc.) is usually associated with a narrower operating bandwidth and a smaller out-of-plane aspect ratio (height/width) of the nanostructures. As a result, resonant structures are easier to fabricate but more sensitive to fabrication tolerances. For both types, materials with a higher refractive index and lower absorption loss are beneficial for reducing the aspect ratio of the nanostructures and improving the device efficiency. To this end, titanium dioxide (TiO 2 ) and gallium nitride (GaN) are the major choices for operation across the entire visible band 68 , 71 . While small metasurfaces (diameter <1 mm) are usually fabricated via electron-beam lithography or focused ion beam milling in the lab, the ability to mass produce them is the key to their practical adoption. Deep ultraviolet (UV) photolithography has proven feasible for reproducing centimeter-size metalenses with decent imaging performance, although it requires multiple etching steps 72 . Interestingly, the recently developed UV nanoimprint lithography based on a high-index nanocomposite takes only a single step and can obtain aspect ratios larger than 10, showing great promise for high-volume production 73 .

The arbitrary wavefront shaping capability and thinness of metasurfaces have aroused strong research interest in developing novel AR/VR prototypes with improved performance. Lee et al. employed nanoimprint lithography to fabricate a centimeter-size, geometric-phase metalens eyepiece for full-color AR displays 74 . By tailoring its polarization conversion efficiency and stacking it with a circular polarizer, the virtual image can be superimposed on the surrounding scene. The large numerical aperture (NA ~ 0.5) of the metalens eyepiece enables a wide FoV (>76°) that is difficult to obtain with conventional optics. However, the geometric-phase metalens is intrinsically a diffractive lens that also suffers from strong chromatic aberrations. To overcome this issue, an achromatic lens can be designed by simultaneously engineering the group delay and the group delay dispersion 75 , 76 , which will be described in detail later. Other novel and/or improved near-eye display architectures include metasurface-based contact-lens-type AR 77 , achromatic-metalens-array-enabled integral-imaging light field displays 78 , wide-FoV lightguide AR with polarization-dependent metagratings 79 , and off-axis projection-type AR with an aberration-corrected metasurface combiner 80 , 81 , 82 . Nevertheless, judging from the existing AR/VR prototypes, metasurfaces still face a strong tradeoff among numerical aperture (for metalenses), chromatic aberration, monochromatic aberration, efficiency, aperture size, and fabrication complexity.

On the other hand, SRGs are diffractive gratings that have been researched for decades as input/output couplers of waveguides 83 , 84 . Their surface is composed of corrugated microstructures, and different shapes, including binary, blazed, slanted, and even analog profiles, can be designed. The parameters of the corrugated microstructures are determined by the target diffraction order, the operational spectral bandwidth, and the angular bandwidth. Compared to metasurfaces, SRGs have a much larger feature size and thus can be fabricated via UV photolithography and subsequent etching. They are usually replicated by nanoimprint lithography with appropriate heating and surface treatment. According to a report published a decade ago, SRGs with a height of 300 nm and a slant angle of up to 50° can be faithfully replicated with high yield and reproducibility 85 (Fig. 4h, i ).
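The role of an SRG as a waveguide input coupler can be illustrated with the grating equation followed by a TIR check; the pitch, wavelength, and glass index below are illustrative:

```python
import math

def coupled_angle_deg(theta_in_deg, wavelength_nm, period_nm, n_glass=1.8, m=1):
    # Grating equation for coupling into the substrate:
    # n_glass * sin(theta_d) = sin(theta_in) + m * lambda / period
    s = (math.sin(math.radians(theta_in_deg)) + m * wavelength_nm / period_nm) / n_glass
    if abs(s) > 1:
        return None  # evanescent: no propagating diffracted order
    return math.degrees(math.asin(s))

def is_guided(theta_d_deg, n_glass=1.8):
    # Guided by TIR if theta_d exceeds the critical angle asin(1/n)
    return math.sin(math.radians(theta_d_deg)) > 1.0 / n_glass

theta = coupled_angle_deg(0.0, 532, 400)  # 532 nm at normal incidence, 400 nm pitch
print(round(theta, 1), is_guided(theta))  # ~47.6 deg, guided
```

A much coarser pitch (e.g., 2 µm) diffracts to only ~8.5°, below the ~33.7° critical angle of n = 1.8 glass, which is why waveguide couplers need sub-wavelength pitches.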

Challenges and solutions of VR displays

The fully immersive nature of VR headsets leads to a relatively fixed configuration in which the display panel is placed in front of the viewer's eye and imaging optics are placed in between. Regarding system performance, although inadequate angular resolution still exists in some current VR headsets, the improvement of display panel resolution through advanced fabrication processes is expected to progressively solve this issue. Therefore, in the following discussion, we will mainly focus on two major challenges: form factor and 3D cue generation.

Form factor

Compact and lightweight near-eye displays are essential for a comfortable user experience and are therefore highly desirable in VR headsets. Current mainstream VR headsets usually have a considerably larger volume than eyeglasses, and most of that volume is simply empty. This is because a certain distance is required between the display panel and the viewing optics, usually close to the focal length of the lens system, as illustrated in Fig. 5a . Conventional VR headsets employ a transmissive lens with a ~4 cm focal length to offer a large FoV and eyebox. Fresnel lenses are thinner than conventional ones, but the distance required between the lens and the panel does not change significantly. In addition, the diffraction artifacts and stray light caused by the Fresnel grooves can degrade the image quality, or MTF. Although the resolution density, quantified in pixels per inch (PPI), of current VR headsets is still limited, the Fresnel lens will eventually not be an ideal solution once high-PPI displays become available. The strong chromatic aberration of a Fresnel singlet should also be compensated if a high-quality imaging system is desired.
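The panel-to-lens distance argument follows directly from the thin-lens equation; a minimal sketch with illustrative numbers:

```python
def image_distance_cm(obj_dist_cm, focal_cm):
    # Thin-lens equation 1/s + 1/s' = 1/f, solved for s'.
    # A negative result means a virtual image, as used in a magnifier eyepiece.
    return obj_dist_cm * focal_cm / (obj_dist_cm - focal_cm)

# Panel placed slightly inside the ~4 cm focal length of a typical VR eyepiece:
print(image_distance_cm(3.8, 4.0))  # ~ -76: a virtual image roughly 76 cm away
```

Because the panel must sit near the focal plane to throw the virtual image out to a comfortable viewing distance, the ~4 cm air gap, and hence most of the headset volume, is dictated by the eyepiece focal length.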

figure 5

a Schematic of a basic VR optical configuration. b Achromatic metalens used as VR eyepiece. c VR based on curved display and lenslet array. d Basic working principle of a VR display based on pancake optics. e VR with pancake optics and Fresnel lens array. f VR with pancake optics based on purely HOEs. Reprinted from b ref. 87 under the Creative Commons Attribution 4.0 License. Adapted from c ref. 88 with permission from IEEE, e ref. 91 and f ref. 92 under the Creative Commons Attribution 4.0 License

It is tempting to replace the refractive elements with a single thin diffractive lens like a transmissive LCHOE. However, the diffractive nature of such a lens will result in serious color aberrations. Interestingly, metalenses can fulfill this objective without color issues. To understand how metalenses achieve achromatic focus, let us first take a glance at the general lens phase profile \(\Phi (\omega ,r)\) expanded as a Taylor series 75 :

\(\Phi \left( {\omega ,r} \right) = \varphi _0\left( \omega \right) - \frac{\omega }{c}\left( {\sqrt {r^2 + F\left( \omega \right)^2} - F\left( \omega \right)} \right) \approx \Phi \left( {\omega _0,r} \right) + \left. {\frac{{\partial \Phi }}{{\partial \omega }}} \right|_{\omega _0}\left( {\omega - \omega _0} \right) + \frac{1}{2}\left. {\frac{{\partial ^2\Phi }}{{\partial \omega ^2}}} \right|_{\omega _0}\left( {\omega - \omega _0} \right)^2 + \cdots\)

where \(\varphi _0(\omega )\) is the phase at the lens center, \(F\left( \omega \right)\) is the focal length as a function of frequency ω , r is the radial coordinate, and \(\omega _0\) is the central operation frequency. To realize achromatic focus, \(\partial F{{{\mathrm{/}}}}\partial \omega\) should be zero. With a designed focal length, the group delay \(\partial \Phi (\omega ,r){{{\mathrm{/}}}}\partial \omega\) and the group delay dispersion \(\partial ^2\Phi (\omega ,r){{{\mathrm{/}}}}\partial \omega ^2\) can be determined, and \(\varphi _0(\omega )\) is an auxiliary degree of freedom in the phase profile design. In the design of an achromatic metalens, the group delay is a function of the radial coordinate and monotonically increases with the metalens radius. Many designs have shown that the group delay has a limited variation range 75 , 76 , 78 , 86 . According to Shrestha et al. 86 , there is an inevitable tradeoff among the maximum radius of the metalens, the NA, and the operation bandwidth. Thus, the reported achromatic metalenses in the visible usually have a limited lens aperture (e.g., diameter < 250 μm) and NA (e.g., <0.2). Such a tradeoff is undesirable in VR displays, as the eyepiece favors a large clear aperture (inch size) and a reasonably high NA (>0.3) to maintain a wide FoV and a reasonable eye relief 74 .
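The group-delay requirement can be made concrete: for an ideal lens with a frequency-independent focal length, the delay spread across the aperture is (√(R² + F²) − F)/c, and with NA = R/√(R² + F²) it grows quickly with both aperture and NA. A minimal sketch with illustrative values:

```python
import math

C_UM_PER_FS = 0.299792458  # speed of light in micrometers per femtosecond

def group_delay_range_fs(radius_um, na):
    # Delay spread between marginal and axial rays: (sqrt(R^2 + F^2) - F) / c,
    # using sqrt(R^2 + F^2) = R / NA
    hyp = radius_um / na
    focal = math.sqrt(hyp**2 - radius_um**2)
    return (hyp - focal) / C_UM_PER_FS

print(group_delay_range_fs(125, 0.2))    # ~42 fs for a 250-um-diameter, NA 0.2 lens
print(group_delay_range_fs(12700, 0.3))  # inch-scale aperture: thousands of fs
```

Since a meta-atom library can cover only a limited group-delay range, the achievable aperture-NA product is capped, which is the tradeoff identified by Shrestha et al. 86 and the motivation for the zone-division approach discussed next.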

To overcome this limitation, Li et al. 87 proposed a novel zone lens method. Unlike the traditional phase Fresnel lens, where the zones are determined by phase resets, the new approach divides the zones by group delay resets. In this way, the lens aperture and NA can be greatly enlarged, and the group delay limit is bypassed. A notable side effect of this design is the phase discontinuity at zone boundaries, which contributes to higher-order focusing. Therefore, significant effort has been devoted to finding the optimal zone transition locations and minimizing the phase discontinuities. Using this method, they demonstrated an impressive 2-mm-diameter metalens with NA = 0.7 and nearly diffraction-limited focusing at the design wavelengths (488, 532, 658 nm) (Fig. 5b ). Such a metalens consists of 681 zones and works across the visible band from 470 to 670 nm, though the focusing efficiency is on the order of 10%. This is a great starting point for achromatic metalenses to be employed as compact, chromatic-aberration-free eyepieces in near-eye displays. Future challenges are how to further increase the aperture size, correct the off-axis aberrations, and improve the optical efficiency.

Besides replacing the refractive lens with an achromatic metalens, another way to reduce system focal length without decreasing NA is to use a lenslet array 88 . As depicted in Fig. 5c , both the lenslet array and display panel adopt a curved structure. With the latest flexible OLED panel, the display can be easily curved in one dimension. The system exhibits a large diagonal FoV of 180° with an eyebox of 19 by 12 mm. The geometry of each lenslet is optimized separately to achieve an overall performance with high image quality and reduced distortions.

Aside from shortening the system focal length, another way to reduce the total track is to fold the optical path. Recently, polarization-based folded lenses, also known as pancake optics, have been under active development for VR applications 89 , 90 . Figure 5d depicts the structure of an exemplary singlet pancake VR lens system. Pancake lenses can offer better imaging performance in a compact form factor because there are more degrees of freedom in the design and the actual light path is folded three times. By using a reflective surface with positive power, the field curvature of the positive refractive lenses can be compensated. Also, the reflective surface has no chromatic aberration and contributes considerable optical power to the system. Therefore, the optical power of the refractive lenses can be smaller, resulting in even weaker chromatic aberration. Compared to Fresnel lenses, pancake lenses have smooth surfaces and produce far fewer diffraction artifacts and less stray light. However, such a pancake lens design is not perfect either; its major shortcoming is low light efficiency. With two incidences of light on the half mirror, the maximum system efficiency is limited to 25% for a polarized input and 12.5% for an unpolarized input. Moreover, due to the multiple surfaces in the system, stray light caused by surface reflections and polarization leakage may lead to apparent ghost images. As a result, a catadioptric pancake VR headset usually exhibits darker imagery and lower contrast than the corresponding dioptric VR headset.
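The 25%/12.5% efficiency ceiling quoted above is simple to verify: light transmits through the 50/50 half mirror once and reflects off it once, and unpolarized input loses another half at the entrance polarizer. A minimal sketch:

```python
def pancake_max_efficiency(polarized_input=True):
    # One transmission (50%) and one reflection (50%) at the half mirror
    eta = 0.5 * 0.5
    if not polarized_input:
        eta *= 0.5  # half of unpolarized light is rejected by the input polarizer
    return eta

print(pancake_max_efficiency(True))   # 0.25
print(pancake_max_efficiency(False))  # 0.125
```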

Interestingly, lenslet and pancake optics can be combined to further reduce the system form factor. Bang et al. 91 demonstrated a compact VR system combining pancake optics with a Fresnel lenslet array. The pancake optics serves to fold the optical path between the display panel and the lenslet array (Fig. 5e ). Another Fresnel lens is used to collect the light from the lenslet array. The system has a decent horizontal FoV of 102° and an eyebox of 8 mm. However, a certain degree of image discontinuity and crosstalk is still present, which can be improved with further optimization of the Fresnel lens and the lenslet array.

One step further, replacing all conventional optics in a catadioptric VR headset with holographic optics can make the whole system even thinner. Maimone and Wang demonstrated such a lightweight, high-resolution, and ultra-compact VR optical system using purely HOEs 92 . This holographic VR optics was made possible by combining several innovative optical components, including a reflective PPHOE, a reflective LCHOE, and a PPHOE-based directional backlight with laser illumination, as shown in Fig. 5f . Since all the optical power is provided by the HOEs with negligible weight and volume, the total physical thickness can be reduced to <10 mm. Also, unlike conventional bulk optics, the optical power of an HOE is independent of its thickness and depends only on the recording process. Another advantage of holographic optical devices is that they can be engineered to offer distinct phase profiles for different wavelengths and angles of incidence, adding extra degrees of freedom to the optical design for better imaging performance. Although only a single-color backlight has been demonstrated, such a PPHOE has the potential to achieve a full-color laser backlight through its multiplexing ability. The PPHOE and LCHOE in the pancake optics can also be optimized at different wavelengths to achieve high-quality full-color images.

Vergence-accommodation conflict

Conventional VR displays suffer from VAC, which is a common issue for stereoscopic 3D displays 93 . In current VR display modules, the distance between the display panel and the viewing optics is fixed, which means the VR imagery is displayed at a single depth. However, the image contents are generated by parallax rendering in three dimensions, offering distinct images to the two eyes. This approach provides a proper stimulus to vergence but completely ignores the accommodation cue, which leads to the well-known VAC that can cause an uncomfortable user experience. Since the beginning of this century, numerous methods have been proposed to solve this critical issue. Methods to produce an accommodation cue include multifocal/varifocal displays 94 , holographic displays 95 , and integral imaging displays 96 . Alternatively, eliminating the accommodation cue using a Maxwellian-view display 93 also helps to mitigate the VAC. However, holographic displays and Maxwellian-view displays generally require a totally different optical architecture than current VR systems. They are therefore more suitable for AR displays, which will be discussed later. Integral imaging, on the other hand, has an inherent tradeoff between view number and resolution. For current VR headsets pursuing high resolution to match human visual acuity, it may not be an appealing solution. Therefore, multifocal/varifocal displays that rely on depth modulation are a relatively practical and effective solution for VR headsets. Regarding the working mechanism, multifocal displays present multiple images at different depths to imitate the original 3D scene, whereas varifocal displays show only one image in each time frame, with the image depth matched to the viewer's vergence depth. Nonetheless, knowing the viewer's vergence depth in advance requires an additional eye-tracking module. Despite the different operating principles, a varifocal display can often be converted to a multifocal display as long as the varifocal module has enough modulation bandwidth to support multiple depths within a time frame.

To achieve depth modulation in a VR system, a traditional liquid lens 97 , 98 with tunable focus suffers from a small aperture and large aberrations. The Alvarez lens 99 is another tunable-focus solution, but it requires mechanical adjustment, which adds to the system volume and complexity. In comparison, transmissive LCHOEs with polarization dependency can achieve focus adjustment with electronic driving, and their ultra-thinness satisfies the small form factor required in VR headsets. The diffractive behavior of transmissive LCHOEs is often interpreted through the mechanism of the Pancharatnam-Berry phase (also known as the geometric phase) 100 . They are therefore often called Pancharatnam-Berry optical elements (PBOEs), and the corresponding lens component is referred to as a Pancharatnam-Berry lens (PBL).

Two main approaches are used to switch the focus of a PBL: active addressing and passive addressing. In active addressing, the PBL itself (made of LC) can be switched by an applied voltage (Fig. 6a ), so the optical power of a liquid crystal PBL can be turned on and off electrically. Stacking N active PBLs can produce 2^N depths. The drawback of active PBLs, however, is the limited spectral bandwidth, since their diffraction efficiency is usually optimized at a single wavelength. In passive addressing, depth modulation is achieved by changing the polarization state of the input light with a switchable half-wave plate (HWP) (Fig. 6b ); the focal length can then be switched thanks to the polarization sensitivity of PBLs. Although this approach has a slightly more complicated structure, the overall performance can be better than the active one, because PBLs made of liquid crystal polymer can be designed to manifest high efficiency across the entire visible spectrum 101 , 102 .
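The 2^N depth count can be verified with a short enumeration. This is a minimal sketch with hypothetical, binary-weighted diopter values; real PBL powers depend on the lens design:

```python
from itertools import product

def reachable_depths(powers_diopters):
    """Enumerate the total optical power for every on/off combination
    of N stacked active PBLs (each contributes its power when on, 0 when off)."""
    totals = set()
    for states in product([0, 1], repeat=len(powers_diopters)):
        totals.add(sum(s * p for s, p in zip(states, powers_diopters)))
    return sorted(totals)

# Binary-weighted powers give the full 2^N distinct depths:
depths = reachable_depths([0.5, 1.0, 2.0])  # N = 3 PBLs
print(depths)       # 8 evenly spaced powers from 0 D to 3.5 D
print(len(depths))  # 2**3 = 8
```

Note that the 2^N figure assumes the individual powers are chosen so that no two on/off combinations sum to the same value; binary weighting is the simplest such choice.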

figure 6

Working principles of a depth switching PBL module based on a active addressing and b passive addressing. c A four-depth multifocal display based on time multiplexing. d A two-depth multifocal display based on polarization multiplexing. Reproduced from c ref. 103 with permission from OSA Publishing and d ref. 104 with permission from OSA Publishing

With the PBL module, multifocal displays can be built using a time-multiplexing technique. Zhan et al. 103 demonstrated a four-depth multifocal display using two actively switchable liquid crystal PBLs (Fig. 6c ). The display is synchronized with the PBL module, which lowers the effective frame rate by the number of depths. Alternatively, multifocal displays can also be achieved by polarization multiplexing, as demonstrated by Tan et al. 104 . The basic principle is to adjust the polarization state of local pixels so that the image content on the two focal planes of a PBL can be arbitrarily controlled (Fig. 6d ). The advantage of polarization multiplexing is that it does not sacrifice the frame rate, but it can only support two planes because only two orthogonal polarization states are available. Still, it can be combined with time multiplexing to cut the frame-rate sacrifice in half. Naturally, varifocal displays can also be built with a PBL module; a fast-response 64-depth varifocal module with six PBLs has been demonstrated 105 .
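The frame-rate bookkeeping above can be sketched in a few lines. The 240 Hz panel rate is an illustrative assumption, not a figure from the text:

```python
def perceived_frame_rate(panel_hz, n_depths, polarization_multiplexed=False):
    """Effective per-depth frame rate of a multifocal display.
    Time multiplexing shares the panel frame rate among the depths;
    polarization multiplexing shows two depths per time slot."""
    depths_per_slot = 2 if polarization_multiplexed else 1
    time_slots = -(-n_depths // depths_per_slot)  # ceiling division
    return panel_hz / time_slots

# A hypothetical 240 Hz panel driving four depths:
print(perceived_frame_rate(240, 4))        # 60.0 Hz (pure time multiplexing)
print(perceived_frame_rate(240, 4, True))  # 120.0 Hz (combined with polarization)
```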

The compact structure of the PBL module leads to a natural solution: integrating it with the above-mentioned pancake optics. A compact VR headset with dynamic depth modulation to solve the VAC is therefore practical. Still, due to the inherently diffractive nature of PBLs, the PBL module faces the issue of chromatic dispersion of the focal length. Compensating for the different focal depths of the RGB colors may require additional digital corrections in image rendering.

Architectures of AR displays

Unlike VR displays with a relatively fixed optical configuration, AR displays come in a vast number of architectures. Therefore, instead of following the narrative of tackling different challenges, a more appropriate way to review AR displays is to introduce each architecture separately and discuss its associated engineering challenges. An AR display usually consists of a light engine and an optical combiner. The light engine serves as the image source, while the combiner delivers the displayed images to the viewer's eye and at the same time transmits the environment light. Some performance parameters, like frame rate and power consumption, are mainly determined by the light engine. Parameters like FoV, eyebox, and MTF depend primarily on the combiner optics. Moreover, attributes like image brightness, overall efficiency, and form factor are influenced by both the light engine and the combiner. In this section, we will first discuss the light engine, where the latest advances in micro-LED on chip are reviewed and compared with existing microdisplay systems. Then, we will introduce the two main types of combiners: free-space combiners and waveguide combiners.

Light engine

The light engine determines several essential properties of the AR system, such as image brightness, power consumption, frame rate, and basic etendue. Several types of microdisplays have been used in AR, including micro-LED, micro-organic-light-emitting-diode (micro-OLED), liquid-crystal-on-silicon (LCoS), digital micromirror device (DMD), and laser beam scanning (LBS) based on micro-electromechanical systems (MEMS). We will first describe the working principles of these devices and then analyze their performance. For readers more interested in the final performance parameters than in the details, Table 1 provides a comprehensive summary.

Working principles

Micro-LED and micro-OLED are self-emissive display devices. They are usually more compact than LCoS and DMD because no illumination optics are required. The fundamentally different material systems of LED and OLED lead to different approaches for achieving full-color displays. Due to the “green gap” in LEDs, red LEDs are manufactured on a different semiconductor material from green and blue LEDs. Therefore, achieving a full-color display in high-resolution-density microdisplays is quite a challenge for micro-LEDs. Among the solutions under research, two main approaches stand out. The first is to combine three separate red, green, and blue (RGB) micro-LED microdisplay panels 106 . Three single-color micro-LED microdisplays are manufactured separately through flip-chip transfer technology, and the projected images from the three panels are then integrated by a trichroic prism (Fig. 7a ).

figure 7

a RGB micro-LED microdisplays combined by a trichroic prism. b QD-based micro-LED microdisplay. c Micro-OLED display with 4032 PPI. Working principles of d LCoS, e DMD, and f MEMS-LBS display modules. Reprinted from a ref. 106 with permission from IEEE, b ref. 108 with permission from Chinese Laser Press, c ref. 121 with permission from John Wiley and Sons, d ref. 124 with permission from Springer Nature, e ref. 126 with permission from Springer and f ref. 128 under the Creative Commons Attribution 4.0 License

Another solution is to assemble color-conversion materials like quantum dots (QDs) on top of blue or ultraviolet (UV) micro-LEDs 107 , 108 , 109 (Fig. 7b ). The quantum dot color filter (QDCF) on top of the micro-LED array is mainly fabricated by inkjet printing or photolithography 110 , 111 . However, the performance of color-conversion micro-LED displays is restricted by low color-conversion efficiency, blue light leakage, and color crosstalk. Extensive efforts have been devoted to improving QD-micro-LED performance. To boost QD conversion efficiency, structural designs like the nanoring 112 and nanohole 113 , 114 have been proposed, which utilize the Förster resonance energy transfer mechanism to transfer excess excitons in the LED active region to the QDs. To prevent blue light leakage, methods using color filters or reflectors like the distributed Bragg reflector (DBR) 115 and CLC film 116 on top of the QDCF have been proposed. Compared to color filters that absorb blue light, DBR and CLC films help recycle the leaked blue light to further excite the QDs. Other routes to full-color micro-LED displays, like vertically stacked RGB micro-LED arrays 61 , 117 , 118 and monolithic wavelength-tunable nanowire LEDs 119 , are also under investigation.

Micro-OLED displays can be generally categorized into RGB OLED and white OLED (WOLED). RGB OLED displays have separate sub-pixel structures and optical cavities, which resonate at the desired wavelengths of the RGB channels, respectively. To deposit organic materials onto the separate RGB sub-pixels, a fine metal mask (FMM) that defines the deposition area is required. However, high-resolution RGB OLED microdisplays still face challenges due to the shadow effect during deposition through the FMM. To break this limitation, a silicon nitride film with a small shadow effect has been proposed as a mask for high-resolution deposition above 2000 PPI (9.3 µm pixel pitch) 120 .

WOLED displays use color filters to generate color images. Without the need to deposit patterned organic materials, a resolution density up to 4000 PPI has been achieved 121 (Fig. 7c ). However, compared to RGB OLED, the color filters in WOLED absorb about 70% of the emitted light, which limits the maximum brightness of the microdisplay. To improve the efficiency and peak brightness of WOLED microdisplays, in 2019 Sony proposed applying newly designed cathodes (InZnO) and microlens arrays to OLED microdisplays, which increased the peak brightness from 1600 nits to 5000 nits 120 . In addition, OLEDWORKs has proposed a multi-stacked OLED 122 with optimized microcavities whose emission spectra match the transmission bands of the color filters. The multi-stacked OLED shows a higher luminous efficiency (cd/A) but also requires a higher driving voltage. Recently, by using meta-mirrors as bottom reflective anodes, patterned microcavities with more than 10,000 PPI have been obtained 123 . The high-resolution meta-mirrors generate different reflection phases in the RGB sub-pixels to achieve the desired resonant wavelengths. The narrow emission spectra from the microcavities help reduce the loss from color filters, or even eliminate the need for color filters.

LCoS and DMD are light-modulating displays that generate images by controlling the reflection of each pixel. In LCoS, light modulation is achieved by manipulating the polarization state of the output light through independent control of the liquid crystal reorientation in each pixel 124 , 125 (Fig. 7d ); both phase-only and amplitude modulators have been employed. DMD is an amplitude-modulation device in which modulation is achieved by controlling the tilt angle of bi-stable micromirrors 126 (Fig. 7e ). To generate an image, both LCoS and DMD rely on illumination systems with an LED or laser as the light source. For LCoS, color images can be generated either by RGB color filters on the LCoS (with white LEDs) or by color-sequential addressing (with RGB LEDs or lasers). However, LCoS requires a linearly polarized light source; for an unpolarized LED source, a polarization recycling system 127 is usually implemented to improve the optical efficiency. For a single-panel DMD, color images are mainly obtained through color-sequential addressing. In addition, DMD does not require polarized light, so it generally exhibits a higher efficiency than LCoS when an unpolarized light source is employed.

MEMS-based LBS 128 , 129 utilizes micromirrors to directly scan RGB laser beams and form two-dimensional (2D) images (Fig. 7f ). Different gray levels are achieved by pulse width modulation (PWM) of the laser diodes. In practice, 2D scanning can be achieved either with a single 2D scanning mirror or with two 1D scanning mirrors plus an additional focusing lens after the first mirror. The small size of the MEMS mirror offers a very attractive form factor, and the output image has a large depth of focus (DoF), which is ideal for projection displays. One shortcoming, though, is that the small system etendue often hinders its application in some traditional display systems.

Comparison of light engine performance

There are several important parameters for a light engine, including image resolution, brightness, frame rate, contrast ratio, and form factor. The resolution requirement (>2K) is similar for all types of light engines, and improving resolution is mainly a matter of the manufacturing process. Thus, here we shall focus on the remaining four parameters.

Image brightness usually refers to the measured luminance of a light-emitting object. This measure, however, may not be appropriate for a light engine, because the light from the engine only forms an intermediate image that is not directly viewed by the user. Moreover, focusing solely on the brightness of a light engine can be misleading for a wearable display system like AR: data projectors with thousands of lumens are available today, but their power consumption is far too high for a battery-powered wearable AR display. Therefore, a more appropriate way to evaluate a light engine's brightness is the luminous efficacy (lm/W), obtained by dividing the final output luminous flux (lm) by the input electric power (W). For a self-emissive device like a micro-LED or micro-OLED, the luminous efficacy is directly determined by the device itself. For LCoS and DMD, however, the overall luminous efficacy should take into account the luminous efficacy of the light source, the efficiency of the illumination optics, and the efficiency of the employed spatial light modulator (SLM). For a MEMS LBS engine, the efficiency of the MEMS mirror can be considered unity, so the luminous efficacy essentially equals that of the employed laser sources.

As mentioned earlier, each light engine has a different scheme for generating color images, so we list the luminous efficacy of each scheme separately for a more inclusive comparison. For micro-LEDs, the situation is more complicated because the EQE depends on the chip size. Based on previous studies 130 , 131 , 132 , 133 , we separately calculate the luminous efficacy for RGB micro-LEDs with chip size ≈20 µm. For the direct combination of RGB micro-LEDs, the luminous efficacy is around 5 lm/W. For QD conversion with blue micro-LEDs, the luminous efficacy is around 10 lm/W under the assumption of 100% color-conversion efficiency, which has been demonstrated using structure engineering 114 . For micro-OLEDs, the calculated luminous efficacy is about 4–8 lm/W 120 , 122 . However, the lifetime and EQE of blue OLED materials depend on the driving current, and continuously displaying an image with brightness higher than 10,000 nits may dramatically shorten the device lifetime. The reason we compare light engines at 10,000 nits is that a displayed-image brightness of 1000 nits is highly desirable to keep ACR > 3:1 with a typical AR combiner, whose optical efficiency is lower than 10%.

For an LCoS engine using a white LED as the light source, the typical optical efficiency of the whole engine is around 10% 127 , 134 , so the engine luminous efficacy is estimated to be 12 lm/W with a 120 lm/W white LED source. For a color-sequential LCoS using RGB LEDs, the absorption loss from color filters is eliminated, but the luminous efficacy of the RGB LED source also decreases to about 30 lm/W due to the lower efficiency of red and green LEDs and the higher driving current 135 . Therefore, the final luminous efficacy of a color-sequential LCoS engine is also around 10 lm/W. If RGB linearly polarized lasers are employed instead of LEDs, the LCoS engine efficiency can be quite high due to the high degree of collimation. The luminous efficacy of an RGB laser source is around 40 lm/W 136 , so a laser-based LCoS engine is estimated to reach 32 lm/W, assuming an engine optical efficiency of 80%. For a DMD engine with RGB LEDs as the light source, the optical efficiency is around 50% 137 , 138 , which leads to a luminous efficacy of 15 lm/W. By switching to laser sources, the situation is similar to LCoS, with a luminous efficacy of about 32 lm/W. Finally, for a MEMS-based LBS engine, there is essentially no loss from the optics, so the final luminous efficacy is 40 lm/W. Detailed calculations of luminous efficacy can be found in the Supplementary Information .
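The estimates above are simple products of source efficacy and optical throughput; a short script can tabulate them. The numbers come from the text, while the helper function and labels are ours:

```python
def engine_efficacy(source_lm_per_w, optics_efficiency):
    """Engine luminous efficacy = source efficacy x engine optical throughput."""
    return source_lm_per_w * optics_efficiency

# (source efficacy in lm/W, engine optical efficiency) per the text:
estimates = {
    "LCoS + white LED":  engine_efficacy(120, 0.10),  # ~12 lm/W
    "LCoS + RGB lasers": engine_efficacy(40, 0.80),   # ~32 lm/W
    "DMD + RGB LEDs":    engine_efficacy(30, 0.50),   # ~15 lm/W
    "MEMS LBS + lasers": engine_efficacy(40, 1.00),   # ~40 lm/W
}
for name, lm_per_w in estimates.items():
    print(f"{name}: {lm_per_w:.0f} lm/W")
```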

Another aspect of a light engine is the frame rate, which determines the volume of information it can deliver per unit time. A high information volume is vital for constructing a 3D light field to solve the VAC issue. For micro-LEDs, the device response time is around several nanoseconds, which allows visible light communication with bandwidth up to 1.5 Gbit/s 139 . For OLED microdisplays, a fast OLED with ~200 MHz bandwidth has been demonstrated 140 . Therefore, for both micro-LED and OLED, the frame rate is limited by the driving circuits. Another fact concerning the driving circuit is the tradeoff between resolution and frame rate, as a higher-resolution panel means more scanning lines per frame. So far, an OLED display with a 480 Hz frame rate has been demonstrated 141 . For LCoS, the frame rate is mainly limited by the LC response time. Depending on the LC material, the response time is around 1 ms for nematic LC or 200 µs for ferroelectric LC (FLC) 125 . Nematic LC allows analog driving, which accommodates gray levels, typically with 8-bit depth. FLC is bistable, so PWM is used to generate gray levels. DMD is also a binary device; its frame rate can reach 30 kHz, mainly constrained by the response time of the micromirrors. For MEMS-based LBS, the frame rate is limited by the scanning frequency of the MEMS mirrors. A frame rate of 60 Hz at roughly 1K resolution already requires a resonance frequency of around 50 kHz, with a Q-factor up to 145,000 128 . A higher frame rate or resolution requires an even higher Q-factor and a larger laser modulation bandwidth, which may be challenging.
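For the MEMS-LBS case, an idealized lower bound on the fast-axis mirror frequency follows directly from the line rate. This is a rough sketch only; practical systems need additional margin for overscan and blanking, which is consistent with the ~50 kHz resonance quoted above being higher than this bound:

```python
def horizontal_scan_freq(lines, frame_rate_hz, bidirectional=True):
    """Idealized minimum fast-axis mirror frequency for a raster-scanned
    LBS display. With bidirectional scanning, each mirror period draws
    two lines; blanking and overscan are ignored."""
    lines_per_second = lines * frame_rate_hz
    return lines_per_second / (2 if bidirectional else 1)

# ~1K lines at 60 Hz already demands tens of kHz on the fast axis:
print(horizontal_scan_freq(1080, 60))  # 32400.0 Hz (lower bound)
```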

Form factor is another crucial aspect of light engines for near-eye displays. Among self-emissive displays, both micro-OLEDs and QD-based micro-LEDs can achieve full color with a single panel and are therefore quite compact. A micro-LED display with separate RGB panels naturally has a larger form factor, and in applications requiring a direct-view full-color panel, the extra combining optics may further increase the volume. It should be pointed out, however, that the combining optics may not be necessary for some applications like waveguide displays, because the EPE process makes the system insensitive to the spatial positions of the input RGB images. Therefore, the form factor of using three RGB micro-LED panels is medium. For LCoS and DMD with RGB LEDs as the light source, the form factor is larger due to the illumination optics. Still, if a lower luminous efficacy is acceptable, a smaller form factor can be achieved with simpler optics 142 . If RGB lasers are used, the collimation optics can be eliminated, which greatly reduces the form factor 143 . For MEMS-LBS, the form factor can be extremely compact thanks to the tiny size of the MEMS mirror and laser module.

Finally, the contrast ratio (CR) also plays an important role in the observed image quality 8 . Micro-LEDs and micro-OLEDs are self-emissive, so their CR can exceed 10^6:1. A laser beam scanner can also achieve a CR of 10^6:1 because the laser can be turned off completely in the dark state. On the other hand, LCoS and DMD are reflective displays, and their CR is around 2000:1 to 5000:1 144 , 145 . It is worth pointing out that the CR of a display engine plays a significant role only in dark ambient conditions. As the ambient brightness increases, the ACR is mainly governed by the display's peak brightness, as previously discussed.

The performance parameters of the different light engines are summarized in Table 1 . Micro-LEDs and micro-OLEDs have similar levels of luminous efficacy, but micro-OLEDs still face burn-in and lifetime issues when driven at high current, which somewhat hinders their use as a high-brightness image source. Micro-LEDs are still under active development, and improvements in luminous efficacy can be expected as the fabrication process matures. Both devices have nanosecond response times and can potentially achieve a high frame rate with a well-designed integrated circuit; the frame rate of the driving circuit ultimately determines the motion picture response time 146 . Their self-emissive nature also leads to a small form factor and high contrast ratio. LCoS and DMD engines have similar luminous efficacy, form factor, and contrast ratio. In terms of light modulation, DMD can provide a higher 1-bit frame rate, while LCoS can offer both phase and amplitude modulation. MEMS-based LBS exhibits the highest luminous efficacy so far, along with an excellent form factor and contrast ratio, but the presently demonstrated 60 Hz frame rate (limited by the MEMS mirrors) could cause image flickering.

Free-space combiners

The term ‘free-space’ generally refers to the case in which light propagates freely in space, as opposed to being trapped in a waveguide by total internal reflection (TIR). The combiner can be a partial mirror, as commonly used in AR systems based on traditional geometric optics. Alternatively, the combiner can be a reflective HOE. The strong chromatic dispersion of HOEs necessitates the use of a laser source, which usually leads to a Maxwellian-type system.

Traditional geometric designs

Several systems based on geometric optics are illustrated in Fig. 8 . The simplest design uses a single freeform half-mirror 6 , 147 to directly collimate the displayed images to the viewer's eye (Fig. 8a ). This design can achieve a large FoV (up to 90°) 147 , but the limited design freedom of a single freeform surface leads to image distortions, also called pupil swim 6 . The placement of the half-mirror also results in a relatively bulky form factor. Another design, using so-called birdbath optics 6 , 148 , is shown in Fig. 8b . Compared to the single-combiner design, the birdbath design has extra optics on the display side, which provides room for aberration correction, and the integrated beam splitter folds the optical path, reducing the form factor to some extent. Another way to fold the optical path is to use a TIR prism. Cheng et al. 149 designed a freeform TIR-prism combiner (Fig. 8c ) offering a diagonal FoV of 54° and an exit pupil diameter of 8 mm. All the surfaces are freeform, which offers excellent image quality. To cancel the optical power for the transmitted environmental light, a compensator is added to the TIR prism. The whole system achieves a well-balanced performance among FoV, eyebox, and form factor. To free the space in front of the viewer's eye, relay optics can be used to form an intermediate image near the combiner 150 , 151 , as illustrated in Fig. 8d . Although this design offers more optical surfaces for aberration correction, the extra lenses also add to the system weight and form factor.

figure 8

a Single freeform surface as the combiner. b Birdbath optics with a beam splitter and a half mirror. c Freeform TIR prism with a compensator. d Relay optics with a half mirror. Adapted from c ref. 149 with permission from OSA Publishing and d ref. 151 with permission from OSA Publishing

Regarding approaches to solve the VAC issue, the most straightforward way is to integrate a tunable lens into the optical path, such as a liquid lens 152 or an Alvarez lens 99 , to form a varifocal system. Alternatively, integral imaging 153 , 154 can be used by replacing the original display panel with the central depth plane of an integral imaging module. Integral imaging can also be combined with the varifocal approach to overcome the tradeoff between resolution and depth of field (DoF) 155 , 156 , 157 . However, the inherent tradeoff between resolution and view number still exists in this case.

Overall, AR displays based on traditional geometric optics have a relatively simple design with a decent FoV (~60°) and eyebox (8 mm) 158 , and they exhibit reasonable efficiency. To measure the efficiency of an AR combiner, an appropriate metric is the output luminance (unit: nit) divided by the input luminous flux (unit: lm), which we call the combiner efficiency. For a fixed input luminous flux, the output luminance, or image brightness, is related to the FoV and exit pupil of the combiner system. Assuming a lossless combiner system, the maximum combiner efficiency for a typical diagonal FoV of 60° and a 10 mm square exit pupil is around 17,000 nit/lm (Eq. S2 ). To estimate the combiner efficiency of geometric combiners, we assume a half-mirror transmittance of 50% and an efficiency of 50% for the other optics. The final combiner efficiency is then about 4200 nit/lm, which is high in comparison with waveguide combiners. Nonetheless, further shrinking the system size or improving the system performance ultimately runs into the etendue conservation issue. In addition, it is hard for AR systems based on traditional geometric optics to achieve a configuration resembling normal flat glasses, because the half-mirror has to be tilted to some extent.
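The ~4200 nit/lm estimate is just the lossless limit quoted above scaled by each component's throughput, as this trivial sketch shows (the 17,000 nit/lm figure is taken from the text, not derived here):

```python
def combiner_efficiency(ideal_nit_per_lm, *component_efficiencies):
    """Scale the lossless combiner efficiency by each component's throughput."""
    eff = ideal_nit_per_lm
    for c in component_efficiencies:
        eff *= c
    return eff

# Ideal value for a 60-degree diagonal FoV and 10 mm square exit pupil,
# degraded by a 50% half-mirror and ~50% for the remaining optics:
print(combiner_efficiency(17_000, 0.5, 0.5))  # 4250.0 nit/lm, i.e. ~4200
```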

Maxwellian-type systems

The Maxwellian view, proposed by James Clerk Maxwell in 1860, refers to imaging a point light source into the eye pupil 159 . If the light beam is modulated during the imaging process, a corresponding image can be formed on the retina (Fig. 9a ). Because the point source is much smaller than the eye pupil, the image is always in focus on the retina irrespective of the eye lens' focus. For AR display applications, the point source is usually a laser with narrow angular and spectral bandwidths. LED light sources can also be used to build a Maxwellian system by adding an angular filtering module 160 . As for the combiner, although in theory a half-mirror could also be used, HOEs are generally preferred because they offer an off-axis configuration that places the combiner in a position similar to that of eyeglasses. In addition, HOEs reflect less environment light, giving the user behind the display a more natural appearance.

figure 9

a Schematic of the working principle of Maxwellian displays. Maxwellian displays based on b SLM and laser diode light source and c MEMS-LBS with a steering mirror as additional modulation method. Generation of depth cues by d computational digital holography and e scanning of steering mirror to produce multiple views. Adapted from b, d ref. 143 and c, e ref. 167 under the Creative Commons Attribution 4.0 License

To modulate the light, an SLM like LCoS or DMD can be placed in the light path, as shown in Fig. 9b . Alternatively, an LBS system can also be used (Fig. 9c ), where the intensity modulation occurs in the laser diode itself. Besides operating as a normal Maxwellian view, both implementations offer additional degrees of freedom for light modulation.

For an SLM-based system, there are several options for arranging the SLM pixels 143 , 161 . Maimone et al. 143 demonstrated a Maxwellian AR display with two modes offering either a large-DoF Maxwellian view or a holographic view (Fig. 9d ), the latter often referred to as computer-generated holography (CGH) 162 . To show an always-in-focus image with a large DoF, the image can be directly displayed on an amplitude SLM, or via amplitude encoding on a phase-only SLM 163 . Alternatively, if a 3D scene with correct depth cues is to be presented, optimization algorithms for CGH can be used to generate a hologram for the SLM. The generated holographic image exhibits the natural focus-and-blur effect of a real 3D object (Fig. 9d ). To better understand this feature, we need to again invoke the concept of etendue. The laser light source can be considered to have a very small etendue due to its excellent collimation, so the system etendue is provided by the SLM. The micron-sized pixel pitch of the SLM sets a maximum diffraction angle, which, multiplied by the SLM size, gives the system etendue. By varying the displayed content on the SLM, the final exit pupil size can be changed accordingly. In the large-DoF Maxwellian-view mode, the exit pupil is small, accompanied by a large FoV. In the holographic display mode, the reduced DoF requires a larger exit pupil, with dimensions close to the eye pupil, but the FoV is reduced accordingly due to etendue conservation. Another common concern with CGH is the computation time: achieving a real-time CGH rendering flow with excellent image quality is quite a challenge. Fortunately, with recent advances in algorithms 164 and the introduction of convolutional neural networks (CNNs) 165 , 166 , this issue is being solved at an encouraging pace. Lately, Liang et al. 166 demonstrated a real-time CGH synthesis pipeline with high image quality.
The pipeline comprises an efficient CNN model that generates a complex hologram from a 3D scene and an improved encoding algorithm that converts the complex hologram to a phase-only one. An impressive frame rate of 60 Hz has been achieved on a desktop computing unit.
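The SLM etendue argument can be made concrete with the grating equation evaluated at the two-pixel Nyquist period, sin(theta) = lambda / (2p). This is a rough sketch; the 532 nm wavelength and 4 µm pitch are illustrative assumptions, not values from the text:

```python
import math

def max_diffraction_half_angle_deg(wavelength_m, pixel_pitch_m):
    """Largest half-angle a pixelated SLM can deflect light to, from the
    grating equation at the finest displayable grating (period = 2 pixels):
    sin(theta) = wavelength / (2 * pitch)."""
    return math.degrees(math.asin(wavelength_m / (2 * pixel_pitch_m)))

# Green light on a hypothetical 4 um pitch SLM:
theta = max_diffraction_half_angle_deg(532e-9, 4e-6)
print(f"max deflection half-angle: {theta:.1f} deg")  # ~3.8 deg
```

Because this angle (times the SLM size) fixes the system etendue, trading a larger exit pupil for the holographic mode necessarily shrinks the FoV, exactly as the paragraph describes.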

For an LBS-based system, additional modulation can be achieved by integrating a steering module, as demonstrated by Jang et al. 167 . The steering mirror can shift the focal point (viewpoint) within the eye pupil, effectively expanding the system etendue. When the steering is fast and the image content is updated simultaneously, correct 3D cues can be generated, as shown in Fig. 9e . However, there is a tradeoff between the number of viewpoints and the final image frame rate, because the total frames are divided equally among the viewpoints. Boosting the frame rate of MEMS-LBS systems by the number of views (e.g., 3 by 3) may be challenging.
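The viewpoint/frame-rate tradeoff is plain division, as a one-line sketch makes clear (the 60 Hz scanner rate follows the LBS discussion earlier; the 3x3 grid matches the example in the text):

```python
def per_view_frame_rate(scanner_hz, n_views):
    """Time-sequential viewpoint generation divides the total frame budget."""
    return scanner_hz / n_views

# A 60 Hz LBS engine serving a 3x3 viewpoint grid leaves ~6.7 Hz per view,
# which is why raising the scanner rate by the view count is so demanding:
print(round(per_view_frame_rate(60, 9), 1))  # 6.7
```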

Maxwellian-type systems offer several advantages. The system efficiency is usually very high because nearly all the light is delivered into the viewer's eye. The system FoV is determined by the f/# of the combiner, and a large FoV (~80° horizontal) can be achieved 143 . The VAC can be mitigated with an infinite-DoF image that removes the accommodation cue, or completely solved by generating a true 3D scene, as discussed above. Despite these advantages, one major weakness of Maxwellian-type systems is the tiny exit pupil, or eyebox: a small deviation of the eye pupil from the viewpoint results in the complete disappearance of the image. Therefore, expanding the eyebox is considered one of the most important challenges for Maxwellian-type systems.

Pupil duplication and steering

Methods to expand the eyebox can be generally categorized into pupil duplication 168 , 169 , 170 , 171 , 172 and pupil steering 9 , 13 , 167 , 173 . Pupil duplication simply generates multiple viewpoints to cover a large area, whereas pupil steering dynamically shifts the viewpoint position depending on the pupil location. Before reviewing detailed implementations of these two methods, it is worth discussing some of their general features. The multiple viewpoints in pupil duplication usually divide the total light intensity equally. In each time frame, however, it is preferable that only one viewpoint enter the user's eye pupil, to avoid ghost images. This requirement reduces the total light efficiency and also requires the viewpoint separation to be larger than the pupil diameter. At the same time, the separation should not be too large, to avoid gaps between viewpoints. Considering that the human pupil diameter changes in response to environment illuminance, the design of the viewpoint separation needs special attention. Pupil steering, on the other hand, produces only one viewpoint per time frame. It is therefore more light-efficient and free from ghost images. However, determining the viewpoint position requires knowledge of the eye pupil location, which demands a real-time eye-tracking module 9 . Another observation is that pupil steering naturally accommodates multiple viewpoints, so a pupil steering system can often be easily converted to a pupil duplication system by generating the available viewpoints simultaneously.
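The viewpoint-separation dilemma can be illustrated with a one-dimensional toy model. All the dimensions here are hypothetical design values, though human pupil diameter does vary with illuminance over roughly 2 to 8 mm:

```python
def viewpoints_in_pupil(pupil_center_mm, pupil_diameter_mm, separation_mm, n_views=3):
    """Count how many duplicated viewpoints fall inside the eye pupil
    (1D model: viewpoints on a line at multiples of the separation)."""
    viewpoints = [i * separation_mm for i in range(n_views)]
    return sum(abs(v - pupil_center_mm) < pupil_diameter_mm / 2 for v in viewpoints)

# With a 3 mm separation: a dilated 4 mm pupil can capture two viewpoints
# (ghost image), while a constricted 2 mm pupil can miss them all (gap):
print(viewpoints_in_pupil(1.5, 4.0, 3.0))  # 2
print(viewpoints_in_pupil(1.5, 2.0, 3.0))  # 0
```

No fixed separation satisfies both constraints across the full range of pupil diameters, which is why the text says this design choice needs special attention.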

To generate multiple viewpoints, one can modulate either the incident light or the combiner. Recall that a viewpoint is the image of the light source; duplicating or shifting the light source therefore achieves pupil duplication or steering accordingly, as illustrated in Fig. 10a. Several light-modulation schemes are depicted in Fig. 10b–e. An array of light sources can be generated with multiple laser diodes (Fig. 10b); turning on all of the sources, or only one, achieves pupil duplication or steering, respectively. A light source array can also be produced by projecting light on an array-type PPHOE 168 (Fig. 10c). Apart from directly adjusting the light sources, modulating the light along its path can also effectively steer/duplicate them. Using a mechanical steering mirror, the beam can be deflected 167 (Fig. 10d), which is equivalent to shifting the light source position. Other devices like gratings or beam splitters can also serve as ray deflectors/splitters 170,171 (Fig. 10e).

Fig. 10

a Schematic of duplicating (or shifting) the viewpoint by modulation of the incident light. Light modulation by b multiple laser diodes, c HOE lens array, d steering mirror and e grating or beam splitters. f Pupil duplication with multiplexed PPHOE. g Pupil steering with LCHOE. Reproduced from c ref. 168 under the Creative Commons Attribution 4.0 License, e ref. 169 with permission from OSA Publishing, f ref. 171 with permission from OSA Publishing and g ref. 173 with permission from OSA Publishing

Nonetheless, one problem of the light-source duplication/shifting methods for pupil duplication/steering is that the aberrations in peripheral viewpoints are often serious 168,173. The HOE combiner is usually recorded at one incident angle; for other incident angles with large deviations, considerable aberrations will occur, especially in off-axis configurations. To solve this problem, the modulation can instead be applied to the combiner. While mechanical shifting of the combiner 9 can achieve continuous pupil steering, its integration into an AR display with a small form factor remains a challenge. Alternatively, the versatile functions of HOEs offer possible solutions for combiner modulation. Kim and Park 169 demonstrated a pupil-duplication system with a multiplexed PPHOE (Fig. 10f). Wavefronts of several viewpoints can be recorded into one PPHOE sample. Three viewpoints with a separation of 3 mm were achieved. However, a slight degree of ghost image and gap can be observed in the viewpoint transition. For a PPHOE to achieve pupil steering, the multiplexed PPHOE needs to record different focal points with different incident angles. If each hologram has no angular crosstalk, then with an additional device to change the light incident angle, the viewpoint can be steered. Alternatively, Xiong et al. 173 demonstrated a pupil-steering system with LCHOEs in a simpler configuration (Fig. 10g). The polarization-sensitive nature of the LCHOE, together with a polarization converter (PC), enables control over which LCHOE functions. When the PC is off, the incident RCP light is focused by the right-handed LCHOE. When the PC is turned on, the RCP light is first converted to LCP light and passes through the right-handed LCHOE; it is then focused by the left-handed LCHOE into another viewpoint. Adding more viewpoints requires stacking more pairs of PCs and LCHOEs, which can be achieved in a compact manner with thin glass substrates. In addition, realizing pupil duplication only requires stacking multiple low-efficiency LCHOEs. For both PPHOEs and LCHOEs, because the hologram for each viewpoint is recorded independently, the aberrations can be eliminated.

Regarding the system performance, in theory the FoV is not limited and can reach a large value, such as 80° in the horizontal direction 143. The definition of the eyebox differs from that of traditional imaging systems. For a single viewpoint, it has the same size as the eye pupil diameter; thanks to the viewpoint steering/duplication capability, however, the total system eyebox can be expanded accordingly. The combiner efficiency for pupil-steering systems can reach 47,000 nit/lm for a FoV of 80° by 80° and a pupil diameter of 4 mm (Eq. S2). At such a high brightness level, eye safety could be a concern 174. For a pupil-duplication system, the combiner efficiency is divided by the number of viewpoints. With a 4-by-4 viewpoint array, it can still reach 3000 nit/lm. Despite the potential gain of pupil duplication/steering, the situation becomes much more complicated when the rotation of the eyeball is considered 175. A perfect pupil-steering system requires 5D steering, which poses a challenge for practical implementation.
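The efficiency penalty of duplication can be checked directly: splitting the light equally among N viewpoints divides the steering-system efficiency by N. A one-line sketch using the figures quoted above (47,000 nit/lm and a 4-by-4 array):

```python
def duplication_efficiency(steering_nit_per_lm: float,
                           n_viewpoints: int) -> float:
    """Combiner efficiency after splitting light equally among viewpoints."""
    return steering_nit_per_lm / n_viewpoints

# 47,000 nit/lm steering efficiency shared by a 4-by-4 viewpoint array:
print(duplication_efficiency(47_000, 4 * 4))  # 2937.5, i.e. ~3000 nit/lm
```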

Pin-light systems

Recently, another type of display closely related to the Maxwellian view, called the pin-light display 148,176, has been proposed. Its general working principle is illustrated in Fig. 11a. Each pin-light source forms a Maxwellian view with a large DoF. When the eye pupil is no longer placed near the source point, as it is in the Maxwellian view, each image source can only form an elemental view with a small FoV on the retina. However, if the image source array is properly arranged, the elemental views can be stitched together to form a large FoV. Depending on the specific optical architecture, pin-light displays can take different forms of implementation. In the initial feasibility demonstration, Maimone et al. 176 used a side-lit waveguide plate as the point light source (Fig. 11b). The light inside the waveguide plate is extracted by etched divots, forming a pin-light source array. A transmissive SLM (LCD) is placed behind the waveguide plate to modulate the light intensity and form the image. The display has an impressive FoV of 110° thanks to the large scattering-angle range. However, placing the LCD directly before the eye brings issues of insufficient resolution density and diffraction of the background light.

Fig. 11

a Schematic drawing of the working principle of pin-light display. b Pin-light display utilizing a pin-light source and a transmissive SLM. c An example of pin-mirror display with birdbath optics. d SWD system with LBS image source and off-axis lens array. Reprinted from b ref. 176 under the Creative Commons Attribution 4.0 License and d ref. 180 with permission from OSA Publishing

To avoid these issues, architectures using pin-mirrors 177,178,179 have been proposed. In these systems, the final combiner is an array of tiny mirrors 178,179 or gratings 177, in contrast to the large-area combiners of their counterparts. An exemplary system with a birdbath design is depicted in Fig. 11c. In this case, the pin-mirrors replace the original beam splitter in the birdbath and can thus shrink the system volume, while at the same time providing large-DoF pin-light images. Nonetheless, such a system may still face the etendue conservation issue. Meanwhile, the pin-mirrors cannot be too small, in order to prevent degradation of the resolution density due to diffraction. Their influence on the see-through background should therefore also be considered in the system design.

To overcome the etendue conservation issue and improve the see-through quality, Xiong et al. 180 proposed another type of pin-light system that exploits the etendue expansion property of a waveguide, also referred to as a scanning waveguide display (SWD). As illustrated in Fig. 11d, the system uses an LBS as the image source. The collimated scanned laser rays are trapped in the waveguide and encounter an array of off-axis lenses. Upon each encounter, the lens out-couples the laser rays and forms a pin-light source. SWD has the merits of good see-through quality and large etendue. A large FoV of 100° was demonstrated with the help of an ultra-low-f/# lens array based on an LCHOE. However, issues like insufficient image resolution density and image non-uniformity remain to be overcome. Further improving the system may require optimization of the Gaussian beam profile and an additional EPE module 180.

Overall, pin-light systems inherit the large DoF of the Maxwellian view. With an adequate number of pin-light sources, the FoV and eyebox can be expanded accordingly. Nonetheless, across the different forms of implementation, a common issue of pin-light systems is image uniformity. The overlapped region of elemental views has a higher light intensity than the non-overlapped region, which becomes even more complicated considering the dynamic change of the pupil size. In theory, the displayed image can be pre-processed to compensate for the optical non-uniformity, but that requires knowledge of the precise pupil location (and possibly size) and therefore an accurate eye-tracking module 176. Regarding the system performance, pin-mirror systems modified from other free-space systems generally share a similar FoV and eyebox with the original systems. The combiner efficiency may be lower due to the small size of the pin-mirrors. SWD, on the other hand, shares the large FoV and DoF of the Maxwellian view, and the large eyebox of waveguide combiners. Its combiner efficiency may also be lower due to the EPE process.
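The pre-processing compensation mentioned above can be sketched in one dimension: count how many elemental views cover each retinal position, then pre-scale the image by the reciprocal so overlapped regions do not appear brighter. This is a hypothetical illustration (the view positions and widths are arbitrary), not the actual algorithm of ref. 176.

```python
def compensation_map(view_starts, view_width, n_pixels):
    """1D sketch: count how many elemental views cover each retinal pixel,
    then return the reciprocal gain that equalizes the perceived intensity
    (overlapped regions would otherwise look brighter)."""
    counts = [0] * n_pixels
    for s in view_starts:
        for p in range(s, min(s + view_width, n_pixels)):
            counts[p] += 1
    return [0 if c == 0 else 1 / c for c in counts]

# Three overlapping elemental views on a 10-pixel retina strip:
print(compensation_map([0, 3, 6], 4, 10))
```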

Waveguide combiner

Besides free-space combiners, another common architecture in AR displays is the waveguide combiner. The term ‘waveguide’ indicates that the light is trapped in a substrate by the TIR process. One distinctive feature of a waveguide combiner is the EPE process, which effectively enlarges the system etendue. In the EPE process, a portion of the trapped light is coupled out of the waveguide at each TIR, and the effective eyebox is thereby enlarged. According to the features of the couplers, we divide waveguide combiners into two types, diffractive and achromatic, as described in the following.

Diffractive waveguides

As the name implies, diffractive waveguides use diffractive elements as couplers. The in-coupler is usually a diffraction grating, and the out-coupler in most cases is also a grating with the same period as the in-coupler, though it can also be an off-axis lens with a small curvature to generate an image with finite depth. Three major diffractive couplers have been developed: SRGs, photopolymer gratings (PPGs), and liquid crystal gratings (grating-type LCHOEs, also known as polarization volume gratings (PVGs)). Some general protocols for coupler design are that the in-coupler should have a relatively high efficiency and the out-coupler should have a uniform light output. A uniform light output usually requires a low-efficiency coupler with extra degrees of freedom for local modulation of the coupling efficiency. Both the in-coupler and out-coupler should have an adequate angular bandwidth to accommodate a reasonable FoV. In addition, the out-coupler should be optimized to avoid undesired diffractions, including the outward diffraction of TIR light and the diffraction of environmental light into the user’s eyes, which are referred to as light leakage and rainbow, respectively. Suppression of these unwanted diffractions should be considered in the waveguide optimization process, along with performance parameters like efficiency and uniformity.
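The "uniform output from a low-efficiency coupler" requirement can be illustrated with an idealized, lossless 1D model: to extract equal power at each of N out-coupling events, the local efficiency must grow as η_i = 1/(N − i + 1), because less light remains in the guide at every bounce. The sketch below assumes this simple model.

```python
def gradient_efficiencies(n_bounces: int) -> list:
    """Local out-coupling efficiencies that yield a uniform output in an
    idealized lossless 1D EPE: eta_i = 1/(N - i + 1), 1-indexed."""
    return [1 / (n_bounces - i + 1) for i in range(1, n_bounces + 1)]

# Verify: each of the 5 out-coupled beams carries the same power.
etas = gradient_efficiencies(5)
remaining, outputs = 1.0, []
for eta in etas:
    outputs.append(remaining * eta)   # power extracted at this bounce
    remaining *= 1 - eta              # power left in the waveguide
print([round(p, 3) for p in outputs])  # five equal outputs of 0.2
```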

The basic working principles of diffractive waveguide-based AR systems are illustrated in Fig. 12. For SRG-based waveguides 6,8 (Fig. 12a), the in-coupler can be transmissive or reflective 181,182. The grating geometry can be optimized for coupling efficiency with a large degree of freedom 183. For the out-coupler, a reflective SRG with a large slant angle to suppress the transmission orders is preferred 184. In addition, a uniform light output usually requires a gradient efficiency distribution to compensate for the decreasing light intensity during the out-coupling process. This can be achieved by varying local grating parameters like height and duty cycle 6. For PPG-based waveguides 185 (Fig. 12b), the small angular bandwidth of a high-efficiency transmissive PPG prohibits its use as an in-coupler; therefore, both the in-coupler and out-coupler are usually reflective. The gradient efficiency can be achieved by space-variant exposure to control the local index modulation 186 or by local variation of the Bragg slant angle through freeform exposure 19. Due to the relatively small angular bandwidth of PPGs, achieving a decent FoV usually requires stacking two 187 or three 188 PPGs together for a single color. PVG-based waveguides 189 (Fig. 12c) also prefer reflective PVGs as in-couplers, because transmissive PVGs are much more difficult to fabricate due to the LC alignment issue; in addition, the angular bandwidth of transmissive PVGs in the Bragg regime is not large enough to support a decent FoV 29. For the out-coupler, the angular bandwidth of a single reflective PVG can usually support a reasonable FoV. To obtain a uniform light output, a polarization management layer 190 consisting of an LC layer with spatially variant orientations can be utilized. It offers an additional degree of freedom to control the polarization state of the TIR light, so the diffraction efficiency can be locally controlled thanks to the strong polarization sensitivity of the PVG.

Fig. 12

Schematics of waveguide combiners based on a SRGs, b PPGs and c PVGs. Reprinted from a ref. 85 with permission from OSA Publishing, b ref. 185 with permission from John Wiley and Sons and c ref. 189 with permission from OSA Publishing

The above discussion describes the basic working principle of 1D EPE. For a 1D EPE to produce a large eyebox, however, the exit pupil of the original image in the unexpanded direction must be large, which poses design challenges for the light engine. Therefore, a 2D EPE is favored for practical applications. To extend the EPE to two dimensions, two consecutive 1D EPEs can be used 191, as depicted in Fig. 13a. The first 1D EPE occurs at the turning grating, where the light is duplicated in the y direction and then redirected into the x direction. The light rays then encounter the out-coupler and are expanded in the x direction. To better understand the 2D EPE process, the k-vector diagram (Fig. 13b) can be used. For light propagating in air with wavenumber k0, its possible k-values in the x and y directions (kx and ky) fall within the circle of radius k0. When the light is trapped by TIR, kx and ky lie outside the circle of radius k0 but inside the circle of radius nk0, where n is the refractive index of the substrate. kx and ky stay unchanged during TIR and change only at each diffraction. The central red box in Fig. 13b indicates the possible k-values within the system FoV. At the in-coupler, the grating k-vector is added, shifting the k-values into the TIR region. The turning grating then applies another k-vector and shifts the k-values toward the x-axis. Finally, the out-coupler shifts the k-values back into the free-propagation region in air. One observation is that the size of the red box is mostly limited by the width of the TIR band. To accommodate a larger FoV, the outer boundary of the TIR band needs to be expanded, which amounts to increasing the waveguide refractive index. Another important fact is that when kx and ky are near the outer boundary, the uniformity of the output light degrades, because the light propagation angle in the waveguide is then close to 90°: the spatial distance between two consecutive TIRs becomes so large that the out-coupled beams are separated to an unacceptable degree. This further shrinks the range of k-values usable in practice.
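The k-diagram argument yields a simple closed-form bound on the FoV. Assuming the full TIR band between k0 and n·k0 is usable and the FoV is symmetric about normal incidence, Δ(sin θ) ≤ n − 1, so FoV ≤ 2·arcsin((n − 1)/2). The sketch below applies this idealized bound; real systems lose some margin near the critical angle and at large propagation angles, as discussed above.

```python
import math

def max_fov_deg(n: float) -> float:
    """Idealized upper bound on the symmetric 1D FoV of a diffractive
    waveguide: the TIR band k0 < k_parallel < n*k0 has width (n-1)*k0,
    so sin(theta_max) - sin(theta_min) <= n - 1 and, for a symmetric
    FoV, FoV = 2*arcsin((n - 1)/2)."""
    return 2 * math.degrees(math.asin(min((n - 1) / 2, 1.0)))

# Higher-index substrates admit a wider FoV:
for n in (1.5, 1.8, 2.0):
    print(n, round(max_fov_deg(n), 1))
```

Note how the n = 1.8–2.0 estimates bracket the ~50° achieved by current diffractive waveguide combiners once practical margins are subtracted.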

Fig. 13

a Schematic of 2D EPE based on two consecutive 1D EPEs. Gray/black arrows indicate light in air/TIR. Black dots denote TIRs. b k-diagram of the two-1D-EPE scheme. c Schematic of 2D EPE with a 2D hexagonal grating. d k-diagram of the 2D-grating scheme

Aside from two consecutive 1D EPEs, 2D EPE can also be implemented directly with a 2D grating 192. An example using a hexagonal grating is depicted in Fig. 13c. The hexagonal grating can provide k-vectors in six directions. In the k-diagram (Fig. 13d), after in-coupling, the k-values are distributed into six regions due to multiple diffractions. Out-coupling occurs simultaneously with pupil expansion. Besides a more concise out-coupler configuration, the 2D EPE scheme offers more degrees of design freedom than two 1D EPEs, because the local grating parameters can be adjusted in a 2D manner. This higher design freedom has the potential to achieve better output-light uniformity, but at the cost of a higher computational demand for optimization. Furthermore, the unslanted grating geometry usually leads to large light leakage and possibly low efficiency. Adding slant to the geometry helps alleviate the issue, but the associated fabrication may be more challenging.

Finally, we discuss the generation of full-color images. One important point to clarify is that although diffraction gratings are used here, the final image generally has no color dispersion, even with a broadband light source like an LED. This is easily understood in the 1D EPE scheme: the in-coupler and out-coupler have opposite k-vectors, which cancel each other’s color dispersion. In the 2D EPE schemes, the grating k-vectors always form a closed loop from in-coupled to out-coupled light, so the color dispersion likewise vanishes. The real issue with using a single waveguide for full-color images lies in the FoV and light uniformity. The spread of propagation angles for different colors results in different out-coupling conditions for each color. To be more specific, if the red and blue channels use the same in-coupler, the propagation angle for the red light is larger than that of the blue light. The red light in the peripheral FoV is therefore more prone to the large-angle non-uniformity issue mentioned above. To acquire a decent FoV and light uniformity, two or three layers of waveguides with different grating pitches are usually adopted.
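The dispersion-cancellation argument can be verified numerically: tracing the in-plane k-vector through a 1D EPE, the in-coupler adds the grating vector K and the out-coupler subtracts the same K, so the exit angle equals the entrance angle at every wavelength. The grating pitch below is an arbitrary illustrative value.

```python
import math

def exit_angle_deg(theta_in_deg: float, wavelength_nm: float,
                   pitch_nm: float = 380.0) -> float:
    """Trace the in-plane k-vector through a 1D-EPE waveguide: the
    in-coupler adds K = 2*pi/pitch and the out-coupler subtracts it,
    so the two grating vectors cancel for every wavelength."""
    k0 = 2 * math.pi / wavelength_nm
    K = 2 * math.pi / pitch_nm
    kx = k0 * math.sin(math.radians(theta_in_deg))
    kx_guided = kx + K        # after the in-coupler
    kx_out = kx_guided - K    # after the out-coupler: K cancels exactly
    return math.degrees(math.asin(kx_out / k0))

# Blue, green and red rays all exit at the angle they entered:
for wl in (450, 532, 635):
    print(wl, round(exit_angle_deg(10.0, wl), 6))
```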

Regarding the system performance, the eyebox is generally large enough (~10 mm) to accommodate different users’ IPDs and alignment shifts during operation. A parameter of significant concern for a waveguide combiner is its FoV. From the k-vector analysis, we can conclude that the theoretical upper limit is determined by the waveguide refractive index. However, the light/color uniformity also influences the effective FoV, beyond which the degradation of image quality becomes unacceptable. Current diffractive waveguide combiners generally achieve a FoV of about 50°. To further increase the FoV, a straightforward method is to use a waveguide with a higher refractive index. Another is to tile the FoV through direct stacking of multiple waveguides or by using polarization-sensitive couplers 79,193. As for the optical efficiency, a typical value for a diffractive waveguide combiner is around 50–200 nit/lm 6,189. In addition, waveguide combiners adopting grating out-couplers generate an image with a fixed depth at infinity, which leads to the VAC issue. To tackle VAC in waveguide architectures, the most practical way is to generate multiple depths and use a varifocal or multifocal driving scheme, similar to those mentioned for VR systems. But adding more depths usually means stacking multiple layers of waveguides together 194. Considering the additional waveguide layers for the RGB colors, the final waveguide thickness would undoubtedly increase.

Other parameters specific to waveguides include light leakage, see-through ghosts, and rainbow. Light leakage refers to out-coupled light that travels outwards into the environment, as depicted in Fig. 14a. Aside from decreased efficiency, the leakage also brings the drawbacks of an unnatural “bright-eye” appearance of the user and privacy concerns. Optimization of the grating structure, such as the geometry of the SRG, may reduce the leakage. A see-through ghost is formed by consecutive in-coupling and out-couplings caused by the out-coupler grating, as sketched in Fig. 14b. Through this process, a real object with finite depth may produce a ghost image shifted in both FoV and depth. Generally, an out-coupler with higher efficiency suffers more from see-through ghosts. Rainbow is caused by the diffraction of environmental light into the user’s eye, as sketched in Fig. 14c. Color dispersion occurs in this case because there is no cancellation of the grating k-vector. Using the k-diagram, we can obtain deeper insight into the formation of the rainbow. Here, we take the EPE structure in Fig. 13a as an example. As depicted in Fig. 14d, after diffraction by the turning grating and the out-coupler grating, the k-values are distributed in two circles shifted from the origin by the grating k-vectors. Some diffracted light can enter the see-through FoV and form a rainbow. To reduce the rainbow, a straightforward way is to use a higher-index substrate. With a higher refractive index, the outer boundary of the k-diagram is expanded, which can accommodate larger grating k-vectors. The enlarged k-vectors “push” these two circles outwards, reducing their overlap with the see-through FoV. Alternatively, an optimized grating structure can also help reduce the rainbow effect by suppressing the unwanted diffraction.
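Rainbow formation can also be checked numerically: ambient light undergoes only a single diffraction by the out-coupler, so there is no k-vector cancellation and the exit angle becomes wavelength-dependent. The pitch and incidence angle below are illustrative assumptions, not values from the text.

```python
import math

def rainbow_angle_deg(theta_in_deg: float, wavelength_nm: float,
                      pitch_nm: float = 380.0):
    """Single diffraction of ambient light by the out-coupler grating.
    With no second grating to cancel K, the exit direction depends on
    the wavelength, producing the rainbow artifact."""
    k0 = 2 * math.pi / wavelength_nm
    K = 2 * math.pi / pitch_nm
    kx = k0 * math.sin(math.radians(theta_in_deg)) + K
    if abs(kx) > k0:
        return None  # diffracted into an evanescent/trapped order
    return math.degrees(math.asin(kx / k0))

# The same ambient ray exits at a different angle for each color:
for wl in (450, 532, 635):
    print(wl, round(rainbow_angle_deg(-50.0, wl), 1))
```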

Fig. 14

Sketches of the formation of a light leakage, b see-through ghost and c rainbow. d Analysis of rainbow formation with k-diagram

Achromatic waveguides

Achromatic waveguide combiners use achromatic elements as couplers. They have the advantage of realizing full-color images with a single waveguide. A typical achromatic element is a mirror. A waveguide with partial mirrors as the out-coupler is often referred to as a geometric waveguide 6,195, as depicted in Fig. 15a. The in-coupler in this case is usually a prism, to avoid the color dispersion a diffractive element would otherwise introduce. The mirrors couple out the TIR light consecutively to produce a large eyebox, similar to a diffractive waveguide. Thanks to the excellent optical properties of mirrors, a geometric waveguide usually exhibits image quality, in terms of MTF and color uniformity, superior to its diffractive counterparts. Still, the spatially discontinuous arrangement of the mirrors results in gaps in the eyebox, which may be alleviated by using a dual-layer structure 196. Wang et al. designed a geometric waveguide display with five partial mirrors (Fig. 15b). It exhibits a remarkable FoV of 50° by 30° (Fig. 15c) and an exit pupil of 4 mm with a 1D EPE. To achieve 2D EPE, architectures similar to Fig. 13a can be used by integrating a turning mirror array as the first 1D EPE module 197. Unfortunately, the k-vector diagrams in Fig. 13b, d cannot be used here, because the k-values in the x-y plane are no longer conserved in the in-coupling and out-coupling processes. But some general conclusions remain valid: a higher refractive index leads to a larger FoV, and a gradient out-coupling efficiency improves the light uniformity.

Fig. 15

a Schematic of the system configuration. b Geometric waveguide with five partial mirrors. c Image photos demonstrating system FoV. Adapted from b , c ref. 195 with permission from OSA Publishing

The fabrication process of a geometric waveguide involves coating mirrors on cut-apart pieces and integrating them back together, which may result in a high cost, especially for the 2D EPE architecture. Another way to implement an achromatic coupler is to use a multiplexed PPHOE 198,199 to mimic the behavior of a tilted mirror (Fig. 16a). To understand the working principle, we can use the diagram in Fig. 16b. The law of reflection states that the angle of reflection equals the angle of incidence. Translated into k-vector language, this means a mirror can apply a k-vector of any length along its surface-normal direction; the k-vector length of the reflected light is always equal to that of the incident light. This makes the k-vector triangle isosceles, and a simple geometric deduction shows that this condition leads to the law of reflection. The behavior of a general grating, however, is very different. For simplicity we only consider the main diffraction order. The grating can only apply a k-vector with fixed kx, due to the basic diffraction law. For light with a different incident angle, it needs to apply a different kz to produce diffracted light with a k-vector length equal to that of the incident light. For a grating with a broad angular bandwidth like an SRG, the range of kz is wide, forming a long vertical line in Fig. 16b. For a PPG with a narrow angular bandwidth, the line is short and resembles a dot. If multiple such dots are distributed along the oblique line corresponding to a mirror, the multiplexed PPGs can imitate the behavior of a tilted mirror. Such a PPHOE is sometimes referred to as a skew-mirror 198. In theory, to better imitate the mirror, a large number of multiplexed PPGs is preferred, each with a small index modulation δn, but this poses a bigger challenge in device fabrication. Recently, Utsugi et al. demonstrated an impressive skew-mirror waveguide based on 54 multiplexed PPGs (Fig. 16c, d). The display exhibits an effective FoV of 35° by 36°. In the peripheral FoV, some non-uniformity still exists (Fig. 16e) due to the out-coupling gap, an inherent feature of flat-type out-couplers.
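The mirror-versus-grating distinction in k-vector language can be captured in a few lines: a mirror applies whatever normal-direction k-vector preserves |k|, while a grating adds a fixed in-plane Kx, so only the out-of-plane kz can adapt. This is a 2D sketch with arbitrary unit-scale k-vectors, for illustration only.

```python
import math

def mirror_reflect(k, normal):
    """A mirror applies a k-vector along its surface normal with whatever
    length keeps |k| unchanged: k' = k - 2*(k.n)*n."""
    dot = sum(a * b for a, b in zip(k, normal))
    return tuple(a - 2 * dot * b for a, b in zip(k, normal))

def grating_diffract(k, K_x):
    """A grating can only add a fixed in-plane K_x; the out-of-plane kz
    is then dictated by |k'| = |k| (main diffraction order only)."""
    kx = k[0] + K_x
    k_len = math.hypot(*k)
    kz = -math.sqrt(k_len**2 - kx**2)  # choose the reflected branch
    return (kx, kz)

k_in = (0.3, -0.95)  # arbitrary incident in-plane k, |k| close to 1
print(mirror_reflect(k_in, (0.0, 1.0)))   # |k| preserved for any angle
print(grating_diffract(k_in, 0.5))        # kz adapts to conserve |k|
```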

Fig. 16

a System configuration. b Diagram demonstrating how multiplexed PPGs resemble the behavior of a mirror. Photos showing c the system and d image. e Picture demonstrating effective system FoV. Adapted from c – e ref. 199 with permission from ITE

Finally, it is worth mentioning that metasurfaces are also promising for delivering achromatic gratings 200,201 for waveguide couplers, owing to their versatile wavefront-shaping capability. The mechanism of achromatic gratings is similar to that of the achromatic lenses discussed previously. However, the development of achromatic metagratings is still in its infancy. Much effort is needed to improve the in-coupling optical efficiency, control the higher diffraction orders to eliminate ghost images, and enable large-size designs for EPE.

Generally, achromatic waveguide combiners exhibit a FoV and eyebox comparable to diffractive combiners, but with a higher efficiency. For a partial-mirror combiner, the combiner efficiency is around 650 nit/lm 197 (2D EPE). For a skew-mirror combiner, although the efficiency of the multiplexed PPHOE is relatively low (~1.5%) 199, the final combiner efficiency of the 1D EPE system is still high (>3000 nit/lm) due to multiple out-couplings.
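The "high total efficiency from a low-efficiency coupler" point follows from a geometric series: after N out-coupling interactions, the extracted fraction is 1 − (1 − η)^N. The per-interaction efficiency of ~1.5% is quoted above; the interaction counts in the sketch are illustrative assumptions.

```python
def total_extraction(eta: float, n_interactions: int) -> float:
    """Fraction of guided light extracted after N out-coupling events,
    each removing a fraction eta of the light still in the guide."""
    return 1 - (1 - eta) ** n_interactions

# Even at ~1.5% per interaction, many TIR bounces extract a large share:
for n in (10, 50, 100):
    print(n, round(total_extraction(0.015, n), 3))
```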

Table 2 summarizes the performance of different AR combiners. Combining the luminous efficacy in Table 1 with the combiner efficiency in Table 2, we can obtain a comprehensive estimate of the total luminance efficiency (nit/W) for different types of systems. Generally, Maxwellian-type combiners with pupil steering have the highest luminance efficiency when partnered with laser-based light engines like laser-backlit LCoS/DMD or MEMS-LBS. Geometric optical combiners have well-balanced image performance, but further shrinking the system size remains a challenge. Diffractive waveguides have a relatively low combiner efficiency, which can be remedied by an efficient light engine like MEMS-LBS; further development of the coupler and EPE scheme would also improve the system efficiency and FoV. Achromatic waveguides have a decent combiner efficiency, and the single-layer design also enables a smaller form factor. With advances in the fabrication process, they may become a strong contender to the presently widely used diffractive waveguides.

Conclusions and perspectives

VR and AR carry high expectations to revolutionize the way we interact with the digital world. Accompanying those expectations are the engineering challenges of squeezing a high-performance display system into a tightly packed module for daily wear. Although etendue conservation constitutes a great obstacle on this path, remarkable progress with innovative optics and photonics continues to take place. Ultra-thin optical elements like PPHOEs and LCHOEs provide alternative solutions to traditional optics, and their unique features of multiplexing capability and polarization dependency further expand the possibilities for novel wavefront modulation. At the same time, nanoscale-engineered metasurfaces/SRGs provide large design freedom to achieve novel functions beyond conventional geometric optical devices. Newly emerged micro-LEDs open an opportunity for compact microdisplays with high peak brightness and good stability. Further advances in device engineering and manufacturing processes are expected to boost the performance of metasurfaces/SRGs and micro-LEDs for AR and VR applications.

Data availability

All data needed to evaluate the conclusions in the paper are present in the paper. Additional data related to this paper may be requested from the authors.

Cakmakci, O. & Rolland, J. Head-worn displays: a review. J. Disp. Technol. 2 , 199–216 (2006).

Zhan, T. et al. Augmented reality and virtual reality displays: perspectives and challenges. iScience 23 , 101397 (2020).

Rendon, A. A. et al. The effect of virtual reality gaming on dynamic balance in older adults. Age Ageing 41 , 549–552 (2012).

Choi, S., Jung, K. & Noh, S. D. Virtual reality applications in manufacturing industries: past research, present findings, and future directions. Concurrent Eng. 23 , 40–63 (2015).

Li, X. et al. A critical review of virtual and augmented reality (VR/AR) applications in construction safety. Autom. Constr. 86 , 150–162 (2018).

Kress, B. C. Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets (Bellingham: SPIE Press, 2020).

Cholewiak, S. A. et al. A perceptual eyebox for near-eye displays. Opt. Express 28 , 38008–38028 (2020).



Acknowledgements

The authors are indebted to Goertek Electronics for financial support and to Guanjun Tan for helpful discussions.

Author information

Authors and Affiliations

College of Optics and Photonics, University of Central Florida, Orlando, FL, 32816, USA

Jianghao Xiong, En-Lin Hsiang, Ziqian He, Tao Zhan & Shin-Tson Wu



Contributions

J.X. conceived the idea and initiated the project. J.X. mainly wrote the manuscript and produced the figures. E.-L.H., Z.H., and T.Z. contributed to parts of the manuscript. S.W. supervised the project and edited the manuscript.

Corresponding author

Correspondence to Shin-Tson Wu .

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


About this article

Cite this article

Xiong, J., Hsiang, E.-L., He, Z. et al. Augmented reality and virtual reality displays: emerging technologies and future perspectives. Light Sci. Appl. 10, 216 (2021).


Received: 06 June 2021

Revised: 26 September 2021

Accepted: 04 October 2021

Published: 25 October 2021





The Future Of Virtual Reality (VR)


You might think you’ve experienced VR, and you might have been pretty impressed. Particularly if you’re a gamer, there are some great experiences to be had out there (or rather, in there) today.

But over the next few years, in VR, as in all fields of technology, we’re going to see things that make what is cutting-edge today look like Space Invaders. And although the games will be amazing, the effects of this transformation will be far broader, touching on our work, education, and social lives.

Today’s most popular VR applications take control of the user’s senses (sight and hearing, in particular) to create an immersive experience that places the user in a fully virtual environment that feels convincingly real.

Climb up something high and look down, and you’re likely to get a sense of vertigo. If you see an object moving quickly towards your head, you’ll feel an urge to duck out of the way.

Very soon, VR creators will extend this sensory hijacking to our other faculties – for example, touch and smell – to deepen that sense of immersion. At the same time, the devices we use to visit these virtual worlds will become cheaper and lighter, removing the friction that can currently be a barrier.

I believe extended reality (XR) – a term that covers virtual reality (VR), augmented reality (AR), and mixed reality (MR) – will be one of the most transformative tech trends of the next five years. It will be enabled and augmented by other tech trends, including super-fast networking, that will let us experience VR as a cloud service, just as we currently consume music and movies. And artificial intelligence (AI) will provide us with more personalized virtual worlds to explore, even giving us realistic virtual characters to share our experiences with.


VR in education and training

VR is already making great inroads into education, with a large number of startups and established companies offering packaged experiences and services aimed at schools. Engage’s platform is used by the likes of Facebook, HTC, and the European Commission to enable remote learning. And one study published in 2019 found that medical students trained using VR were able to carry out certain procedures quicker and more accurately than peers trained using traditional methods.

These new methods of teaching and learning will become increasingly effective as new technologies emerge. One that is likely to make waves is the Teslasuit, which uses a full-body suit to offer haptic feedback, enhancing the immersion through the sense of touch. It also offers an array of biometric sensors enabling the user's heartbeat, perspiration, and other stress indicators to be measured. The suit is already used in NASA astronaut training, but its potential uses are unlimited.

For training, it could be used to safely simulate any number of hazardous or stressful conditions and monitor the way we respond to them. For example, Walmart has used it to train retail staff for Black Friday situations, instructing them on how best to operate in busy shop environments with long queues of customers.

As well as training us for dangerous situations, it will also drastically reduce the financial risks involved with letting students and inexperienced recruits loose with expensive tools and machinery in any industry.

VR in industry and work

The pandemic has changed many things about the way we work, including the wholesale shift to home working for large numbers of employees. This brings challenges, including the need to retain an environment that fosters cooperative activity and the building of company culture. Solutions involving VR are quickly emerging to help tackle these.

Spatial, which creates a tool best described as a VR version of Zoom, reported a 1,000% increase in the use of its platform since March 2020. In total, the value of the market for VR business equipment is forecast to grow from $829 million in 2018 to $4.26 billion by 2023, according to research by ARtillery Intelligence.

Communication giant Ericsson (which has provided Oculus VR headsets to employees working from home during the pandemic for VR meetings) has talked about creating the "Internet of Senses." This involves projects that simulate touch, taste, smell, and sensations such as hot or cold. It predicts that by 2030, we will be able to enter digital environments that appear completely real to all five of our senses simultaneously.

This will lead to the advent of what it calls the “dematerialized office” – where the office effectively vanishes from our lives as we’re able to create entirely interactive and collaborative working environments wherever we are in the world, simply by slipping on a headset and whatever other devices are needed for the task at hand.

VR in socializing

There are already a number of VR-based social platforms that allow friends or strangers to meet up and chat or play in virtual environments, such as VR Chat, AltspaceVR, and Rec Room. As with VR in other fields, the growing level of immersion made possible by new technological developments will make them more useful and more attractive to mainstream audiences throughout the coming decade.

This year Facebook, which has long had a stake in VR due to its acquisition of headset manufacturer Oculus, unveiled its Horizon platform. Currently in beta, it allows people to build and share collaborative online worlds where they can hang out, play games, or work together on shared projects.

While we will always make time for meeting up with friends and loved ones in the real world, as our working and school lives become increasingly remote, it's likely that more of our social interaction will move into the online realm, too. Just as an increasingly virtualized world means we are no longer barred from careers or educational opportunities by where we live, we will have more meaningful ways to connect with other humans as technology improves in this area.

And of course – VR in games and entertainment

The "killer app" for VR is gaming, and the technology is developing at its current pace because of the large market of people willing to spend money on the most impressive and immersive entertainment experiences.

Sandbox VR operates real-world VR centers where equipment that simply wouldn't be practical or affordable to use in our homes offers some of the most immersive experiences yet created.

Using full-body haptic feedback suits, they offer five games – one licensed from Star Trek – that let groups cooperate or battle it out in deep space, aboard ghostly pirate ships, or through a zombie infestation.

CEO Steve Zhao describes the experience his company has created as a "minimal viable Matrix or holodeck." In a recent conversation that you can see here, he told me, "the outcome is that you believe in the world – it's very real, and in order to progress, you and your friends have to communicate and work together. One of the best ways to describe it is that you are the stars inside your own movie – that's basically what we created."

It makes sense in many ways that there could be two markets for consuming VR entertainment – at least in its early days. While the most immersive and impressive tech is big, expensive, and requires technical skill to operate, it's more viable to offer it at dedicated venues rather than as an in-home experience. As with movies, the stay-at-home offerings will provide something perhaps a little less spectacular but more convenient – at least until we get to the point where we can have full-size Star Trek holodecks in our own homes!

Bernard Marr


  • Front Psychol

The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature

Pietro Cipresso

1 Applied Technology for Neuro-Psychology Lab, Istituto Auxologico Italiano, Milan, Italy

2 Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy

Irene Alice Chicchi Giglioli

3 Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, Valencia, Spain

Mariano Alcañiz Raya

Giuseppe Riva

The recent appearance of low-cost virtual reality (VR) technologies – like the Oculus Rift, the HTC Vive, and the Sony PlayStation VR – and mixed reality interfaces (MRITF) – like the HoloLens – is attracting the attention of users and researchers, suggesting it may be the next major stepping stone in technological innovation. However, the history of VR technology is longer than it may seem: the concept of VR was formulated in the 1960s and the first commercial VR tools appeared in the late 1980s. For this reason, during the last 20 years, hundreds of researchers have explored the processes, effects, and applications of this technology, producing thousands of scientific papers. What is the outcome of this significant research work? This paper aims to answer this question by exploring the existing research corpus in the field using advanced scientometric techniques. We collected all the existing articles about VR in the Web of Science Core Collection scientific database, and the resultant dataset contained 21,667 records for VR and 9,944 for augmented reality (AR). Each bibliographic record contained various fields, such as author, title, abstract, country, and all the references (needed for the citation analysis). The network and cluster analysis of the literature showed a composite panorama characterized by changes and evolutions over time. Indeed, while until 5 years ago the main publication media on VR comprised both conference proceedings and journals, more recently journals constitute the main medium of communication. Similarly, while computer science was at first the leading research field, clinical areas have since grown, as has the number of countries involved in VR research. The present work discusses the evolution of and changes over time in the use of VR in the main areas of application, with an emphasis on VR's expected future capacities, growth, and challenges.
We conclude by considering the disruptive contribution that VR/AR/MRITF will be able to make to scientific fields, as well as to human communication and interaction – as already happened with the advent of mobile phones – by increasing the use and development of scientific applications (e.g., in clinical areas) and by modifying social communication and interaction among people.


In the last 5 years, virtual reality (VR) and augmented reality (AR) have attracted the interest of investors and the general public, especially after Mark Zuckerberg bought Oculus for two billion dollars ( Luckerson, 2014 ; Castelvecchi, 2016 ). Currently, many other companies, such as Sony, Samsung, HTC, and Google, are making huge investments in VR and AR ( Korolov, 2014 ; Ebert, 2015 ; Castelvecchi, 2016 ). However, while VR has been used in research for more than 25 years – and there are now thousands of papers and many researchers in the field, comprising a strong, interdisciplinary community – AR has a more recent application history ( Burdea and Coiffet, 2003 ; Kim, 2005 ; Bohil et al., 2011 ; Cipresso and Serino, 2014 ; Wexelblat, 2014 ). The study of VR was initiated in the computer graphics field and has been extended to several disciplines ( Sutherland, 1965 , 1968 ; Mazuryk and Gervautz, 1996 ; Choi et al., 2015 ). Currently, videogames supported by VR tools are more popular than in the past, and they represent valuable work-related tools for neuroscientists, psychologists, biologists, and other researchers as well. Navigation studies, for example, involve complex experiments that can be conducted in a laboratory using VR, whereas without VR researchers would have to go directly into the field, possibly with limited scope for intervention. The importance of navigation studies for the functional understanding of human memory in dementia has long been a topic of significant interest, and, in 2014, the Nobel Prize in Physiology or Medicine was awarded to John M. O'Keefe, May-Britt Moser, and Edvard I. Moser for their discoveries of nerve cells in the brain that enable a sense of place and navigation. Journals and magazines have extended this knowledge by writing about "the brain's GPS," which gives a clear idea of the mechanism.
A huge number of studies have been conducted in clinical settings using VR ( Bohil et al., 2011 ; Serino et al., 2014 ), and Nobel Prize winner Edvard I. Moser has commented on the use of VR ( Minderer et al., 2016 ), highlighting its importance for research and clinical practice. Moreover, the availability of free tools for VR experimental and computational use has made the technology easy to access in any field ( Riva et al., 2011 ; Cipresso, 2015 ; Brown and Green, 2016 ; Cipresso et al., 2016 ).

Augmented reality is a more recent technology than VR and shows an interdisciplinary application framework in which, nowadays, education and learning seem to be the most active fields of research. Indeed, AR can support learning, for example by improving content understanding, memory retention, and learning motivation. However, while VR benefits from clear and well-defined fields of application and research areas, AR is still emerging on the scientific scene.

In this article, we present a systematic and computational analysis of the emerging interdisciplinary VR and AR fields in terms of various co-citation networks in order to explore the evolution of the intellectual structure of this knowledge domain over time.

Virtual Reality Concepts and Features

The concept of VR can be traced to the mid-1960s, when Ivan Sutherland, in a pivotal manuscript, described VR as a window through which a user perceives a virtual world that looks, feels, and sounds real, and in which the user can act realistically ( Sutherland, 1965 ).

Since then, and in accordance with the application area, several definitions have been formulated: for example, Fuchs and Bishop (1992) defined VR as "real-time interactive graphics with 3D models, combined with a display technology that gives the user the immersion in the model world and direct manipulation" ( Fuchs and Bishop, 1992 ); Gigante (1993) described VR as "The illusion of participation in a synthetic environment rather than external observation of such an environment. VR relies on a 3D, stereoscopic head-tracker displays, hand/body tracking and binaural sound. VR is an immersive, multi-sensory experience" ( Gigante, 1993 ); and "Virtual reality refers to immersive, interactive, multi-sensory, viewer-centered, 3D computer generated environments and the combination of technologies required building environments" ( Cruz-Neira, 1993 ).

As we can see, these definitions, although different, highlight three common features of VR systems: immersion, the perception of being present in an environment, and interaction with that environment ( Biocca, 1997 ; Lombard and Ditton, 1997 ; Loomis et al., 1999 ; Heeter, 2000 ; Biocca et al., 2001 ; Bailenson et al., 2006 ; Skalski and Tamborini, 2007 ; Andersen and Thorpe, 2009 ; Slater, 2009 ; Sundar et al., 2010 ). Specifically, immersion concerns the number of senses stimulated, the interactions, and how closely the stimuli used to simulate the environment resemble reality. This feature can depend on the properties of the technological system used to isolate the user from reality ( Slater, 2009 ).

Higher or lower degrees of immersion can depend on the type of VR system provided to the user, of which three can be distinguished:

  • Non-immersive systems are the simplest and cheapest type of VR application; they use desktop displays to reproduce images of the world.
  • Immersive systems provide a complete simulated experience through several sensory output devices, such as head-mounted displays (HMDs) that enhance the stereoscopic view of the environment as the user's head moves, as well as audio and haptic devices.
  • Semi-immersive systems, such as Fish Tank VR, fall between the two above. They provide a stereo image of a three-dimensional (3D) scene viewed on a monitor using a perspective projection coupled to the head position of the observer ( Ware et al., 1993 ).

More immersive systems have been shown to provide an experience closer to reality, giving the user the illusion of technological non-mediation and the feeling of "being in," or being present in, the virtual environment ( Lombard and Ditton, 1997 ). Furthermore, compared with the other two types, highly immersive systems can add several sensory outputs so that interactions and actions are perceived as real ( Loomis et al., 1999 ; Heeter, 2000 ; Biocca et al., 2001 ).

Finally, the user's VR experience can be characterized by measuring levels of presence, realism, and reality. Presence is a complex psychological feeling of "being there" in VR that involves the sensation and perception of physical presence, as well as the possibility of interacting and reacting as if the user were in the real world ( Heeter, 1992 ). Similarly, the level of realism corresponds to the degree to which the stimuli and experience match the user's expectations ( Baños et al., 2000 , 2009 ). If the presented stimuli are similar to reality, the VR user's expectations will be congruent with real-world expectations, enhancing the VR experience. In the same way, the higher the degree of reality in the interaction with the virtual stimuli, the higher the level of realism of the user's behaviors ( Baños et al., 2000 , 2009 ).

From Virtual to Augmented Reality

Looking chronologically at VR and AR developments, the first 3D immersive simulator can be traced to 1962, when Morton Heilig created Sensorama, a simulated experience of a motorcycle ride through Brooklyn characterized by several sensory impressions, such as audio, olfactory, and haptic stimuli, including wind, to provide a realistic experience ( Heilig, 1962 ). In the same years, Ivan Sutherland developed The Ultimate Display, which, beyond sound, smell, and haptic feedback, included interactive graphics that Sensorama did not provide. Furthermore, Philco developed the first HMD, which, together with Sutherland's The Sword of Damocles, was able to update the virtual images by tracking the user's head position and orientation ( Sutherland, 1965 ). In the 1970s, the University of North Carolina realized GROPE, the first force-feedback system, and Myron Krueger created VIDEOPLACE, an artificial reality in which the users' body figures were captured by cameras and projected on a screen ( Krueger et al., 1985 ). In this way, two or more users could interact in the 2D virtual space. In 1982, the US Air Force created the first flight simulator of this kind, the Visually Coupled Airborne Systems Simulator (VCASS), in which the pilot, through an HMD, could control the pathway and the targets. Generally, the 1980s were the years in which the first commercial devices began to emerge: for example, in 1985 the VPL company commercialized the DataGlove, a sensor-equipped glove able to measure finger flexion, orientation, and position, and to identify hand gestures. Another example is the EyePhone, created in 1988 by the VPL company, an HMD system for completely immersing the user in a virtual world. At the end of the 1980s, Fake Space Labs created the Binocular Omni-Orientation Monitor (BOOM), a complex system composed of a stereoscopic display device, providing a broad, moving virtual environment, and a mechanical tracking arm.
Furthermore, BOOM offered a more stable image and responded to movements more quickly than HMD devices. Thanks to BOOM and the DataGlove, the NASA Ames Research Center developed the Virtual Wind Tunnel in order to research and manipulate airflow around a virtual airplane or spaceship. In 1992, the Electronic Visualization Laboratory of the University of Illinois created the CAVE Automatic Virtual Environment, an immersive VR system composed of projectors directed at three or more walls of a room.

More recently, many videogame companies have improved the development and quality of VR devices, like the Oculus Rift or HTC Vive, which provide a wider field of view and lower latency. In addition, current HMD devices can now be combined with other tracking systems, such as eye-tracking systems (FOVE) and motion and orientation sensors (e.g., Razer Hydra, Oculus Touch, or HTC Vive).

Simultaneously, at the beginning of the 1990s, the Boeing Corporation created the first prototype AR system, used to show employees how to set up a wiring tool ( Carmigniani et al., 2011 ). At the same time, Rosenberg and Feiner developed an AR fixture for maintenance assistance, showing that operator performance was enhanced by adding virtual information to the fixture to be repaired ( Rosenberg, 1993 ). In 1993, Loomis and colleagues produced a GPS-based AR system to help blind users navigate by adding spatial audio information ( Loomis et al., 1998 ). Also in 1993, Julie Martin developed "Dancing in Cyberspace," an AR theater in which actors interacted with virtual objects in real time ( Cathy, 2011 ). A few years later, Feiner et al. (1997) developed the first Mobile AR System (MARS), able to add virtual information about tourist buildings ( Feiner et al., 1997 ). Since then, several applications have been developed: Thomas et al. (2000) created ARQuake, a mobile AR video game; in 2008, Wikitude was created, which, through the mobile camera, internet, and GPS, could add information about the user's environment ( Perry, 2008 ). In 2009, other AR applications, like AR Toolkit and SiteLens, were developed to add virtual information to the user's physical surroundings. In 2011, Total Immersion developed D'Fusion, an AR system for designing projects ( Maurugeon, 2011 ). Finally, Google Glass appeared in 2013 and the HoloLens in 2015, and their usability has begun to be tested in several fields of application.

Virtual Reality Technologies

Technologically, the devices used in virtual environments play an important role in the creation of successful virtual experiences. According to the literature, input and output devices can be distinguished ( Burdea et al., 1996 ; Burdea and Coiffet, 2003 ). Input devices are those that allow the user to communicate with the virtual environment; they range from a simple joystick or keyboard to a glove that captures finger movements or a tracker that captures postures. In more detail, the keyboard, mouse, trackball, and joystick are easy-to-use desktop input devices that allow the user to issue continuous and discrete commands or movements to the environment. Other input devices are tracking devices, such as bend-sensing gloves that capture hand movements, postures, and gestures, pinch gloves that detect finger movements, and trackers able to follow the user's movements in the physical world and translate them into the virtual environment.

Conversely, output devices allow the user to see, hear, smell, or touch everything that happens in the virtual environment. As mentioned above, visual devices span a wide range of possibilities, from the simplest and least immersive (a computer monitor) to the most immersive, such as VR glasses, helmets, HMDs, or CAVE systems.

Furthermore, auditory (speaker) and haptic output devices can stimulate the body's senses, providing a more realistic virtual experience. For example, haptic devices can simulate the feeling of touch and render forces to the user.

Virtual Reality Applications

Since its appearance, VR has been used in many fields, such as gaming ( Zyda, 2005 ; Meldrum et al., 2012 ), military training ( Alexander et al., 2017 ), architectural design ( Song et al., 2017 ), education ( Englund et al., 2017 ), learning and social skills training ( Schmidt et al., 2017 ), and simulation of surgical procedures ( Gallagher et al., 2005 ); assistance to the elderly and psychological treatment are other fields in which VR is growing strongly ( Freeman et al., 2017 ; Neri et al., 2017 ). A recent and extensive review by Slater and Sanchez-Vives (2016) reported the main evidence on VR applications, including weaknesses and advantages, in several research areas, such as science, education, training, and physical training, as well as social phenomena and moral behaviors, and noted that VR could be used in other fields, like travel, meetings, collaboration, industry, news, and entertainment. Furthermore, another review published this year by Freeman et al. (2017) focused on VR in mental health, showing the efficacy of VR in assessing and treating different psychological disorders, such as anxiety, schizophrenia, depression, and eating disorders.

VR offers many possibilities as a stimulus: it can replace real stimuli and recreate, with high realism, experiences that would be impossible in the real world. This is why VR is widely used in research on new ways of applying psychological treatment or training, for example, for problems arising from phobias (agoraphobia, fear of flying, etc.) ( Botella et al., 2017 ). It is also used to improve traditional motor rehabilitation systems ( Llorens et al., 2014 ; Borrego et al., 2016 ), developing games that make the tasks more engaging. In more detail, in psychological treatment, Virtual Reality Exposure Therapy (VRET) has shown its efficacy, allowing patients to gradually face feared stimuli or stressful situations in a safe environment where their psychological and physiological reactions can be controlled by the therapist ( Botella et al., 2017 ).

Augmented Reality Concept

Milgram and Kishino (1994) conceptualized the Reality-Virtuality Continuum, which takes into consideration four systems: the real environment, augmented reality (AR), augmented virtuality, and the virtual environment. AR can be defined as a newer technological system in which virtual objects are added to the real world in real time during the user's experience. According to Azuma et al. (2001) , an AR system should: (1) combine real and virtual objects in a real environment; (2) run interactively and in real time; and (3) register real and virtual objects with each other. Furthermore, even if AR experiences may seem different from VR ones, their quality can be considered in similar terms. Indeed, as in VR, the feeling of presence, the level of realism, and the degree of reality represent the main features that can be considered indicators of the quality of AR experiences. The more realistic the experience is perceived to be, and the greater the congruence between the user's expectations and the interaction inside the AR environment, the higher the perception of "being there" physically, cognitively, and emotionally. The feeling of presence, in both AR and VR environments, is important for producing behaviors like real ones ( Botella et al., 2005 ; Juan et al., 2005 ; Bretón-López et al., 2010 ; Wrzesien et al., 2013 ).

Augmented Reality Technologies

Technologically, AR systems, however varied, share three common components: a geospatial datum for the virtual object, such as a visual marker; a surface on which to project the virtual elements to the user; and adequate processing power for graphics, animation, and image merging, such as a PC and a monitor ( Carmigniani et al., 2011 ). To run, an AR system must also include a camera able to track the user's movement for merging the virtual objects, and a visual display, like glasses, through which the user can see the virtual objects overlaid on the physical world. To date, two display systems exist: video see-through (VST) and optical see-through (OST) AR systems ( Botella et al., 2005 ; Juan et al., 2005 , 2007 ). The first presents virtual objects to the user by capturing the real objects/scenes with a camera, overlaying the virtual objects, and projecting the result on a video display or monitor, while the second merges the virtual objects onto a transparent surface, like glasses, through which the user sees the added elements. The main difference between the two systems is latency: an OST system may require more time to display the virtual objects than a VST system, generating a time lag between the user's actions and the system's detection of them.

Augmented Reality Applications

Although AR is a more recent technology than VR, it has been investigated and used in several research areas, such as architecture ( Lin and Hsu, 2017 ), maintenance ( Schwald and De Laval, 2003 ), entertainment ( Ozbek et al., 2004 ), education ( Nincarean et al., 2013 ; Bacca et al., 2014 ; Akçayır and Akçayır, 2017 ), medicine ( De Buck et al., 2005 ), and psychological treatments ( Juan et al., 2005 ; Botella et al., 2005 , 2010 ; Bretón-López et al., 2010 ; Wrzesien et al., 2011a , b , 2013 ; see the review by Chicchi Giglioli et al., 2015 ). In more detail, several AR applications have been developed for education in the last few years, showing the positive effects of this technology in supporting learning, such as improved content understanding and memory retention, as well as greater learning motivation ( Radu, 2012 , 2014 ). For example, Ibáñez et al. (2014) developed an AR application for learning electromagnetism concepts, in which students could use AR batteries, magnets, and cables on real surfaces, with the system giving real-time feedback on the correctness of their performance, thereby improving academic success and motivation ( Di Serio et al., 2013 ). More fundamentally, AR systems allow students to learn by visualizing and acting on complex phenomena that they traditionally study theoretically, without the possibility of seeing and testing them in the real world ( Chien et al., 2010 ; Chen et al., 2011 ).

In psychological health as well, research on AR is increasing, showing its efficacy above all in the treatment of psychological disorders (see the reviews by Baus and Bouchard, 2014 ; Chicchi Giglioli et al., 2015 ). For example, in the treatment of anxiety disorders, like phobias, AR exposure therapy (ARET) has shown its efficacy in single-session treatments, with the positive impact maintained at follow-ups 1 or 3 months later. Like VRET, ARET provides a safe and ecological environment where any kind of stimulus is possible, allowing the therapist to keep control over the situation experienced by the patient and to gradually generate situations of fear or stress. Indeed, in situations of fear, like phobias of small animals, AR applications make it possible, in accordance with the patient's anxiety, to gradually expose the patient to the feared animals, adding new animals during the session, enlarging them, or increasing their speed. Various studies have shown that AR is able to activate the patient's anxiety at the beginning of the session, which then decreases after 1 h of exposure. After the session, patients were not only better able to manage their fear and anxiety of the animals, but were also able to approach, interact with, and even kill the real feared animals.

Materials and Methods

Data collection.

The input data for the analyses were retrieved from the Web of Science Core Collection scientific database ( Falagas et al., 2008 ); the search terms used were "Virtual Reality" and "Augmented Reality," covering papers published over the whole timespan of the database.

The Web of Science Core Collection is composed of the following indexes:

  • Science Citation Index Expanded (SCI-EXPANDED), 1970-present
  • Social Sciences Citation Index (SSCI), 1970-present
  • Arts and Humanities Citation Index (A&HCI), 1975-present
  • Conference Proceedings Citation Index – Science (CPCI-S), 1990-present
  • Conference Proceedings Citation Index – Social Science & Humanities (CPCI-SSH), 1990-present
  • Book Citation Index – Science (BKCI-S), 2009-present
  • Book Citation Index – Social Sciences & Humanities (BKCI-SSH), 2009-present
  • Emerging Sources Citation Index (ESCI), 2015-present
  • Current Chemical Reactions (CCR-EXPANDED), 2009-present (includes Institut National de la Propriete Industrielle structure data back to 1840)
  • Index Chemicus (IC), 2009-present

The resultant dataset contained a total of 21,667 records for VR and 9,944 records for AR. Each bibliographic record contained various fields, such as author, title, abstract, and all of the references (needed for the citation analysis). The tool used to visualize the networks was CiteSpace v.4.0.R5 SE (32 bit) ( Chen, 2006 ) under Java Runtime v.8 update 91 (build 1.8.0_91-b15). Statistical analyses were conducted using Stata MP-Parallel Edition, Release 14.0, StataCorp LP. Additional information can be found in Supplementary Data Sheet 1 .

The betweenness centrality of a node in a network measures the extent to which the node is part of paths that connect an arbitrary pair of nodes in the network ( Freeman, 1977 ; Brandes, 2001 ; Chen, 2006 ).
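To make the definition concrete, betweenness centrality can be computed on a small toy network. The sketch below uses the Python networkx library rather than the CiteSpace tool the authors used (an assumption for illustration only); the node names are hypothetical.

```python
import networkx as nx

# Toy co-citation network: "C" is the only bridge between the other
# nodes, so every shortest path between two leaves passes through it.
G = nx.Graph()
G.add_edges_from([("A", "C"), ("B", "C"), ("C", "D"), ("C", "E")])

bc = nx.betweenness_centrality(G)  # normalized to [0, 1] by default
print(bc["C"])  # 1.0: C lies on all shortest paths between leaf pairs
print(bc["A"])  # 0.0: no shortest path passes through a leaf
```

A node such as "C" here plays the same role a pivotal reference plays in a co-citation network: removing it disconnects otherwise unrelated parts of the literature.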

Structural metrics include betweenness centrality, modularity, and silhouette. Temporal and hybrid metrics include citation burstness and novelty. All the algorithms are detailed in Chen et al. (2010) .
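Of the structural metrics, modularity scores how well a proposed division of the network into clusters matches its edge structure. A minimal sketch, again using networkx on toy data (an assumption; the paper's own computations were done in CiteSpace):

```python
import networkx as nx
from networkx.algorithms.community import modularity

# Two triangles joined by a single bridge edge (c-x): a clear
# two-cluster structure.
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),   # cluster 1
    ("x", "y"), ("y", "z"), ("x", "z"),   # cluster 2
    ("c", "x"),                           # bridge
])

# Modularity is high when most edges fall within the proposed clusters.
q = modularity(G, [{"a", "b", "c"}, {"x", "y", "z"}])
print(round(q, 3))  # 0.357: well above 0, so the split is meaningful
```

A modularity near zero would indicate a partition no better than chance, which is why it is used to validate the clusters found in the co-citation networks.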

The analysis of the literature on VR shows a complex panorama. At first sight, according to the document-type statistics from the Web of Science (WoS), proceedings papers were used extensively as research outcomes, comprising almost 48% of the total (10,392 proceedings), with a similar number of articles on the subject, amounting to about 47% of the total (10,199 articles). However, if we consider only the last 5 years (7,755 records, about 36% of the total), the situation changes, with about 57% articles (4,445) and about 33% proceedings (2,578). Thus, it is clear that the VR field has changed in areas other than the technological level.

Regarding subject categories, nodes and edges were computed from co-occurring subject categories in the Web of Science "Category" field across all the articles.
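A minimal sketch of how such a co-occurrence network can be built from per-record category lists (toy data, not the actual WoS corpus; the deduplication step mirrors the rule, also applied to the country networks, that repeated occurrences within one record count once):

```python
from collections import Counter
from itertools import combinations

# Toy records: each paper's list of WoS subject categories.
records = [
    ["Computer Science", "Engineering"],
    ["Computer Science", "Neurosciences", "Computer Science"],  # repeat
    ["Computer Science", "Engineering"],
]

edges = Counter()
for cats in records:
    # set() collapses repeated categories within a single record
    for pair in combinations(sorted(set(cats)), 2):
        edges[pair] += 1  # edge weight = number of records co-mentioning the pair

print(edges[("Computer Science", "Engineering")])    # 2
print(edges[("Computer Science", "Neurosciences")])  # 1
```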

According to the subject category statistics from the WoS, computer science is the leading category, followed by engineering; together, they account for 15,341 articles, about 71% of the total production. However, if we consider just the last 5 years, these categories reach only about 55%, with a total of 4,284 articles (Table 1 and Figure 1).

Category statistics from the WoS for the entire period and the last 5 years.


Category from the WoS: network for the last 5 years.

The evidence is very interesting, since it highlights that VR is thriving as a technology, with huge interest in its hardware and software components. With respect to the past, however, we are witnessing increasing numbers of applications, especially in the medical area. In particular, note the inclusion of the rehabilitation and clinical neurology categories in the top-10 list (about 10% of total production in the last 5 years). It also is interesting that neuroscience and neurology, considered together, have increased from about 12% to about 18.6% over the last 5 years. By contrast, historically strong areas, such as automation and control systems, imaging science and photographic technology, and robotics, which had accounted for about 14.5% of all articles ever produced, were not even in the top 10 for the last 5 years, with each accounting for less than 4%.

For the country analysis, nodes and edges are computed as networks of co-authors’ countries. Multiple occurrences of a country in the same paper are counted once.

The countries most involved in VR research account for about 47% of the total output (10,200 articles altogether). Of these, the United States, China, England, and Germany published 4,921, 2,384, 1,497, and 1,398 articles, respectively. The situation remains the same if we look at the articles published over the last 5 years. However, VR contributions also came from all over the globe, with Japan, Canada, Italy, France, Spain, South Korea, and the Netherlands taking positions of prominence, as shown in Figure 2.


Country network (node dimension represents centrality).

Network analysis was conducted to calculate and represent the centrality index (Freeman, 1977; Brandes, 2001), i.e., the dimension of each node in Figure 2. The top-ranked country, with a centrality index of 0.26, was the United States (2011), and England was second, with a centrality index of 0.25. The third, fourth, and fifth countries were Germany, Italy, and Australia, with centrality indices of 0.15, 0.15, and 0.14, respectively.

For the institution analysis, nodes and edges are computed as networks of co-authors’ institutions (Figure 3).


Network of institutions: the dimensions of the nodes represent centrality.

The top-level institutions in VR were in the United States, where three universities ranked as the top three in the world for published articles: the University of Illinois (159), the University of Southern California (147), and the University of Washington (146). The United States also had the eighth-ranked university, Iowa State University (116). The second country in the ranking was Canada, with the University of Toronto ranked fifth with 125 articles and McGill University ranked 10th with 103 articles.

Other countries in the top-10 list were the Netherlands, with the Delft University of Technology ranked fourth with 129 articles; Italy, with IRCCS Istituto Auxologico Italiano ranked sixth with 125 published articles (the same number of publications as the fifth-ranked institution); England, ranked seventh with 125 articles from the University of London’s Imperial College of Science, Technology, and Medicine; and China, with the Chinese Academy of Sciences ranked ninth with 104 publications. Italy’s Istituto Auxologico Italiano was the only non-university institution in the top-10 list for VR research (Figure 3).

For the journal analysis, nodes and edges are computed as journal co-citation networks within the corresponding field.
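The construction behind such a network can be sketched as a simple co-occurrence count: two journals are linked whenever they appear together in the reference list of the same citing article, and the link weight is the number of articles in which this happens. The sketch below uses hypothetical reference lists; the journal names are only for illustration and the counts are not data from this study.

```python
from itertools import combinations
from collections import Counter

# Hypothetical reference lists: for each citing article, the journal of
# every work it cites. Two journals are co-cited when they appear in
# the same list; the edge weight counts how many articles co-cite them.
ref_lists = [
    ["Presence", "Cyberpsychol Behav", "IEEE Comput Graph"],
    ["Presence", "Cyberpsychol Behav"],
    ["Arch Phys Med Rehab", "Cyberpsychol Behav"],
]

edges = Counter()
for refs in ref_lists:
    # Sorting makes the journal pair a canonical, order-independent key.
    for pair in combinations(sorted(set(refs)), 2):
        edges[pair] += 1

print(edges[("Cyberpsychol Behav", "Presence")])  # → 2
```

Applied to the full WoS reference data, this weighted edge list is what the co-citation figures visualize, with betweenness centrality then computed on the resulting graph.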

The top-ranked journals by citations in VR are Presence: Teleoperators & Virtual Environments, with 2,689 citations, and CyberPsychology & Behavior (Cyberpsychol BEHAV), with 1,884 citations. Looking at the last 5 years, the former increased its citations, but the latter had a far more significant increase, from about 70% to about 90%, i.e., from 1,029 to 1,147 citations.

The next two journals, IEEE Computer Graphics and Applications (IEEE Comput Graph) and Advanced Health Telematics and Telemedicine (St HEAL T), were both left out of the top-10 list based on the last 5 years. The data for the last 5 years also brought three journals into the top-10 list, Experimental Brain Research (Exp BRAIN RES) (625 citations), Archives of Physical Medicine and Rehabilitation (Arch PHYS MED REHAB) (622 citations), and PLoS ONE (619 citations), which highlights the categories of rehabilitation, clinical neurology, neuroscience, and neurology. The journal co-citation analysis is reported in Figure 4, which clearly shows four distinct clusters.


Co-citation network of journals: the dimensions of the nodes represent centrality. A full list of official abbreviations of WoS journals is available online.

Network analysis was conducted to calculate and to represent the centrality index, i.e., the dimensions of the nodes in Figure 4. The top-ranked item by centrality was Cyberpsychol BEHAV, with a centrality index of 0.29. The second-ranked item was Arch PHYS MED REHAB, with a centrality index of 0.23. The third was Behaviour Research and Therapy (Behav RES THER), with a centrality index of 0.15. The fourth was BRAIN, with a centrality index of 0.14. The fifth was Exp BRAIN RES, with a centrality index of 0.11.

Who’s Who in VR Research

Authors are the heart and brain of research; their role is to define the past, present, and future of a discipline and to make the breakthroughs from which new ideas arise (Figure 5).


Network of authors’ publication counts: the dimensions of the nodes represent the centrality index, and the dimensions of the characters represent the author’s rank.

Virtual reality research is very young and changing with time, but the top-10 authors in this field have made fundamental contributions as pioneers of VR, taking it beyond mere technological development. The purpose of the following highlights is not to rank researchers; rather, it is to identify the most active researchers in order to understand where the field is going and how they plan for it to get there.

The top-ranked author is Riva G, with 180 publications. The second-ranked author is Rizzo A, with 101 publications. The third is Darzi A, with 97 publications. The fourth is Aggarwal R, with 94 publications. The six authors following these four are Slater M, Alcaniz M, Botella C, Wiederhold BK, Kim SI, and Gutierrez-Maldonado J with 90, 90, 85, 75, 59, and 54 publications, respectively (Figure 6).


Authors’ co-citation network: the dimensions of the nodes represent the centrality index, and the dimensions of the characters represent the author’s rank. The 10 authors on the top-10 list can be considered the pioneers of VR research.

Considering the last 5 years, the situation remains similar, with three new entries in the top-10 list, i.e., Muhlberger A, Cipresso P, and Ahmed K ranked 7th, 8th, and 10th, respectively.

The network of authors’ publication counts shows the most active authors in VR research. Another analysis relevant to our focus is to identify the most-cited authors in the field.

For this purpose, the authors’ co-citation analysis highlights authors in terms of their impact on the literature over the entire time span of the field (White and Griffith, 1981; González-Teruel et al., 2015; Bu et al., 2016). The idea is to focus on the dynamic nature of the community of authors who contribute to the research.

Normally, authors with higher numbers of citations tend to be the scholars who drive the fundamental research and who make the most meaningful impacts on the evolution and development of the field. In the following, we identify the most-cited pioneers in the field of VR research.

The top-ranked author by citation count is Gallagher (2001), with 694 citations. Second is Seymour (2004), with 668 citations. Third is Slater (1999), with 649 citations. Fourth is Grantcharov (2003), with 563 citations. Fifth is Riva (1999), with 546 citations. Sixth is Aggarwal (2006), with 505 citations. Seventh is Satava (1994), with 477 citations. Eighth is Witmer (2002), with 454 citations. Ninth is Rothbaum (1996), with 448 citations. Tenth is Cruz-neira (1995), with 416 citations.

Citation Network and Cluster Analyses for VR

Another analysis that can be used is the analysis of document co-citation, which allows us to focus on the highly cited documents that generally are also the most influential in the domain (Small, 1973; González-Teruel et al., 2015; Orosz et al., 2016).

The top-ranked article by citation counts is Seymour (2002) in Cluster #0, with 317 citations. The second is Grantcharov (2004) in Cluster #0, with 286 citations. The third is Holden (2005) in Cluster #2, with 179 citations. The fourth is Gallagher et al. (2005) in Cluster #0, with 171 citations. The fifth is Ahlberg (2007) in Cluster #0, with 142 citations. The sixth is Parsons (2008) in Cluster #4, with 136 citations. The seventh is Powers (2008) in Cluster #4, with 134 citations. The eighth is Aggarwal (2007) in Cluster #0, with 121 citations. The ninth is Reznick (2006) in Cluster #0, with 121 citations. The 10th is Munz (2004) in Cluster #0, with 117 citations.

The network of document co-citations is visually complex (Figure 7) because it includes thousands of articles and the links among them. However, this analysis is very important because it can be used to identify the possible conglomerates of knowledge in the area, which is essential for a deep understanding of the field. For this purpose, a cluster analysis was conducted (Chen et al., 2010; González-Teruel et al., 2015; Klavans and Boyack, 2015). Figure 8 shows the clusters, which are identified with the two algorithms in Table 2.


Network of document co-citations: the dimensions of the nodes represent centrality, the dimensions of the characters represent the rank of the article, and the numbers represent the strengths of the links. It is possible to identify four historical phases (colors: blue, green, yellow, and red) from past VR research to the current research.


Document co-citation network by cluster: the dimensions of the nodes represent centrality, the dimensions of the characters represent the rank of the article, and the red text reports the name of each cluster with a short description produced with the mutual-information algorithm; the clusters are identified with colored polygons.

Cluster ID and silhouettes as identified with two algorithms (Chen et al., 2010).

The identified clusters highlight distinct parts of the literature of VR research, making the interdisciplinary nature of this field clear and visible. However, the dynamics of the past, present, and future of VR research are not yet clear from the clusters alone. We therefore analyzed the relationships between these clusters and the temporal dimension of each article. The results are synthesized in Figure 9. It is clear that cluster #0 (laparoscopic skill), cluster #2 (gaming and rehabilitation), cluster #4 (therapy), and cluster #14 (surgery) are the most popular areas of VR research (see Figure 9 and Table 2 to identify the clusters). From Figure 9, it also is possible to identify the first phase of laparoscopic skill (cluster #6) and therapy (cluster #7). More generally, it is possible to identify four historical phases (colors: blue, green, yellow, and red) from past VR research to the current research.


Network of document co-citation: the dimensions of the nodes represent centrality, the dimensions of the characters represent the rank of the article, and the red text on the right-hand side reports the number of each cluster, as in Table 2, with a short description extracted accordingly.

We identified the top 486 references with the most citations by using the citation-burst algorithm. A citation burst is an indicator of a highly active area of research: it detects a burst event, which can last for multiple years or a single year, and provides evidence that a particular publication is associated with a surge of citations. The burst detection was based on Kleinberg’s algorithm (Kleinberg, 2002, 2003). The top-ranked document by bursts is Seymour (2002) in Cluster #0, with a burst of 88.93. The second is Grantcharov (2004) in Cluster #0, with a burst of 51.40. The third is Saposnik (2010) in Cluster #2, with a burst of 40.84. The fourth is Rothbaum (1995) in Cluster #7, with a burst of 38.94. The fifth is Holden (2005) in Cluster #2, with a burst of 37.52. The sixth is Scott (2000) in Cluster #0, with a burst of 33.39. The seventh is Saposnik (2011) in Cluster #2, with a burst of 33.33. The eighth is Burdea et al. (1996) in Cluster #3, with a burst of 32.42. The ninth is Burdea and Coiffet (2003) in Cluster #22, with a burst of 31.30. The 10th is Taffinder (1998) in Cluster #6, with a burst of 30.96 (Table 3).

Cluster ID and references of burst article.
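Kleinberg’s two-state model behind burst detection can be sketched as follows: a base state q0 emits citations at the overall rate p0, a burst state q1 at an elevated rate s·p0, and switching up to q1 carries a cost of γ·ln(n); the cheapest state sequence, found by Viterbi decoding, marks the burst years. The code below is a simplified sketch of the batched variant with hypothetical yearly counts, not the exact CiteSpace implementation or the data behind Table 3.

```python
import math

def kleinberg_bursts(counts, totals, s=2.0, gamma=1.0):
    """Two-state burst detection in the spirit of Kleinberg (2002).

    counts[t] of totals[t] events in year t cite the target document.
    State q0 emits at the base rate p0, state q1 at the elevated rate
    s * p0; switching up to q1 costs gamma * ln(n). Returns the
    indices of the years assigned to the burst state."""
    n = len(counts)
    p0 = sum(counts) / sum(totals)
    p1 = min(s * p0, 0.9999)
    trans = gamma * math.log(n)

    def cost(p, r, d):
        # Negative log-likelihood of r out of d events at rate p
        # (the binomial coefficient is identical in both states).
        return -(r * math.log(p) + (d - r) * math.log(1 - p))

    # Viterbi decoding over the two states.
    best = [cost(p0, counts[0], totals[0]),
            trans + cost(p1, counts[0], totals[0])]
    back = []
    for t in range(1, n):
        c0 = min((best[0], 0), (best[1], 1))          # dropping to q0 is free
        c1 = min((best[0] + trans, 0), (best[1], 1))  # rising to q1 costs trans
        back.append((c0[1], c1[1]))
        best = [c0[0] + cost(p0, counts[t], totals[t]),
                c1[0] + cost(p1, counts[t], totals[t])]
    state = 0 if best[0] <= best[1] else 1
    states = [state]
    for t in range(n - 2, -1, -1):       # backtrack the cheapest sequence
        state = back[t][state]
        states.append(state)
    states.reverse()
    return [t for t, st in enumerate(states) if st == 1]

# Hypothetical yearly (citations to one paper, citations in the field):
# a quiet start, a two-year surge, then a return to the base rate.
print(kleinberg_bursts([1, 1, 10, 12, 1, 1], [100] * 6))  # → [2, 3]
```

The transition cost is what keeps isolated one-year blips from being flagged: a year must deviate enough from the base rate to pay for entering the burst state, which is why burst strength serves here as a marker of sustained surges of attention.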

Citation Network and Cluster Analyses for AR

Looking at the augmented reality scenario, the top-ranked item by citation counts is Azuma (1997) in Cluster #0, with 231 citations. The second is Azuma et al. (2001) in Cluster #0, with 220 citations. The third is Van Krevelen (2010) in Cluster #5, with 207 citations. The fourth is Lowe (2004) in Cluster #1, with 157 citations. The fifth is Wu (2013) in Cluster #4, with 144 citations. The sixth is Dunleavy (2009) in Cluster #4, with 122 citations. The seventh is Zhou (2008) in Cluster #5, with 118 citations. The eighth is Bay (2008) in Cluster #1, with 117 citations. The ninth is Newcombe (2011) in Cluster #1, with 109 citations. The 10th is Carmigniani et al. (2011) in Cluster #5, with 104 citations.

The network of document co-citations is visually complex (Figure 10) because it includes thousands of articles and the links among them. However, this analysis is very important because it can be used to identify the possible conglomerates of knowledge in the area, which is essential for a deep understanding of the field. For this purpose, a cluster analysis was conducted (Chen et al., 2010; González-Teruel et al., 2015; Klavans and Boyack, 2015). Figure 11 shows the clusters, which are identified with the two algorithms in Table 3.


Network of document co-citations: the dimensions of the nodes represent centrality, the dimensions of the characters represent the rank of the article, and the numbers represent the strengths of the links. It is possible to identify four historical phases (colors: blue, green, yellow, and red) from past AR research to the current research.


The identified clusters highlight distinct parts of the literature of AR research, making the interdisciplinary nature of this field clear and visible. However, the dynamics of the past, present, and future of AR research are not yet clear from the clusters alone. We therefore analyzed the relationships between these clusters and the temporal dimension of each article. The results are synthesized in Figure 12. It is clear that cluster #1 (tracking), cluster #4 (education), and cluster #5 (virtual city environment) are the current areas of AR research (see Figure 12 and Table 3 to identify the clusters). It is possible to identify four historical phases (colors: blue, green, yellow, and red) from past AR research to the current research.


We identified the top 394 references with the most citations by using the citation-burst algorithm, again based on Kleinberg’s algorithm (Kleinberg, 2002, 2003). The top-ranked document by bursts is Azuma (1997) in Cluster #0, with a burst of 101.64. The second is Azuma et al. (2001) in Cluster #0, with a burst of 84.23. The third is Lowe (2004) in Cluster #1, with a burst of 64.07. The fourth is Van Krevelen (2010) in Cluster #5, with a burst of 50.99. The fifth is Wu (2013) in Cluster #4, with a burst of 47.23. The sixth is Hartley (2000) in Cluster #0, with a burst of 37.71. The seventh is Dunleavy (2009) in Cluster #4, with a burst of 33.22. The eighth is Kato (1999) in Cluster #0, with a burst of 32.16. The ninth is Newcombe (2011) in Cluster #1, with a burst of 29.72. The 10th is Feiner (1993) in Cluster #8, with a burst of 29.46 (Table 4).

Our findings have profound implications for two reasons. First, the present work highlighted the evolution and development of VR and AR research and provided a clear perspective based on solid data and computational analyses. Second, our findings on VR made it profoundly clear that the clinical dimension is one of the most investigated ever and seems to be increasing in both quantitative and qualitative terms, but the field also includes technological development and articles in computer science, engineering, and allied sciences.

Figure 9 clarifies the past, present, and future of VR research. The outset of VR research brought clearly identifiable developments in interfaces for children and medicine, routine use and behavioral assessment, special effects, systems perspectives, and tutorials. This pioneering era evolved into what we can identify as the development era, the period in which VR was used in experiments associated with new technological impulses. Not surprisingly, this was exactly concomitant with the new-economy era, in which significant investments were made in information technology; it also was the era of the so-called ‘dot-com bubble’ of the late 1990s. The confluence of pioneering techniques into ergonomic studies within this development era produced the first effective clinical systems for surgery, telemedicine, and human spatial navigation, and the first phase of the development of therapy and laparoscopic skills. With the new millennium, VR research switched strongly toward what we can call the clinical-VR era, with its strong emphasis on rehabilitation, neurosurgery, and a new phase of therapy and laparoscopic skills. The number of applications and articles published in the last 5 years is in line with the new technological developments that we are experiencing at the hardware level, for example with the many new HMDs, and at the software level with an increasing number of independent programmers and VR communities.

Finally, Figure 12 identifies clusters of the literature of AR research, making the interdisciplinary nature of this field clear and visible. The dynamics of the past, present, and future of AR research are not yet fully clear, but analyzing the relationships between these clusters and the temporal dimension of each article shows that tracking, education, and virtual city environments are the current areas of AR research. AR is a new technology that is showing its efficacy in different research fields, providing a novel way to gather behavioral data and to support learning, training, and clinical treatments.

Looking at the scientific literature of the last few years, it might appear that most developments in VR and AR studies have focused on clinical aspects. However, the reality is more complex, so this perception should be clarified. Although researchers publish studies on the use of VR in clinical settings, each study depends on the technologies available. Industrial development in VR and AR has changed a great deal in the last 10 years. In the past, development involved mainly hardware solutions, whereas nowadays the main efforts pertain to software. Hardware has become a commodity that is often available at low cost, while software needs to be customized each time, for each experiment, which requires huge development efforts. Researchers in AR and VR today need to be able to adapt software in their own labs.

Virtual reality and AR developments in this new clinical era rely on computer science, and vice versa. The future of VR and AR is becoming more technological than before, and each day new solutions and products come to market. From both software and hardware perspectives, the future of AR and VR depends on huge innovations in all fields. The gap between the past and the future of AR and VR research is the shift from “realism,” the key aspect in the past, to “interaction,” the key aspect now. The first 30 years of VR and AR consisted of continuous research on better resolution and improved perception. Now that researchers have achieved great resolution, they need to focus on making VR as realistic as possible, which is not simple: a real experience implies realistic interaction, not just great resolution. Interactions can be improved in countless ways through new developments at the hardware and software levels.

Interaction in AR and VR is going to be “embodied,” with implications for neuroscientists who are thinking about new solutions to be implemented into current systems (Blanke et al., 2015; Riva, 2018; Riva et al., 2018). For example, the use of the hands with a contactless device (i.e., without gloves) makes interaction in virtual environments more natural. The Leap Motion device allows one to use the hands in VR without gloves or markers. This simple, low-cost device allows VR users to interact with virtual objects and related environments in a naturalistic way. When technology is able to be transparent, users experience an increased sense of being in the virtual environment (the so-called sense of presence).

Other forms of interaction are possible and are being developed continuously: for example, tactile and haptic devices able to provide continuous feedback to users, intensifying their experience by adding components such as the feeling of touch and the physical weight of virtual objects through force feedback. Another low-cost technology that facilitates interaction is the motion-tracking system, such as the Microsoft Kinect. Such technology tracks the users’ bodies, allowing them to interact with virtual environments using body movements and gestures. Most HMDs use an embedded system to track HMD position and rotation, as well as controllers that are generally placed in the user’s hands. This tracking allows a great degree of interaction and improves the overall virtual experience.

A final emerging approach is the use of digital technologies to simulate not only the external world but also internal bodily signals (Azevedo et al., 2017; Riva et al., 2017): interoception, proprioception, and vestibular input. For example, Riva et al. (2017) recently introduced the concept of “sonoception,” a novel non-invasive technological paradigm based on wearable acoustic and vibrotactile transducers able to alter internal bodily signals. This approach allowed the development of an interoceptive stimulator that is able both to assess interoceptive time perception in clinical patients (Di Lernia et al., 2018b) and to enhance heart rate variability (the short-term vagally mediated component, rMSSD) through modulation of the subjects’ parasympathetic system (Di Lernia et al., 2018a).

In this scenario, it is clear that the future of VR and AR research is not just in clinical applications, although the implications for patients are huge. The continuous development of VR and AR technologies is the result of research in computer science, engineering, and allied sciences. There are three reasons why a “clinical era” emerged from our analyses. First, all clinical research on VR and AR also includes technological developments, and new technological discoveries are published in clinical or technological journals with clinical samples as the main subject. As noted in our research, the main journals that publish numerous articles on technological developments tested with both healthy subjects and patients include Presence: Teleoperators & Virtual Environments, Cyberpsychology & Behavior (Cyberpsychol BEHAV), and IEEE Computer Graphics and Applications (IEEE Comput Graph). Clearly, researchers in psychology, neuroscience, medicine, and the behavioral sciences in general have been investigating whether the technological developments of VR and AR are effective for users, indicating that clinical behavioral research has been incorporating large parts of computer science and engineering. A second aspect to consider is industrial development. Once a new technology is envisioned and created, it goes through a patent application; once the patent is filed, the new technology may be made available to the market, and eventually submitted for journal publication. Moreover, much VR and AR research that proposes the development of a technology moves directly from presenting a prototype to receiving the patent and introducing it to the market, without publishing the findings in a scientific paper. Hence, if a new technology has been developed for the industrial or consumer market rather than for clinical purposes, the research conducted to develop it may never be published in a scientific paper.
Although our manuscript considered published research, we must acknowledge the existence of many studies that have never been published at all. The third reason our analyses highlighted a “clinical era” is that the articles on VR and AR we considered come from the Web of Knowledge database, which is our source of references; in this article, we refer to “research” as that contained in this database. Of course, this is a limitation of our study, since several other databases are of great value to the scientific community, such as the IEEE Xplore Digital Library, the ACM Digital Library, and many others. Generally, the most important articles in the journals covered by these databases are also included in the Web of Knowledge database; hence, we are convinced that our study considered the top-level publications in computer science and engineering. Accordingly, we believe that this limitation is mitigated by the large number of articles referenced in our research.

Considering all these aspects, it is clear that clinical applications, behavioral aspects, and technological developments in VR and AR research are parts of a more complex situation than the old platforms used before the wide diffusion of HMDs and related solutions. We think that this work can provide a clearer vision for stakeholders, offering evidence of the current research frontiers and the challenges expected in the future, and highlighting all the connections and implications of the research in several fields, such as the clinical, behavioral, industrial, entertainment, and educational fields, among others.

Author Contributions

PC and GR conceived the idea. PC performed the data extraction and the computational analyses and wrote the first draft of the article. IG revised the introduction, adding important information. PC, IG, MR, and GR revised the article and approved the final version after important input to the article’s rationale.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The reviewer GC declared a shared affiliation, with no collaboration, with the authors PC and GR to the handling Editor at the time of the review.


Supplementary Material

The Supplementary Material for this article can be found online at:

  • Akçayır M., Akçayır G. (2017). Advantages and challenges associated with augmented reality for education: a systematic review of the literature. Educ. Res. Rev. 20, 1–11. 10.1016/j.edurev.2016.11.002
  • Alexander T., Westhoven M., Conradi J. (2017). “Virtual environments for competency-oriented education and training,” in Advances in Human Factors, Business Management, Training and Education (Berlin: Springer International Publishing), 23–29. 10.1007/978-3-319-42070-7_3
  • Andersen S. M., Thorpe J. S. (2009). An IF–THEN theory of personality: significant others and the relational self. J. Res. Pers. 43, 163–170. 10.1016/j.jrp.2008.12.040
  • Azevedo R. T., Bennett N., Bilicki A., Hooper J., Markopoulou F., Tsakiris M. (2017). The calming effect of a new wearable device during the anticipation of public speech. Sci. Rep. 7:2285. 10.1038/s41598-017-02274-2
  • Azuma R., Baillot Y., Behringer R., Feiner S., Julier S., MacIntyre B. (2001). Recent advances in augmented reality. IEEE Comput. Graph. Appl. 21, 34–47. 10.1109/38.963459
  • Bacca J., Baldiris S., Fabregat R., Graf S. (2014). Augmented reality trends in education: a systematic review of research and applications. J. Educ. Technol. Soc. 17, 133.
  • Bailenson J. N., Yee N., Merget D., Schroeder R. (2006). The effect of behavioral realism and form realism of real-time avatar faces on verbal disclosure, nonverbal disclosure, emotion recognition, and copresence in dyadic interaction. Presence 15, 359–372. 10.1162/pres.15.4.359
  • Baños R. M., Botella C., Garcia-Palacios A., Villa H., Perpiñá C., Alcaniz M. (2000). Presence and reality judgment in virtual environments: a unitary construct? Cyberpsychol. Behav. 3, 327–335. 10.1089/10949310050078760
  • Baños R., Botella C., García-Palacios A., Villa H., Perpiñá C., Gallardo M. (2009). Psychological variables and reality judgment in virtual environments: the roles of absorption and dissociation. Cyberpsychol. Behav. 2, 143–148. 10.1089/cpb.1999.2.143
  • Baus O., Bouchard S. (2014). Moving from virtual reality exposure-based therapy to augmented reality exposure-based therapy: a review. Front. Hum. Neurosci. 8:112. 10.3389/fnhum.2014.00112
  • Biocca F. (1997). The cyborg’s dilemma: progressive embodiment in virtual environments. J. Comput. Mediat. Commun. 3. 10.1111/j.1083-6101.1997
  • Biocca F., Harms C., Gregg J. (2001). “The networked minds measure of social presence: pilot test of the factor structure and concurrent validity,” in 4th Annual International Workshop on Presence, Philadelphia, PA, 1–9.
  • Blanke O., Slater M., Serino A. (2015). Behavioral, neural, and computational principles of bodily self-consciousness. Neuron 88, 145–166. 10.1016/j.neuron.2015.09.029
  • Bohil C. J., Alicea B., Biocca F. A. (2011). Virtual reality in neuroscience research and therapy. Nat. Rev. Neurosci. 12:752. 10.1038/nrn3122
  • Borrego A., Latorre J., Llorens R., Alcañiz M., Noé E. (2016). Feasibility of a walking virtual reality system for rehabilitation: objective and subjective parameters. J. Neuroeng. Rehabil. 13:68. 10.1186/s12984-016-0174-171
  • Botella C., Bretón-López J., Quero S., Baños R. M., García-Palacios A. (2010). Treating cockroach phobia with augmented reality. Behav. Ther. 41, 401–413. 10.1016/j.beth.2009.07.002
  • Botella C., Fernández-Álvarez J., Guillén V., García-Palacios A., Baños R. (2017). Recent progress in virtual reality exposure therapy for phobias: a systematic review. Curr. Psychiatry Rep. 19:42. 10.1007/s11920-017-0788-4
  • Botella C. M., Juan M. C., Baños R. M., Alcañiz M., Guillén V., Rey B. (2005). Mixing realities? An application of augmented reality for the treatment of cockroach phobia. Cyberpsychol. Behav. 8, 162–171. 10.1089/cpb.2005.8.162
  • Brandes U. (2001). A faster algorithm for betweenness centrality. J. Math. Sociol. 25, 163–177. 10.1080/0022250X.2001.9990249
  • Bretón-López J., Quero S., Botella C., García-Palacios A., Baños R. M., Alcañiz M. (2010). An augmented reality system validation for the treatment of cockroach phobia. Cyberpsychol. Behav. Soc. Netw. 13 705–710. 10.1089/cyber.2009.0170 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Brown A., Green T. (2016). Virtual reality: low-cost tools and resources for the classroom. TechTrends 60 517–519. 10.1007/s11528-016-0102-z [ CrossRef ] [ Google Scholar ]
  • Bu Y., Liu T. Y., Huang W. B. (2016). MACA: a modified author co-citation analysis method combined with general descriptive metadata of citations. Scientometrics 108 143–166. 10.1007/s11192-016-1959-5 [ CrossRef ] [ Google Scholar ]
  • Burdea G., Richard P., Coiffet P. (1996). Multimodal virtual reality: input-output devices, system integration, and human factors. Int. J. Hum. Compu. Interact. 8 5–24. 10.1080/10447319609526138 [ CrossRef ] [ Google Scholar ]
  • Burdea G. C., Coiffet P. (2003). Virtual Reality Technology Vol. 1 Hoboken, NJ: John Wiley & Sons. [ Google Scholar ]
  • Carmigniani J., Furht B., Anisetti M., Ceravolo P., Damiani E., Ivkovic M. (2011). Augmented reality technologies, systems and applications. Multimed. Tools Appl. 51 341–377. 10.1007/s11042-010-0660-6 [ CrossRef ] [ Google Scholar ]
  • Castelvecchi D. (2016). Low-cost headsets boost virtual reality’s lab appeal. Nature 533 153–154. 10.1038/533153a [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cathy (2011). The History of Augmented Reality. The Optical Vision Site. Available at: [ Google Scholar ]
  • Chen C. (2006). CiteSpace II: detecting and visualizing emerging trends and transient patterns in scientific literature. J. Assoc. Inform. Sci. Technol. 57 359–377. 10.1002/asi.20317 [ CrossRef ] [ Google Scholar ]
  • Chen C., Ibekwe-SanJuan F., Hou J. (2010). The structure and dynamics of cocitation clusters: a multipleperspective cocitation analysis. J. Assoc. Inform. Sci. Technol. 61 1386–1409. 10.1002/jez.b.22741 [ CrossRef ] [ Google Scholar ]
  • Chen Y. C., Chi H. L., Hung W. H., Kang S. C. (2011). Use of tangible and augmented reality models in engineering graphics courses. J. Prof. Issues Eng. Educ. Pract. 137 267–276. 10.1061/(ASCE)EI.1943-5541.0000078 [ CrossRef ] [ Google Scholar ]
  • Chicchi Giglioli I. A., Pallavicini F., Pedroli E., Serino S., Riva G. (2015). Augmented reality: a brand new challenge for the assessment and treatment of psychological disorders. Comput. Math. Methods Med. 2015 : 862942 . 10.1155/2015/862942 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Chien C. H., Chen C. H., Jeng T. S. (2010). “An interactive augmented reality system for learning anatomy structure,” in Proceedings of the International Multiconference of Engineers and Computer Scientists , Vol. 1 (Hong Kong: International Association of Engineers; ), 17–19. [ Google Scholar ]
  • Choi S., Jung K., Noh S. D. (2015). Virtual reality applications in manufacturing industries: past research, present findings, and future directions. Concurr. Eng. 23 40–63. 10.1177/1063293X14568814 [ CrossRef ] [ Google Scholar ]
  • Cipresso P. (2015). Modeling behavior dynamics using computational psychometrics within virtual worlds. Front. Psychol. 6 : 1725 . 10.3389/fpsyg.2015.01725 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cipresso P., Serino S. (2014). Virtual Reality: Technologies, Medical Applications and Challenges. Hauppauge, NY: Nova Science Publishers, Inc. [ Google Scholar ]
  • Cipresso P., Serino S., Riva G. (2016). Psychometric assessment and behavioral experiments using a free virtual reality platform and computational science. BMC Med. Inform. Decis. Mak. 16 : 37 . 10.1186/s12911-016-0276-5 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cruz-Neira C. (1993). “Virtual reality overview,” in SIGGRAPH 93 Course Notes 21st International Conference on Computer Graphics and Interactive Techniques, Orange County Convention Center , Orlando, FL. [ Google Scholar ]
  • De Buck S., Maes F., Ector J., Bogaert J., Dymarkowski S., Heidbuchel H., et al. (2005). An augmented reality system for patient-specific guidance of cardiac catheter ablation procedures. IEEE Trans. Med. Imaging 24 1512–1524. 10.1109/TMI.2005.857661 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Di Lernia D., Cipresso P., Pedroli E., Riva G. (2018a). Toward an embodied medicine: a portable device with programmable interoceptive stimulation for heart rate variability enhancement. Sensors (Basel) 18 : 2469 . 10.3390/s18082469 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Di Lernia D., Serino S., Pezzulo G., Pedroli E., Cipresso P., Riva G. (2018b). Feel the time. Time perception as a function of interoceptive processing. Front. Hum. Neurosci. 12 : 74 . 10.3389/fnhum.2018.00074 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Di Serio Á., Ibáñez M. B., Kloos C. D. (2013). Impact of an augmented reality system on students’ motivation for a visual art course. Comput. Educ. 68 586–596. 10.1016/j.compedu.2012.03.002 [ CrossRef ] [ Google Scholar ]
  • Ebert C. (2015). Looking into the future. IEEE Softw. 32 92–97. 10.1109/MS.2015.142 [ CrossRef ] [ Google Scholar ]
  • Englund C., Olofsson A. D., Price L. (2017). Teaching with technology in higher education: understanding conceptual change and development in practice. High. Educ. Res. Dev. 36 73–87. 10.1080/07294360.2016.1171300 [ CrossRef ] [ Google Scholar ]
  • Falagas M. E., Pitsouni E. I., Malietzis G. A., Pappas G. (2008). Comparison of pubmed, scopus, web of science, and Google scholar: strengths and weaknesses. FASEB J. 22 338–342. 10.1096/fj.07-9492LSF [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Feiner S., MacIntyre B., Hollerer T., Webster A. (1997). “A touring machine: prototyping 3D mobile augmented reality systems for exploring the urban environment,” in Digest of Papers. First International Symposium on Wearable Computers , (Cambridge, MA: IEEE; ), 74–81. 10.1109/ISWC.1997.629922 [ CrossRef ] [ Google Scholar ]
  • Freeman D., Reeve S., Robinson A., Ehlers A., Clark D., Spanlang B., et al. (2017). Virtual reality in the assessment, understanding, and treatment of mental health disorders. Psychol. Med. 47 2393–2400. 10.1017/S003329171700040X [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Freeman L. C. (1977). A set of measures of centrality based on betweenness. Sociometry 40 35–41. 10.2307/3033543 [ CrossRef ] [ Google Scholar ]
  • Fuchs H., Bishop G. (1992). Research Directions in Virtual Environments. Chapel Hill, NC: University of North Carolina at Chapel Hill. [ Google Scholar ]
  • Gallagher A. G., Ritter E. M., Champion H., Higgins G., Fried M. P., Moses G., et al. (2005). Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann. Surg. 241 : 364 . 10.1097/01.sla.0000151982.85062.80 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gigante M. A. (1993). Virtual reality: definitions, history and applications. Virtual Real. Syst. 3–14. 10.1016/B978-0-12-227748-1.50009-3 [ CrossRef ] [ Google Scholar ]
  • González-Teruel A., González-Alcaide G., Barrios M., Abad-García M. F. (2015). Mapping recent information behavior research: an analysis of co-authorship and co-citation networks. Scientometrics 103 687–705. 10.1007/s11192-015-1548-z [ CrossRef ] [ Google Scholar ]
  • Heeter C. (1992). Being there: the subjective experience of presence. Presence 1 262–271. 10.1162/pres.1992.1.2.262 [ CrossRef ] [ Google Scholar ]
  • Heeter C. (2000). Interactivity in the context of designed experiences. J. Interact. Adv. 1 3–14. 10.1080/15252019.2000.10722040 [ CrossRef ] [ Google Scholar ]
  • Heilig M. (1962). Sensorama simulator . U.S. Patent No - 3, 870. Virginia: United States Patent and Trade Office. [ Google Scholar ]
  • Ibáñez M. B., Di Serio Á., Villarán D., Kloos C. D. (2014). Experimenting with electromagnetism using augmented reality: impact on flow student experience and educational effectiveness. Comput. Educ. 71 1–13. 10.1016/j.compedu.2013.09.004 [ CrossRef ] [ Google Scholar ]
  • Juan M. C., Alcañiz M., Calatrava J., Zaragozá I., Baños R., Botella C. (2007). “An optical see-through augmented reality system for the treatment of phobia to small animals,” in Virtual Reality, HCII 2007 Lecture Notes in Computer Science Vol. 4563 ed. Schumaker R. (Berlin: Springer; ), 651–659. [ Google Scholar ]
  • Juan M. C., Alcaniz M., Monserrat C., Botella C., Baños R. M., Guerrero B. (2005). Using augmented reality to treat phobias. IEEE Comput. Graph. Appl. 25 31–37. 10.1109/MCG.2005.143 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kim G. J. (2005). A SWOT analysis of the field of virtual reality rehabilitation and therapy. Presence 14 119–146. 10.1162/1054746053967094 [ CrossRef ] [ Google Scholar ]
  • Klavans R., Boyack K. W. (2015). Which type of citation analysis generates the most accurate taxonomy of scientific and technical knowledge? J. Assoc. Inform. Sci. Technol. 68 984–998. 10.1002/asi.23734 [ CrossRef ] [ Google Scholar ]
  • Kleinberg J. (2002). “Bursty and hierarchical structure in streams,” in Paper Presented at the Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2002; Edmonton , Alberta, NT: 10.1145/775047.775061 [ CrossRef ] [ Google Scholar ]
  • Kleinberg J. (2003). Bursty and hierarchical structure in streams. Data Min. Knowl. Discov. 7 373–397. 10.1023/A:1024940629314 [ CrossRef ] [ Google Scholar ]
  • Korolov M. (2014). The real risks of virtual reality. Risk Manag. 61 20–24. [ Google Scholar ]
  • Krueger M. W., Gionfriddo T., Hinrichsen K. (1985). “Videoplace—an artificial reality,” in Proceedings of the ACM SIGCHI Bulletin Vol. 16 New York, NY: ACM, 35–40. 10.1145/317456.317463 [ CrossRef ] [ Google Scholar ]
  • Lin C. H., Hsu P. H. (2017). “Integrating procedural modelling process and immersive VR environment for architectural design education,” in MATEC Web of Conferences Vol. 104 Les Ulis: EDP Sciences; 10.1051/matecconf/201710403007 [ CrossRef ] [ Google Scholar ]
  • Llorens R., Noé E., Ferri J., Alcañiz M. (2014). Virtual reality-based telerehabilitation program for balance recovery. A pilot study in hemiparetic individuals with acquired brain injury. Brain Inj. 28 : 169 . [ Google Scholar ]
  • Lombard M., Ditton T. (1997). At the heart of it all: the concept of presence. J. Comput. Mediat. Commun. 3 10.1111/j.1083-6101.1997.tb00072.x [ CrossRef ] [ Google Scholar ]
  • Loomis J. M., Blascovich J. J., Beall A. C. (1999). Immersive virtual environment technology as a basic research tool in psychology. Behav. Res. Methods Instr. Comput. 31 557–564. 10.3758/BF03200735 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Loomis J. M., Golledge R. G., Klatzky R. L. (1998). Navigation system for the blind: auditory display modes and guidance. Presence 7 193–203. 10.1162/105474698565677 [ CrossRef ] [ Google Scholar ]
  • Luckerson V. (2014). Facebook Buying Oculus Virtual-Reality Company for $2 Billion. Available at: [ Google Scholar ]
  • Maurugeon G. (2011). New D’Fusion Supports iPhone4S and 3DSMax 2012. Available at: [ Google Scholar ]
  • Mazuryk T., Gervautz M. (1996). Virtual Reality-History, Applications, Technology and Future. Vienna: Institute of Computer Graphics Vienna University of Technology. [ Google Scholar ]
  • Meldrum D., Glennon A., Herdman S., Murray D., McConn-Walsh R. (2012). Virtual reality rehabilitation of balance: assessment of the usability of the nintendo Wii ® fit plus. Disabil. Rehabil. 7 205–210. 10.3109/17483107.2011.616922 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Milgram P., Kishino F. (1994). A taxonomy of mixed reality visual displays. IEICE Trans. Inform. Syst. 77 1321–1329. [ Google Scholar ]
  • Minderer M., Harvey C. D., Donato F., Moser E. I. (2016). Neuroscience: virtual reality explored. Nature 533 324–325. 10.1038/nature17899 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Neri S. G., Cardoso J. R., Cruz L., Lima R. M., de Oliveira R. J., Iversen M. D., et al. (2017). Do virtual reality games improve mobility skills and balance measurements in community-dwelling older adults? Systematic review and meta-analysis. Clin. Rehabil. 31 1292–1304. 10.1177/0269215517694677 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Nincarean D., Alia M. B., Halim N. D. A., Rahman M. H. A. (2013). Mobile augmented reality: the potential for education. Procedia Soc. Behav. Sci. 103 657–664. 10.1016/j.sbspro.2013.10.385 [ CrossRef ] [ Google Scholar ]
  • Orosz K., Farkas I. J., Pollner P. (2016). Quantifying the changing role of past publications. Scientometrics 108 829–853. 10.1007/s11192-016-1971-9 [ CrossRef ] [ Google Scholar ]
  • Ozbek C. S., Giesler B., Dillmann R. (2004). “Jedi training: playful evaluation of head-mounted augmented reality display systems,” in Proceedings of SPIE. The International Society for Optical Engineering Vol. 5291 eds Norwood R. A., Eich M., Kuzyk M. G. (Denver, CO: ), 454–463. [ Google Scholar ]
  • Perry S. (2008). Wikitude: Android App with Augmented Reality: Mind Blow-Ing. Digital Lifestyles. [ Google Scholar ]
  • Radu I. (2012). “Why should my students use AR? A comparative review of the educational impacts of augmented-reality,” in Mixed and Augmented Reality (ISMAR), 2012 IEEE International Symposium on , (IEEE) , 313–314. 10.1109/ISMAR.2012.6402590 [ CrossRef ] [ Google Scholar ]
  • Radu I. (2014). Augmented reality in education: a meta-review and cross-media analysis. Pers. Ubiquitous Comput. 18 1533–1543. 10.1007/s00779-013-0747-y [ CrossRef ] [ Google Scholar ]
  • Riva G. (2018). The neuroscience of body memory: From the self through the space to the others. Cortex 104 241–260. 10.1016/j.cortex.2017.07.013 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Riva G., Gaggioli A., Grassi A., Raspelli S., Cipresso P., Pallavicini F., et al. (2011). NeuroVR 2-A free virtual reality platform for the assessment and treatment in behavioral health care. Stud. Health Technol. Inform. 163 493–495. [ PubMed ] [ Google Scholar ]
  • Riva G., Serino S., Di Lernia D., Pavone E. F., Dakanalis A. (2017). Embodied medicine: mens sana in corpore virtuale sano. Front. Hum. Neurosci. 11 : 120 . 10.3389/fnhum.2017.00120 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Riva G., Wiederhold B. K., Mantovani F. (2018). Neuroscience of virtual reality: from virtual exposure to embodied medicine. Cyberpsychol. Behav. Soc. Netw. 10.1089/cyber.2017.29099.gri [Epub ahead of print]. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Rosenberg L. (1993). “The use of virtual fixtures to enhance telemanipulation with time delay,” in Proceedings of the ASME Winter Anual Meeting on Advances in Robotics, Mechatronics, and Haptic Interfaces Vol. 49 (New Orleans, LA: ), 29–36. [ Google Scholar ]
  • Schmidt M., Beck D., Glaser N., Schmidt C. (2017). “A prototype immersive, multi-user 3D virtual learning environment for individuals with autism to learn social and life skills: a virtuoso DBR update,” in International Conference on Immersive Learning , Cham: Springer, 185–188. 10.1007/978-3-319-60633-0_15 [ CrossRef ] [ Google Scholar ]
  • Schwald B., De Laval B. (2003). An augmented reality system for training and assistance to maintenance in the industrial context. J. WSCG 11 . [ Google Scholar ]
  • Serino S., Cipresso P., Morganti F., Riva G. (2014). The role of egocentric and allocentric abilities in Alzheimer’s disease: a systematic review. Ageing Res. Rev. 16 32–44. 10.1016/j.arr.2014.04.004 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Skalski P., Tamborini R. (2007). The role of social presence in interactive agent-based persuasion. Media Psychol. 10 385–413. 10.1080/15213260701533102 [ CrossRef ] [ Google Scholar ]
  • Slater M. (2009). Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos. Trans. R. Soc. Lond. B Biol. Sci. 364 3549–3557. 10.1098/rstb.2009.0138 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Slater M., Sanchez-Vives M. V. (2016). Enhancing our lives with immersive virtual reality. Front. Robot. AI 3 : 74 10.3389/frobt.2016.00074 [ CrossRef ] [ Google Scholar ]
  • Small H. (1973). Co-citation in the scientific literature: a new measure of the relationship between two documents. J. Assoc. Inform. Sci. Technol. 24 265–269. 10.1002/asi.4630240406 [ CrossRef ] [ Google Scholar ]
  • Song H., Chen F., Peng Q., Zhang J., Gu P. (2017). Improvement of user experience using virtual reality in open-architecture product design. Proc. Inst. Mech. Eng. B J. Eng. Manufact. 232 . [ Google Scholar ]
  • Sundar S. S., Xu Q., Bellur S. (2010). “Designing interactivity in media interfaces: a communications perspective,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems , (Boston, MA: ACM; ), 2247–2256. 10.1145/1753326.1753666 [ CrossRef ] [ Google Scholar ]
  • Sutherland I. E. (1965). The Ultimate Display. Multimedia: From Wagner to Virtual Reality. New York, NY: Norton. [ Google Scholar ]
  • Sutherland I. E. (1968). “A head-mounted three dimensional display,” in Proceedings of the December 9-11, 1968, Fall Joint Computer Conference, Part I , (ACM) , 757–764. 10.1145/1476589.1476686 [ CrossRef ] [ Google Scholar ]
  • Thomas B., Close B., Donoghue J., Squires J., De Bondi P., Morris M., et al. (2000). “ARQuake: an outdoor/indoor augmented reality first person application,” in Digest of Papers. Fourth International Symposium on Wearable Computers , (Atlanta, GA: IEEE; ), 139–146. 10.1109/ISWC.2000.888480 [ CrossRef ] [ Google Scholar ]
  • Ware C., Arthur K., Booth K. S. (1993). “Fish tank virtual reality,” in Proceedings of the INTERACT’93 and CHI’93 Conference on Human Factors in Computing Systems , (Amsterdam: ACM; ), 37–42. 10.1145/169059.169066 [ CrossRef ] [ Google Scholar ]
  • Wexelblat A. (ed.) (2014). Virtual Reality: Applications and Explorations. Cambridge, MA: Academic Press. [ Google Scholar ]
  • White H. D., Griffith B. C. (1981). Author cocitation: a literature measure of intellectual structure. J. Assoc. Inform. Sci. Technol. 32 163–171. 10.1002/asi.4630320302 [ CrossRef ] [ Google Scholar ]
  • Wrzesien M., Alcañiz M., Botella C., Burkhardt J. M., Bretón-López J., Ortega M., et al. (2013). The therapeutic lamp: treating small-animal phobias. IEEE Comput. Graph. Appl. 33 80–86. 10.1109/MCG.2013.12 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wrzesien M., Burkhardt J. M., Alcañiz M., Botella C. (2011a). How technology influences the therapeutic process: a comparative field evaluation of augmented reality and in vivo exposure therapy for phobia of small animals. Hum. Comput. Interact. 2011 523–540. [ Google Scholar ]
  • Wrzesien M., Burkhardt J. M., Alcañiz Raya M., Botella C. (2011b). “Mixing psychology and HCI in evaluation of augmented reality mental health technology,” in CHI’11 Extended Abstracts on Human Factors in Computing Systems , (Vancouver, BC: ACM; ), 2119–2124. [ Google Scholar ]
  • Zyda M. (2005). From visual simulation to virtual reality to games. Computer 38 25–32. 10.1109/MC.2005.297 [ CrossRef ] [ Google Scholar ]

What is Virtual Reality?

Virtual reality (VR) is the experience where users feel immersed in a simulated world, via hardware—e.g., headsets—and software. Designers create VR experiences—e.g., virtual museums—transporting users to 3D environments where they freely move and interact to perform predetermined tasks and attain goals—e.g., learning.

To create great VR experiences, it’s vital to design with a first-person perspective in mind.

VR—Entering New Worlds Through Equipment

In VR design, your goal is for users to experience an alternative existence through whichever senses your design can access. The more your design reaches users through sight, hearing and touch in particular, the more immersed they will be in virtual reality. You therefore want to isolate users as far as possible from the real world.

VR’s history began with the View-Master (a stereoscopic visual simulator) in 1939 and Morton Heilig’s 1950s Sensorama multi-experience theater. The development of the first head-mounted display (HMD) followed in 1968. Designers then focused on professionally geared applications in the 1970s and 1980s; with more sophisticated technology, they could tailor computerized VR experiences to the fields of military training, medicine and flight simulation. In the early 1990s, shortly after the term “virtual reality” entered popular use, VR reached the wider consumer world through video games. It has since become progressively more affordable and sophisticated.

Virtual Reality vs Augmented Reality vs Mixed Reality


In virtual reality, you isolate the user from the real world and create presence in a virtual environment.

VR differs from augmented reality, where users remain anchored in the real world but experience computerized overlays. AR and VR—along with mixed reality (MR), where users interact with digital elements anchored to the real world—come under the umbrella term extended reality (XR). In AR, users employ devices (e.g., smartphones) to view parts of the real world (e.g., a room) overlaid with computer-generated input. Designers insert a range of digital elements, such as graphics and GPS overlays, which adjust to changes in the user’s environment (e.g., movement) in real time. In MR, users have a more sophisticated experience where digital content interplays with real-world content—e.g., surgeons operating on patients via projected ultrasound images. In VR, users’ real-world movements translate fully to preprogrammed environments, letting them play along with convincing VR illusions. So, in VR design you offer users near-total escapism.

“Virtual Reality is really a new communication platform. By feeling truly present, you can share unbounded spaces and experiences with the people in your life. Imagine sharing not just moments with your friends online, but entire experiences and adventures.” — Mark Zuckerberg, Facebook’s CEO

VR—Designing to Dupe the Senses

In VR, you have three “genres” to reach users:

  • Hyper-immersive or emotion-based designs (which can involve scents).
  • Live-action-style POV (first-person point-of-view) documentaries (e.g., exploring virtual rainforests).
  • Games and gamified experiences.


To design VR experiences, you must understand human physiology and psychology—users’ needs, limitations, etc.—and what makes VR experiences enjoyable versus unpleasant.

You should focus on:

  • Believability—Incorporate features (principally images and sound) to envelop users entirely in 3D environments.
  • Interactivity—Make designs intuitive; remove outside-world interference. While you’re presenting brand-new environments, how users interact with these must match what they’re used to doing in the real world (e.g., punches are still punches).
  • Explorability—Ensure users can freely move about and discover the “reality” offered.
  • Immersiveness—By combining the above factors, you achieve the goal: placing users’ presence inside your design.

Throughout the design process, you should consider:

  • Safety and Comfort—Prevent virtual-reality sickness (like motion sickness, but stemming from sensory conflicts triggered by artificial environments). You want to immerse users in a virtually hermetically sealed environment, yet that isolation can disorient them. Users’ bodies differ, and so do the spaces where they experience VR. When they move freely using your design, they can collide with objects, trip or fall. While some devices—e.g., the HTC Vive—warn users about nearby obstacles, don’t rely on that alone; neck strain can also arise from headset use. Additionally:
  • Let users see and use controls/menus .
  • Avoid changes in brightness and speed (don’t accelerate users; avoid flashing lights).
  • Keep frame rates high .
  • Keep peripheral motion minimal —users typically have 180-degree vision.
  • Interaction and Reaction—Design ergonomically for users’ natural movement. Systems’ head-tracking, motion-tracking and (possibly) eye-tracking sensors and hand controllers must respond dynamically—that is, offer instant control that reflects real-world behavior. Users’ arms have a 50–70 cm reach, so place key interactions in this zone.
  • Image and Text Scale—Prevent eye strain and help users orient themselves through depth perception: your visuals keep changing, so make images more detailed as users approach them. Use legible, eye-catching text. Comfortable focusing distances are typically 0.5–20 meters.
  • Sound—Use sound for atmosphere, to give users a sense of place in the environment, and to provide cues.
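As a rough illustration of the distance guidelines above (50–70 cm comfortable reach, 0.5–20 m comfortable focus), a placement check might look like the sketch below. The `placement_report` helper and its thresholds are assumptions drawn from the text, not an established API.

```python
import math

# Guideline thresholds from the text above; exact values vary by platform.
REACH_ZONE_M = (0.5, 0.7)    # comfortable arm reach: 50-70 cm
FOCUS_ZONE_M = (0.5, 20.0)   # comfortable focusing distances

def distance(point):
    """Euclidean distance of a point (x, y, z) from the user at the origin."""
    return math.sqrt(sum(c * c for c in point))

def placement_report(element_pos):
    """Classify a UI element's position against the comfort guidelines."""
    d = distance(element_pos)
    return {
        "within_reach": REACH_ZONE_M[0] <= d <= REACH_ZONE_M[1],
        "comfortable_focus": FOCUS_ZONE_M[0] <= d <= FOCUS_ZONE_M[1],
    }

# A button 0.6 m in front of the user: reachable and comfortably in focus.
print(placement_report((0.0, 0.0, 0.6)))
```

A design tool could run a check like this over every interactive element in a scene and flag those that fall outside both zones.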

As VR advances further into the mainstream, users will increasingly expect to be teleported into exciting new experiences. The less they sense your interface, the more immersed they become.

Learn More about Virtual Reality

Learn how to design your own VR experiences with our course: How to Design for Augmented and Virtual Reality

An award-winning designer’s insights into VR UX, with tips and tools including frameworks:

Smashing Magazine’s in-depth approach to VR UX design:

A well-stocked resource on VR design, including finer points (e.g., terrain features):


Learn more about Virtual Reality

Take a deep dive into Virtual Reality with our course How to Design for Augmented and Virtual Reality.

Augmented reality (AR) and virtual reality (VR) are quickly becoming huge areas of technology, with giants like Apple, Microsoft and Google competing to provide the next big AR or VR experience. Statista predicts that the worldwide user base for AR and VR will reach 443 million by 2025, meaning that it is becoming increasingly important for UX designers to know how to create amazing VR and AR experiences. Designing for 3D experiences will require completely new ways of thinking about UX design—and the question is, are you well equipped to tackle this new field of design?

The good news is that while AR and VR hardware and software are changing dramatically, UX principles and techniques for 3D interaction design will remain consistent. It’s just that new opportunities and sensitivities will present themselves to designers and developers. This course will give you the 3D UX skills to remain relevant in the next decade and beyond. You’ll be able to create immersive experiences that tap into the novel opportunities that AR and VR generate. For example, you will need to bring together key UX concepts such as emotional design, social UX, and gamification in order to create an immersive AR or VR creation.

AR and VR need to be easy to use in order to provide users with experiences that wow. Avoiding common usability mistakes and applying the principles of storytelling will help you carefully craft 3D experiences that delight, intrigue, amuse and, most of all, evoke the response you intended. You’ll need to engage users in first-person narratives by making use of spatially dynamic UIs, including gaze, gesture, movement, speech, and sound—often used in combination.

During the course, you will come across many examples and case studies from spatial and holographic interface designers. You will master how to create immersive 3D content for AR and VR that provides rich user experiences. The course offers exercises and challenges throughout, all aimed at helping you and/or your team practice your emerging or existing AR/VR skills. You will be taught by Frank Spillers, who is a distinguished speaker, author, and internationally respected senior usability practitioner with over 15 years of experience in the field.



Virtual reality

by Chris Woodford. Last updated: August 14, 2023.

You'll probably never go to Mars, swim with dolphins, run an Olympic 100 meters, or sing onstage with the Rolling Stones. But if virtual reality ever lives up to its promise, you might be able to do all these things—and many more—without even leaving your home. Unlike real reality (the actual world in which we live), virtual reality means simulating bits of our world (or completely imaginary worlds) using high-performance computers and sensory equipment, like headsets and gloves. Apart from games and entertainment, it's long been used for training airline pilots and surgeons and for helping scientists to figure out complex problems such as the structure of protein molecules. How does it work? Let's take a closer look!

Photo: Virtual pilot. This US Air Force student is learning to fly a giant C-17 Globemaster plane using a virtual reality simulator. Picture by Trenton Jancze courtesy of US Air Force.

A believable, interactive 3D computer-created world that you can explore so you feel you really are there, both mentally and physically.


Photo: The view from inside. A typical HMD has two tiny screens that show different pictures to each of your eyes, so your brain produces a combined 3D (stereoscopic) image. Picture courtesy of US Air Force.

Photos: EXOS datagloves produced by NASA in the 1990s had very intricate external sensors to detect finger movements with high precision. Picture courtesy of NASA Ames Research Center and Internet Archive .

Photo: This more elaborate EXOS glove had separate sensors on each finger segment, wired via a single ribbon cable to the main VR computer. Picture by Wade Sisler courtesy of NASA Ames Research Center.

Artwork: How a fiber-optic dataglove works. Each finger has a fiber-optic cable stretched along its length. (1) At one end of the finger, a light-emitting diode (LED) shines light into the cable. (2) Light rays shoot down the cable, bouncing off the sides. (3) There are tiny abrasions in the top of each fiber through which some of the rays escape. The more you flex your fingers, the more light escapes. (4) The amount of light arriving at a photocell at the end gives a rough indication of how much you're flexing your finger. (5) A cable carries this signal off to the VR computer. This is a simplified version of the kind of dataglove VPL patented in 1992, and you'll find the idea described in much more detail in US Patent 5,097,252 .
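The flex-sensing principle in the artwork above can be sketched in a few lines of Python. The function name and calibration numbers here are hypothetical: a real glove driver (like VPL's) would calibrate per finger and per user.

```python
# Sketch of how a fiber-optic glove's photocell reading might be turned
# into a flex estimate. All names and numbers are illustrative.

def flex_fraction(reading, straight_reading, bent_reading):
    """Map a raw photocell value to 0.0 (straight) .. 1.0 (fully bent).

    Bending the finger lets more light escape through the abrasions, so
    less light reaches the photocell: the reading falls as flex increases.
    """
    span = straight_reading - bent_reading
    if span <= 0:
        raise ValueError("calibration must satisfy straight > fully bent")
    fraction = (straight_reading - reading) / span
    return min(1.0, max(0.0, fraction))  # clamp against sensor noise

# Calibrated at 980 (finger straight) and 220 (fully bent), a raw
# reading of 600 corresponds to a half-bent finger.
print(flex_fraction(600, 980, 220))  # 0.5
```

The clamp matters in practice: sensor noise can push readings slightly outside the calibrated range, and downstream code expects a bounded flex value.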

Photo: A typical handheld virtual reality controller (complete with elastic bands), looking not so different from a video game controller. Photo courtesy of NASA Ames Research Center.

If you liked this article...

Find out more, on this website.

  • 3D-television
  • Augmented reality
  • Computer graphics

News and popular science

  • Apple Is Stepping Into the Metaverse. Will Anyone Care? by Kellen Browning and Mike Isaac. The New York Times, June 2, 2023. Can Apple succeed with the Metaverse where Facebook has (so far) failed?
  • Everybody Into the Metaverse! Virtual Reality Beckons Big Tech by Cade Metz. The New York Times, December 30, 2021. The Times welcomes the latest push to an ambitious new vision of the virtual world.
  • Facebook gives a glimpse of metaverse, its planned virtual reality world by Mike Isaac. The Guardian, October 29, 2021. Facebook rebrands itself "Meta" as it announces ambitious plans to build a virtual metaverse.
  • Military trials training for missions in virtual reality by Zoe Kleinman. BBC News, 1 March 2020. How Oculus Rift and Unreal Engine software are being deployed in military training.
  • What went wrong with virtual reality? by Eleanor Lawrie. BBC News, 10 January 2020. Despite all the hype, VR still isn't a mainstream technology.
  • FedEx Ground Uses Virtual Reality to Train and Retain Package Handlers by Michelle Rafter. IEEE Spectrum, 8 November 2019. How VR could help reduce staff turnover by weeding out unsuitable people before they start work.
  • VR Therapy Makes Arachnophobes Braver Around Real Spiders by Emily Waltz. IEEE Spectrum, 24 January 2019. Can VR cure your fear of spiders?
  • Touching the Virtual: How Microsoft Research is Making Virtual Reality Tangible : Microsoft Blog, 8 March 2018. A fascinating look at Microsoft's research into haptic (touch-based) VR controllers.
  • Want to Know What Virtual Reality Might Become? Look to the Past by Steven Johnson. The New York Times, November 3, 2016. What can the history of 19th-century stereoscopic toys tell us about the likely future of VR?
  • A Virtual Reality Revolution, Coming to a Headset Near You by Lorne Manly. The New York Times, November 19, 2015. Musicians, filmmakers, and games programmers try to second-guess the future of VR.
  • Virtual Reality Pioneer Looks Beyond Entertainment by Jeremy Hsu. IEEE Spectrum, April 30, 2015. Where does Stanford VR guru Jeremy Bailenson see VR going in the future?
  • Whatever happened to ... Virtual Reality? by Science@NASA, June 21, 2004. Why NASA decided to revisit virtual reality 20 years after the technology first drew attention in the 1980s.
  • Virtual Reality: Oxymoron or Pleonasm? by Nicholas Negroponte, Wired, Issue 1.06, December 1993. Early thoughts on virtual worlds from the influential MIT Media Lab pioneer

Scholarly articles

  • The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature by Pietro Cipresso et al, Front Psychol. 2018; 9: 2086.
  • Virtual Reality as a Tool for Scientific Research by Jeremy Swan, NICHD Newsletter, September 2016.
  • Virtual Heritage: Researching and Visualizing the Past in 3D by Donald H. Sanders, Journal of Eastern Mediterranean Archaeology & Heritage Studies, Vol. 2, No. 1 (2014), pp. 30–47.

For older readers

  • Virtual Reality by Samuel Greengard. MIT Press, 2019. A short introduction that explains why VR and AR matter, looks at the different technologies available, considers social issues that they raise, and explores the likely shape of our virtual future.
  • Virtual Reality Technology by Grigore Burdea and Philippe Coiffet. Wiley-IEEE, 2017/2024. Popular VR textbook covering history, programming, and applications.
  • Learning Virtual Reality: Developing Immersive Experiences and Applications for Desktop, Web, and Mobile by Tony Parisi. O'Reilly, 2015. An up-to-date introduction for VR developers that covers everything from the basics of VR to cutting-edge products like the Oculus Rift and Google Cardboard.
  • Developing Virtual Reality Applications by Alan B. Craig, William R. Sherman, and Jeffrey D. Will. Morgan Kaufmann, 2009. More detail of the applications of VR in science, education, medicine, the military, and elsewhere.
  • Virtual Reality by Howard Rheingold. Secker & Warburg, 1991. The classic (though now somewhat dated) introduction to VR.

For younger readers

  • All About Virtual Reality by Jack Challoner. DK, 2017. A 32-page introduction for ages 7–9.

Current research

  • Advanced VR Research Centre, Loughborough University
  • Virtual Reality and Visualization Research: Bauhaus-Universität Weimar
  • Institute of Software Technology and Interactive Systems: Vienna University of Technology
  • Microsoft Research: Human-Computer Interaction
  • MIT Media Lab
  • Virtual Human Interaction Lab (VHIL) at Stanford University
  • WO 1992009963: System for creating a virtual world by Dan D Browning, Ethan D Joffe, Jaron Z Lanier, VPL Research, Inc., published June 11, 1992. Outlines a method of creating and editing a virtual world using a pictorial database.
  • US Patent 5,798,739: Virtual image display device by Michael A. Teitel, VPL Research, Inc., published August 25, 1998. A typical head-mounted display designed for VR systems.
  • US Patent 5,097,252: Motion sensor which produces an asymmetrical signal in response to symmetrical movement by Young L. Harvill et al, VPL Research, Inc., published March 17, 1992. Describes a dataglove that uses fiber-optic sensors to detect finger movements.

Text copyright © Chris Woodford 2007, 2023. All rights reserved. Full copyright notice and terms of use .



Shifting the field of view: science stories in virtual reality

Visual Science Communicator, Garvan Institute of Medical Research and Lab Research Fellow, 3D Visualisation and Aesthetics Lab UNSW Art and Design, UNSW Sydney

Disclosure statement

Kate Patterson works for The Garvan Institute of Medical Research. Previous animations were funded by Garvan Institute and Inspiring Australia Commonwealth Government Grant.

UNSW Sydney provides funding as a member of The Conversation AU.


Since first donning a Virtual Reality (VR) headset only eight months ago, my personal relationship with this technology has progressed at lightning speed, way past the awkward getting-to-know-you phase.


In the broad scheme of things, I probably could be described as an early adopter of the technology and a VR content producer, rather than a mere passive consumer (although, one could argue that ‘passive’ and ‘VR’ shouldn’t be used in the same sentence).

I find being in this position exciting, yet highly unlikely. I'm not a gamer and, sadly, I'm a complete novice when it comes to movies. I think I spent most of my youth barefoot in the dirt rather than immersed in screen-based storytelling. It's ironic, I guess, given that screen-based science storytelling is my adult passion and profession.

‘ COLOSSE ’ was my first VR adventure, a real-time virtual reality storytelling experience, with a stylised, character-focused visual language. I spent a mere two minutes inside the beautiful, simple animated world but it was inexplicably transformative.

I won't even try to explain how amazing it is inside a well-designed virtual story - there are no words that can describe the power of the immersive experience. VR is something you have to actually try in order to understand. Despite the simplified visual language and restricted colour palette, COLOSSE manages to physically and cognitively transport the viewer into a world that, thanks to VR technology, is unbelievably believable.


VR technology has come a long way since the motion-sickness-inducing early days. Gaming enthusiasts, developers and film studios have embraced the recent release of the first consumer-ready VR headgear, and the buzz has been huge. VR offers a truly immersive experience and promises to transform how we play, interact and learn, but exactly how this will evolve is uncertain.

I am particularly interested in how VR can be utilised for education and awareness in medicine and science, beyond its superficial appeal as a mere gimmick. (Secretly, I believe science and medicine could lean on gimmicks more, as ice breakers and communication platforms… but that's a story for another time.)


In previous work, I created animated biomedical stories about cancer and epigenetics , the focus of research carried out at the Garvan Institute of Medical Research. The animations show how biological molecules behave inside our cells, and how things can go wrong in disease.

As the biomedical scientist-artist in this context, I present a visual ‘review’ of a science story. In my head I can see the scenes in three dimensions, then the animation software and editing process allows me to convert this for two dimensional viewing on a screen - providing a ‘window’ into the molecular world. One of the most common things people ask is how they can get beyond the field of view. People don’t just want to look through the window, they want to jump through the window.

With VR, I can make that happen. The question is, do I want to? What would this achieve?

Could VR facilitate embodied cognition , linking our thoughts to our physical experience? Pedagogical approaches that tie experience to thinking have been shown to enhance learning in a number of contexts, dating back to the experiential learning theories of John Dewey (1938). Students of anatomy can now progress beyond looking at an illustration in a textbook and interact with anatomical structures in 3D .


VR in education has also featured in the TED and TEDx speaker series over the past 12 months. Michael Bodekaer, founder of Labster, which teaches life sciences through gamified education, talks about immersive 3D virtual worlds and laboratories . Alex Faaborg, of Google Cardboard, talks about how VR is providing incredible opportunities for the future of art, journalism, and education .

Working at the frontiers of interactive technology, Chris Milk stretches virtual reality into a new canvas for storytelling . You will notice that this last example does not have a strong focus on education, but I can’t help but imagine how science storytelling will be transformed by VR and then how those stories will influence education.

My challenge is to tell science stories about epigenetics, a topic that is complex, dynamic and extremely abstract to most people. Epigenetic structures and events are essentially invisible, smaller than the wavelength of light. Yet, epigenetic mechanisms are incredibly important. They underpin our health, tie us to our ancestors and commonly go haywire in disease.

Communicating this in a visually engaging, informative way can be hard, but biomedical animation has been able to bridge the gap for many through scientifically accurate, beautiful animations that can be awe-inspiring. But imagine being able to go past the field of view, through the window, and be physically inside the cell.

Some early projects that have explored this include Molecule VR , one of many educational tools developed by Unimersiv that aims to support and integrate classic teaching and learning methods.

This 360-degree video demonstration from Nucleus Medical Media looks amazing even on a mobile device. Scientific communication studio Random 42 have also produced an incredible VR experience that uses a signal transduction pathway to transport the viewer into the cell (starting at approximately 45s).

This is surely just the start of science visualisation in VR, but will immersive experiences actually translate to better understanding? What science stories are the most suitable for telling in VR? How do we overcome the technical limitations that still exist?

As a practitioner and visual communication researcher, while I love viewing science content in VR, it is the research questions and challenges that I find the most exciting.



Virtual Reality – Introduction

Imagination is to technology as fuel is to fire: imagination and purpose together drive technology, and it is thanks to them that technology today is evolving at an exponential rate. Virtual reality places the viewer inside a moment or a place, using visual and sound technology to maneuver the brain into believing it is somewhere else. It is an experience of a world that does not exist. Sounds cool, right? Virtual reality tricks the mind using computers that let one experience, and more interestingly interact with, a 3D world. This is made possible by putting on a head-mounted display that feeds tracking input back to the computer. The display is split between the eyes, creating a stereoscopic 3D effect, and paired with stereo sound for a rich sensory experience. The technology presents images of objects taken at slightly different angles, which creates an impression of depth and solidity, and the LCD or OLED panels inside are viewed through lenses that spread the image across the whole field of vision. Together with input tracking, this creates an immersive, believable world generated by the computer.

What we know today as VR has existed in some form for decades, going back to the 360° paintings that once took the world by surprise with their virtual quality. VR creates a world that responds to your actions, giving you a first-hand experience of an event and its after-effects, along with the ability to interact and interrelate with the world created. The technology also holds vast potential for insights into the workings of the human brain: according to researchers and medical specialists, VR can help diagnose and treat conditions ranging from social anxiety to chronic pain, though using VR to tweak the brain is still at a budding stage.
While most people are engrossed in VR's advances in gaming, many are unaware of its achievements in the health sector. VR has been used successfully to treat post-traumatic stress disorder since the 1990s, and newer programs address a much broader range of conditions. VR content exposes patients to a virtual, safe, and controlled environment where they can explore and eventually learn that the threats they worry about can be tackled patiently with time, thought, and analysis. VR displays come in various forms, ranging from headsets that contain their own display, splitting the feed between the eyes and receiving it from a console over a cable, to more affordable viewers that rely on the VR mode and applications of a smartphone. The HTC Vive, the Oculus Rift, and Sony PlayStation VR are a few of the head mounts that use the first setup, while one can build one's own virtual reality box at home around a smartphone compatible with VR mode. Whatever the use, virtual reality produces data that can be used to develop models, communication, training methods, and interaction. In simple words, the possibilities are endless.
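The stereoscopic split-screen idea described above can be sketched in a few lines: the same scene is rendered from two cameras separated horizontally by the interpupillary distance (IPD), one image per eye. The function names and the 63 mm default IPD are illustrative, not any particular headset's API.

```python
# Minimal sketch of per-eye camera placement and the resulting depth cue.

def eye_positions(head_position, ipd_m=0.063):
    """Return (left_eye, right_eye) camera positions as (x, y, z) tuples."""
    x, y, z = head_position
    half = ipd_m / 2.0
    return (x - half, y, z), (x + half, y, z)

def horizontal_disparity(depth_m, ipd_m=0.063, focal_m=0.04):
    """Horizontal offset between the two eyes' images of a point at the
    given depth (simple pinhole model). Nearer points produce a larger
    disparity, which is what the brain reads as depth."""
    return ipd_m * focal_m / depth_m

left, right = eye_positions((0.0, 1.7, 0.0))
print(left, right)
print(horizontal_disparity(2.0) > horizontal_disparity(10.0))  # True
```

A real headset's runtime adds per-eye projection matrices and lens-distortion correction on top of this, but the underlying trick is just two slightly offset views of one scene.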

Types of Virtual Reality (VR)

On the basis of the most important feature of VR, immersion, and the types of systems and interfaces used, VR systems can be classified into three types:

1. Immersive
2. Semi-immersive
3. Non-immersive

1. Immersive VR system

An immersive VR system provides the highest level of immersion and comes closest to the feeling of actually being in the virtual world. It is more expensive than the other types, and the advanced tools and gadgets it uses are not in common use.

2. Semi – immersive VR system

A semi-immersive VR system also provides a high level of immersion, but the tools and gadgets used are less advanced and less costly. They are devices familiar to us, often used together with physical models.

3. Non-immersive VR system

A non-immersive VR system is the least immersive and least expensive type. It is also known as a desktop VR system because the gadgets used are limited to glasses and display monitors, the least expensive components.

What are the basic components for VR systems?

1. Input devices
2. Output devices
3. Software

1. Input Devices

Input devices in VR are the tools users employ to interact with the virtual world. Using input devices, the user communicates with the computer.

Example: 3D mouse.

2. Output devices

Output devices are used to present the virtual world and its effects to the user. They generate the feeling of immersion.

Example: LCD shutter glasses.

3. Software

Software plays a key role in VR. It handles the input and output devices, analyses the data, and generates feedback. Software controls and synchronizes the whole environment.
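The software role described above boils down to a loop: each frame, read the input devices, update the virtual world, and drive the output devices. A minimal sketch, where the device classes are hypothetical stand-ins rather than any real VR API:

```python
# Toy frame loop showing how VR software synchronizes input and output.

class Tracker:
    """Stand-in for an input device such as a head tracker or 3D mouse."""
    def __init__(self):
        self.ticks = 0
    def poll(self):
        self.ticks += 1
        return {"head_yaw_deg": self.ticks * 2.0}  # pretend the head turns

class Display:
    """Stand-in for an output device such as a head-mounted display."""
    def __init__(self):
        self.frames = []
    def show(self, frame):
        self.frames.append(frame)

def run_frames(n, tracker, display):
    """The core loop the software runs: input -> update -> output."""
    world_yaw = 0.0
    for _ in range(n):
        sample = tracker.poll()             # 1. read input devices
        world_yaw = sample["head_yaw_deg"]  # 2. update the virtual world
        display.show(f"view @ {world_yaw:.1f} deg")  # 3. drive output devices
    return world_yaw

tracker, hmd = Tracker(), Display()
final_yaw = run_frames(3, tracker, hmd)
print(final_yaw, hmd.frames[-1])  # 6.0 view @ 6.0 deg
```

Real VR software runs this loop at 90 Hz or faster, because any lag between the input step and the output step is what produces motion sickness.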


Systematic review article: A review of the application of virtual reality technology in higher education, based on Web of Science literature data as an example.


  • Department of Educational Technology, Faculty of Education, Fujian Normal University, Fuzhou, China

In recent years, with the rapid development of information technology, the visualization and interaction capabilities of virtual reality technology have advanced, making the application of VR technology in education increasingly attractive to scholars. This paper adopts the literature analysis method, focusing on the application of VR technology in the field of higher education. It selects 80 empirical studies from the Web of Science literature database, reads and analyzes the papers in depth, and summarizes the experience of applying VR technology in higher education, in order to deepen its application. The results show that the research subjects of VR applications in higher education are mainly undergraduates; the main majors of application are science, engineering, and medicine-related majors, while applications in the humanities and social sciences are relatively rare. At present, the devices used for VR in higher education are mainly computers and headsets, which are not portable enough, and students lack guidance and training in the use of VR equipment before class. Compared with traditional education, most of the studies show that applying VR to higher education and teaching has positive effects, mainly by influencing students' behaviors and thereby their learning results. Researchers mainly use traditional evaluation methods to assess teaching effects, collect data through questionnaires and tests, and rely chiefly on difference analysis and descriptive analysis. Based on these results, some suggestions are put forward at the end of the paper.


VR is also called an artificial environment. The definition of modern VR technology is the use of complex technology to form synthetic stimulation that replaces real-world sensory information: users enter a virtual scene and use special helmets, data gloves, or input devices such as keyboards and mice to interact with the virtual environment in real time. VR can make users feel as if they are in a real environment ( Shin, 2018 ). The core of VR originated in the 1960s. In the early 1990s, the Interactive Systems Project Working Group, funded by the National Science Foundation of the United States, made a systematic study of VR. But due to the underdeveloped technical equipment of the time and the high cost of VR-related equipment, VR technology was not developed for a long time. Later, with the rapid development of technology and the emergence of affordable VR headsets and other devices for games and entertainment, VR enjoyed a second spring. At present, a new generation of information technology represented by VR has been widely used in the field of education. VR can provide students with teaching aids that are closer to real life and with rich, diverse, personalized learning environments, changing the boring traditional classroom. Students are also given opportunities for active exploration and interactive communication that promote active learning. Additionally, VR has been described as a learning aid for the 21st century ( Rogers, 2019 ), with one study showing that students retain more information and are better able to apply what they learn after engaging in VR exercises ( Krokos et al., 2019 ). Given VR's potential to enhance learning, it is understandable why researchers, organizations, and educators are now paying close attention to the technology.

Although the application of VR technology in education is not new, the recent development of VR's visualization and interaction has made its educational applications increasingly attractive to scholars, especially in higher education. The core characteristics of VR are immersion, interactivity, and imagination, and these three characteristics give VR a huge advantage in the field of higher education ( Ryan, 2015 ). First, VR is immersive: the technology can build a realistic virtual environment in which students are immersed. Second, VR is interactive: in a VR environment, when students perform operations the environment gives them corresponding feedback, and this interaction can deepen students' impression of the classroom and help them master knowledge more efficiently. Third, VR engages the imagination: students can deeply understand related issues through their own senses, cognitive methods, and cognitive abilities in the simulated VR environment, which can expand students' innovative thinking and effectively enhance their creativity and imagination.

Due to the increasing attention of the academic community to VR technology, there have been some comprehensive overviews and systematic reviews of VR educational applications, but much of this scholarship has the following problems. First, many scholars have lumped together virtual reality, augmented reality, and mixed reality when studying their application in education ( Alalwan et al., 2020 ; Duarte et al., 2020 ). Second, most research applying virtual reality technology to higher education starts from a theoretical perspective, studying the possibilities of the technology in particular disciplines, and there are few empirical studies ( Moore, 1995 ; Hoffman and Vu, 1997 ). Finally, most current reviews are aimed at a single discipline or a single course of higher education, and few researchers have comprehensively analyzed the overall application of VR technology in higher education.

This study focuses on the application of VR technology in the field of higher education. Through the analysis of the Web of Science database literature, it comprehensively analyzes the application and existing problems of VR technology in higher education, and summarizes the application of VR technology in higher education. Finally, some suggestions are put forward to deepen the application of VR in higher education. This research mainly focuses on the following questions:

Q1: Which majors in the field of higher education are VR mainly applied to, and what are the trends or characteristics?

Q2: What are the main VR devices currently used in higher education?

Q3: Compared with traditional education, what is the teaching effect of VR in higher education and what aspects mainly affect the teaching effect? What methods do scholars mainly use to evaluate the teaching effect of VR in higher education?

Q4: What research methods are used to study the application of VR in higher education, and what data collection methods and data analysis methods are mainly used by researchers?

Research methods

This paper's research method is the literature analysis method, in which the collected literature on a specific subject is studied to find out the nature or situation of the research object, and the researcher's conclusions are drawn from it. It is the most commonly used method in literature reviews. The main steps of this research are: first, determine the topic of the application of virtual reality technology in higher education; second, collect and screen relevant literature according to this study's inclusion rules; third, carry out historical analysis of the literature and analysis of the research context, research results, and research methods; finally, write the review.

Data sources and screening methods

This study reviewed the literature on the application of VR technology in higher education over the past 10 years (2012–2021). The main database used is Web of Science, with IEEE and Google Scholar as supplementary databases; Web of Science was chosen as the main database because of its strong interdisciplinary coverage, which is consistent with the theme of this study. Two groups of keywords were used in the literature retrieval: keywords related to virtual reality technology (virtual reality, virtual teaching environment, virtual classroom) and a keyword related to higher education (higher education). The keywords of the two groups were combined as search strings. The time span was set from January 1, 2012 to December 31, 2021, and 2,590 related papers were retrieved. The following criteria were used for inclusion and exclusion: (1) only empirical research on the application of VR in higher education was included, excluding results such as review papers and conference abstracts; (2) the technology used in the research had to be VR, excluding literature on augmented reality and mixed reality; (3) the research subjects had to belong to higher education, meaning junior college students, undergraduates, master's students, and doctoral students. According to these principles, a total of five researchers participated in the literature screening. After three rounds of screening, the researchers discussed the remaining uncertain papers together and finally settled on 80 relevant papers. The literature screening flowchart is shown in Figure 1 .

Figure 1 . Literature inclusion process.
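The inclusion and exclusion criteria above amount to a simple filter over bibliographic records. A minimal sketch follows; the field names and the sample records are hypothetical, not the study's actual dataset.

```python
# Toy implementation of the three screening rules as a record filter.

HIGHER_ED_POPULATIONS = {"junior college", "undergraduate", "master", "doctoral"}

def include(record):
    """Apply the three inclusion rules to one bibliographic record."""
    return (record["study_type"] == "empirical"            # rule (1)
            and record["technology"] == "virtual reality"  # rule (2): excludes AR/MR
            and record["population"] in HIGHER_ED_POPULATIONS)  # rule (3)

records = [
    {"study_type": "empirical", "technology": "virtual reality", "population": "undergraduate"},
    {"study_type": "review",    "technology": "virtual reality", "population": "undergraduate"},
    {"study_type": "empirical", "technology": "mixed reality",   "population": "master"},
    {"study_type": "empirical", "technology": "virtual reality", "population": "high school"},
]

kept = [r for r in records if include(r)]
print(len(kept))  # 1: only the first record satisfies all three rules
```

In the actual study the rules were applied by five human screeners over three rounds, with borderline papers resolved by discussion; a filter like this only captures the mechanical part of that process.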

Code analysis method

This paper uses the coding table shown in Table 1 to quantify the 80 documents obtained by screening. Table 1 is divided into three first-level dimensions, namely research context, related technology and teaching effect evaluation. Each first-level dimension is divided into 2–4 second-level dimensions.

Table 1 . Coding table.

Analysis of the annual distribution of literature

Statistics on the 80 selected papers were compiled, and the trend in the number of empirical papers by year is shown in Figure 2 . The horizontal axis represents the year, and the vertical axis the number of articles per year. Observing Figure 2 , we can see that from 2012 to 2017 researchers were not enthusiastic about the application of VR in higher education, but from 2018 the number of papers published shows a clear upward trend. The number published in 2020 reached its peak for the decade, ten times the number published in 2012. Two reasons suggest themselves. The first is the development of VR technology: after 2018, VR technology matured and the price of related equipment became relatively cheap, so many colleges and universities were able to purchase it. Second, since the outbreak of the COVID-19 pandemic at the end of 2019, many college students have been unable to come to campus because they or their schools were in risk areas, yet the practical skills in the syllabus still had to be mastered, so researchers turned their attention to VR technology, whose 3I characteristics (immersion, interactivity, imagination) allow students to complete skills training without returning to school.

Figure 2 . Annual distribution of empirical literature on VR applications in higher education.

Analysis of the research context

Classifying the empirical research papers yields a statistical graph of the majors to which VR is applied in higher education, shown in Figure 3 . Figure 3 makes clear that VR technology is most widely used in medical education: of the 80 papers, 53 are empirical studies in medicine, accounting for 66%, followed by engineering, science, physical education, art, history, and others. In addition, VR technology has been applied to general higher-education courses and other skills training, such as ideological and political education, English teaching, and speech training. Classifying the participants shows that studies of VR in higher education mainly involve undergraduates: of the 80 papers, 77 studied undergraduates and 3 studied graduate students, while no empirical study took junior college students or doctoral students as participants. Higher medical education is an important part of higher education and shoulders the mission of cultivating medical talent and maintaining and promoting human health. VR technology lets medical students train skills repeatedly without worrying about experimental resources. For example, Jung et al. used VR to teach basic laparoscopic skills, with students practicing the basic operations repeatedly on a laparoscopic simulator ( Jung et al., 2012 ). VR technology can also help medical students build empathy so that they treat patients more appropriately: researchers who applied VR to elderly-care nursing education found that students showed greater understanding of and empathy for elderly people with age-related conditions such as macular degeneration and hearing loss ( Dyer et al., 2018 ).
Immersive VR courses have improved medical students' ability to assess patients with dyspnea, allowing timely escalation of care for patients with signs of respiratory failure ( Zackoff et al., 2020 ). Science and engineering programs in higher education likewise require students to master certain practical skills, and researchers can use VR technology for virtual training or virtual experiments. Virtual training is skill training carried out in a virtual environment. For example, when VR was applied to a welding course, most students' final exam scores in welding practice were significantly higher than their mid-term scores, and students clearly approved of the VR-assisted course ( Huang et al., 2020 ); when VR was applied to machine tool training, the results showed that training in the virtual environment was greatly improved ( Chen et al., 2019 ). Compared with a real training room, virtual training not only saves teaching costs and avoids safety risks but also stimulates students' interest, makes students the active subjects of learning, and promotes active learning. Virtual experiments reproduce experiments in a simulated environment: where physics or chemistry experiments cannot be carried out in practice because of site restrictions or safety hazards, students can conduct them with VR equipment. Overall, the results show that the main participants in VR applications in higher education are undergraduates, the majors are mainly science, engineering, and medicine, and applications in humanities and social sciences such as history and art are relatively rare.

Figure 3 . The professional statistics situation of VR application in higher education.

Research on VR equipment

The main devices for VR applications in higher education include computers, headsets, VR simulators, mobile phones, and so on. Computers are the primary equipment: they are used to build virtual laboratories, develop VR games, and more. A VR head-mounted device blocks the wearer's sight and hearing of the external environment and guides the user into the feeling of being inside a virtual environment. Head-mounted devices can deliver an excellent sense of realism, making the content of a book touchable, interactive, and perceptible. VR headsets fall into three categories: external (tethered) headsets, all-in-one headsets, and mobile headsets. Headsets such as the HTC Vive and Oculus Rift not only give users a high degree of immersion but also provide interactivity. VR headsets generally support Bluetooth connections, so users can interact by operating Bluetooth peripherals such as external controllers. The advantage of this type of interaction is that controller operation is familiar to users, and because the required range of motion is small, comfort is increased. However, external Bluetooth controllers also have disadvantages. They have too many buttons, many of which users never touch, and the interaction is unrealistic: in the real world, picking up an object from the ground requires squatting down and opening the palm, but in the virtual world the same action is achieved by moving a finger and pressing a button, which runs somewhat counter to the immersion VR equipment aims for. VR systems can be divided into three categories by immersion: non-immersive systems, such as desktop VR with 3D interactive animation; immersive systems, such as CAVE-based VR with surrounding wall screens in a closed room; and fully immersive systems, such as Google Cardboard. This study compares VR devices according to this classification of immersion in the literature.

In this study, VR simulators mainly refer to medical simulators used for basic teaching and skills training in medical education. Typical simulators include laparoscopic simulators, dental simulators, and clinical puncture simulators. Traditional medical training follows the "see one, do one, teach one" model, but this model no longer works in the world of modern medical technology. For example, laparoscopic VR simulators allow medical students to train in laparoscopic and gynecological surgery in a highly realistic, risk-free environment ( Mulla et al., 2012 ), and the Moog Simodont dental simulator allows students to practice dental restoration skills repeatedly without risk of accidental injury ( Murbay et al., 2020 ). Studies have shown that simulators greatly improve training results compared with traditional methods. A high-quality simulation experience lets trainees set aside their doubts and immerse themselves in the training scenario: if a virtual patient suddenly bleeds during training, the bleeding can be rendered visually and haptically realistically enough that the trainee engages fully in finding a quick solution. A laparoscopic simulator can also record trainee behavior with millimeter precision, providing valuable feedback not only on how the trainee performs during the procedure but also on the likely outcomes for the simulated patient.

Compared with VR headsets, mobile phones are more portable. Combining a phone with VR glasses creates a VR environment that students can access through mobile terminals. For example, Kader et al. created a VR crime-scene activity with Web VR software such as Uptale, in which students participate remotely through mobile devices, investigating a virtual crime scene and collecting evidence for subsequent analysis ( Kader et al., 2020 ). This paper compiles statistics on the VR devices used in the 80 papers; the results are shown in Figure 4 . Figure 4 shows that the number of VR devices in 2017–2021 increased significantly compared with 2012–2016. From 2012 to 2016 the main device for VR in higher education was the computer, followed by simulators and head-mounted devices; from 2017 to 2021, however, headsets became the main equipment, and the proportion of teaching using VR simulators fell greatly. A likely reason is that, with the rapid development of the technology, researchers prefer cheaper, more interactive devices that immerse users deeply in the virtual environment. In addition, the literature analysis shows that many studies lean toward practical application and lack standards for evaluating technical indicators and operating environments; this gap deserves researchers' in-depth attention in the future.

Figure 4 . Statistics of VR teaching equipment. (A) VR Equipment (2012-2016). (B) VR Equipment (2017-2021). (C) Number of VR devices.

Research on the effect of VR teaching

Teaching effect is a common concern of researchers. As shown in Figure 5 , researchers used a variety of methods to evaluate the teaching effect of VR in higher education: traditional evaluation, embedded evaluation, and mixed evaluation. Traditional evaluation includes questionnaires, test papers, and various tests; embedded evaluation is completed inside the VR environment; mixed evaluation combines the two. Figure 5 shows that researchers studying VR in higher education mostly used traditional evaluation, with 64 papers accounting for 80% of the literature. For example, Samosorn et al. used questionnaires and tests to measure the effect of a VR intervention after training nursing students in difficult airway management ( Samosorn et al., 2020 ). Embedded evaluation came next, with 9 papers accounting for 11.25% of the literature; four studies used mixed evaluation, and three studies did not evaluate the teaching effect at all. In most of the 80 papers the reported teaching effect is positive. For example, Arif found that students showed stronger concentration in a virtual reality environment ( Arif, 2021 ). Some results, however, are less favorable: Harrison et al. found that in teaching surgical hand preparation, VR showed no perceptible advantage over traditional video teaching ( Harrison et al., 2017 ). This paper also examines how VR teaching affects students' learning results across the 80 screened papers. VR applications in higher education mainly affect learning results by influencing students' behavior ( n = 35), followed by students' cognition ( n = 22), and finally students' emotional attitudes ( n = 6).

Figure 5 . Statistics of evaluation methods.

VR research methods

This paper also sorts and categorizes the research methods in the literature. The counts below exceed the total number of papers because one article may use more than one research design, data collection method, or data analysis method. The results show 63 empirical quantitative studies, 13 empirical qualitative studies, 29 design-oriented studies, and 5 papers with no stated method. The data collection methods used in the literature mainly include questionnaires, tests, interviews, observations, and experiments, with questionnaires ( n = 39) and tests ( n = 39) dominant. Semi-structured questionnaires fall between structured and unstructured questionnaires: they contain both preset, fixed, standardized options and questions the respondents can answer freely. They therefore combine the advantages of both types and are widely used in survey research. Beyond questionnaires and tests, some papers used interviews ( n = 8) and observation ( n = 5), mostly for qualitative research. The data analysis methods are mainly descriptive analysis ( n = 25), difference analysis ( n = 57), and correlation analysis ( n = 2). Difference analysis includes the t -test, ANOVA, ANCOVA, the chi-square test, the Mann-Whitney U -test, and related methods; correlation analysis includes correlation coefficients, regression models, factor analysis, and similar methods. Of the 57 articles that used difference analysis, the t -test ( n = 24) was the most common method for quantitative data analysis; the other methods averaged only 10–12 articles each. Most of these studies find that VR-based interventions are positive and improve teaching effectiveness.
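Since the t-test is the analysis method the reviewed studies use most often, a minimal paired-samples example may help. The pre/post scores below are invented for illustration and are not drawn from any reviewed study; the sketch computes the t statistic by hand using only the Python standard library.

```python
import math
import statistics

# Hypothetical pre-test and post-test scores for one group of eight students
# (invented data, purely illustrative).
pre  = [62, 70, 65, 58, 74, 69, 61, 66]
post = [71, 75, 72, 64, 80, 77, 70, 73]

# Paired-samples t statistic: the mean of the per-student differences divided
# by the standard error of those differences.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
df = n - 1  # degrees of freedom for a paired test

print(f"t({df}) = {t:.2f}")
```

In practice a library routine such as `scipy.stats.ttest_rel` would also return the p-value; the manual form above just makes the arithmetic behind the reported tests explicit.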

Answers to the questions

This paper organizes and analyzes the literature on VR applications in higher education. The results show that the main participants are undergraduates and that the majors involved are mainly science, engineering, and medicine; humanities and social science majors such as history and art have relatively few applications. The main devices are computers, headsets, VR simulators, mobile phones, and so on. With the development of the technology, head-mounted devices have become the main equipment, because researchers prefer devices that are less expensive and more interactive and that immerse users deeply in the virtual environment. Compared with traditional education, the vast majority of studies show that applying VR in higher education has positive effects, chiefly by influencing students' behavior, secondly their cognition, and finally their emotional attitudes, all of which in turn affect learning results. A likely reason is that the immersion and interactivity of VR environments suit highly practical courses in higher education, such as medical anatomy, dental restoration, and machine tool operation training. In addition, scholars mainly use objective tests and subjective questionnaires to evaluate the teaching effect of VR, with a small number of researchers using embedded evaluation.
The results also show that the main research method for studying VR in higher education is quantitative research, the main data collection methods are questionnaires and tests, and most questionnaires are semi-structured. The data analysis methods are mainly descriptive analysis and difference analysis; among the 57 articles using difference analysis, the t -test is the most commonly used quantitative method.


Based on these findings, we offer the following suggestions, hoping to provide references for applying VR in higher education. First, VR in higher education is applied mainly to undergraduates and mainly in science, engineering, and medicine-related majors; history, art, and other humanities and social science majors see relatively little application, so researchers could turn their attention to teaching in the humanities and social sciences. We hope future researchers will make full use of VR's advantages and deepen its application across the whole field of higher education.

Second, the review shows that the VR devices currently applied in higher education are mainly computers and head-mounted displays. Immersion and interactivity have improved compared with the past, but there is still much room for development. Current devices are not portable enough for students to study anytime, anywhere, and mobile phones, though portable, must be combined with devices such as VR glasses to simulate a VR environment, which greatly reduces immersion and interactivity; we hope future researchers can break through these technical difficulties. Third, in the literature studied, only a few researchers gave students VR training and teaching guidance before class. Without guidance, students operate the equipment incorrectly and cause unnecessary problems: they explore the VR equipment aimlessly in class, which wastes teaching time and harms teaching effectiveness, and because the VR display covers most of the student's field of vision and the immersion is very strong, it can easily cause 3D motion sickness. Providing training or guidance during VR teaching is therefore essential. Finally, most researchers must develop, test, and continuously improve VR educational resources before applying them to teaching, which is time-consuming and labor-intensive.
In the future, researchers can develop more professional and mature VR educational resources tailored to the teaching objectives of different majors and apply them directly in higher-education teaching, promoting the deep integration of VR technology across the field.


Due to the nature of the review, selection, and filtering process, our work has several limitations. First, this paper does not explore in depth the application of VR to every major mentioned in the text, but only discusses some representative ones. Second, this study does not focus on obstacles to applying VR in teaching, such as adverse reactions after students use VR equipment and the network conditions VR equipment requires. These obstacles are technological, however, and we believe the development of technology will soon resolve them.

Conclusion and future research

In this paper we focus on the application of VR technology in higher education, examining the teaching contexts, VR devices, teaching effects, and research methods in the recent literature on educational VR. The review shows strong interest in applying VR in higher education: many researchers regard VR as a very useful teaching tool and have conducted many experimental studies. But the maturity of VR use in higher education remains uncertain, both in the VR technology used and in the teaching resources educators have developed. Moreover, most researchers apply VR to practice-oriented teaching, which implies that VR is used mostly in highly practical majors. This paper also points out the problems and puts forward suggestions. Our work will continue with an analysis of the VR technologies available for higher education, followed by an in-depth survey of higher-education workers to gain a more detailed understanding of the current state of adoption. Our goal is to deepen the application of VR in higher education.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Author contributions

XD and ZL contributed to conception and design of the study. XD organized the database, performed the statistical analysis, and wrote the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.

Funding

This work was supported by the Fujian Province Educational Science Plan 2021 key special project for high-quality development of basic education, "Fujian Province Robot Education Application Research under the National Intelligent Manufacturing Industry Policy" (Grant No. FJWTZD21-06), and by the 2022 Fujian Provincial Social Science Fund project "Research on the construction of foreign language teaching model based on educational robots" (Grant No. FJ2022BF015).


Acknowledgments

We thank ZL for his guidance on the manuscript.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Alalwan, N., Cheng, L., Al-Samarraie, H., Yousef, R., Alzahrani, A. I., and Sarsam, S. M. (2020). Challenges and prospects of virtual reality and augmented reality utilization among primary school teachers: a developing country perspective. Stud. Edu. Evaluat. 66, 100876. doi: 10.1016/j.stueduc.2020.100876

Arif, F. (2021). Application of virtual reality for infrastructure management education in civil engineering. Edu. Inform. Technol. 26, 3607–3627. doi: 10.1007/s10639-021-10429-y

Chen, L. W., Tsai, J. P., Kao, Y. C., and Wu, Y. X. (2019). Investigating the learning performances between sequence-and context-based teaching designs for virtual reality (VR)-based machine tool operation training. Comput. Appl. Eng. Edu. 27, 1043–1063. doi: 10.1002/cae.22133

Duarte, M. L., Santos, L. R., Júnior, J. G., and Peccin, M. S. (2020). Learning anatomy by virtual reality and augmented reality. A scope review. Morphologie 104, 254–266. doi: 10.1016/j.morpho.2020.08.004

Dyer, E., Swartzlander, B. J., and Gugliucci, M. R. (2018). Using virtual reality in medical education to teach empathy. J. Med. Library Assoc. JMLA 106, 498. doi: 10.5195/jmla.2018.518

Harrison, B., Oehmen, R., Robertson, A., Robertson, B., De Cruz, P., Khan, R., et al. (2017). “Through the eye of the master: the use of virtual reality in the teaching of surgical hand preparation,” in 2017 IEEE 5th International Conference on Serious Games and Applications for Health (SeGAH: IEEE). p. 1–6. doi: 10.1109/SeGAH.2017.7939269

Hoffman, H., and Vu, D. (1997). Virtual reality: teaching tool of the twenty-first century?. Acad. Med. J. Assoc. Am. Med. Colleges 72, 1076–1081. doi: 10.1097/00001888-199712000-00018

Huang, C. Y., Lou, S. J., Cheng, Y. M., and Chung, C. C. (2020). Research on teaching a welding implementation course assisted by sustainable virtual reality technology. Sustainability 12, 10044. doi: 10.3390/su122310044

Jung, E. Y., Park, D. K., Lee, Y. H., Jo, H. S., Lim, Y. S., and Park, R. W. (2012). Evaluation of practical exercises using an intravenous simulator incorporating virtual reality and haptics device technologies. Nurse Edu. Today 32, 458–463. doi: 10.1016/j.nedt.2011.05.012

Kader, S. N., Ng, W. B., Tan, S. W. L., and Fung, F. M. (2020). Building an interactive immersive virtual reality crime scene for future chemists to learn forensic science chemistry. J. Chem. Edu. 97, 2651–2656. doi: 10.1021/acs.jchemed.0c00817

Krokos, E., Plaisant, C., and Varshney, A. (2019). Virtual memory palaces: immersion aids recall. Virtual reality 23, 1–15. doi: 10.1007/s10055-018-0346-3

Moore, P. (1995). Learning and teaching in virtual worlds: implications of virtual reality for education. Aust. J. Edu. Technol. 11:92–102. doi: 10.14742/ajet.2078

Mulla, M., Sharma, D., Moghul, M., Kailani, O., Dockery, J., Ayis, S., et al. (2012). Learning basic laparoscopic skills: a randomized controlled study comparing box trainer, virtual reality simulator, and mental training. J. Surg. Edu. 69, 190–195. doi: 10.1016/j.jsurg.2011.07.011

Murbay, S., Chang, J. W. W., Yeung, S., and Neelakantan, P. (2020). Evaluation of the introduction of a dental virtual simulator on the performance of undergraduate dental students in the pre-clinical operative dentistry course. Eur. J. Dental Edu. 24, 5–16. doi: 10.1111/eje.12453

Rogers, S. (2019). “Virtual reality: the learning aid of the 21st century” in Secondary Virtual Reality: The Learning Aid of the 21st Century , New York.

Ryan, M. L. (2015). Narrative as Virtual Reality 2: Revisiting Immersion and Interactivity in Literature and Electronic Media . Baltimore: JHU press.

Samosorn, A. B., Gilbert, G. E., Bauman, E. B., Khine, J., and McGonigle, D. (2020). Teaching airway insertion skills to nursing faculty and students using virtual reality: a pilot study. Clin. Simulat. Nursing 39, 18–26. doi: 10.1016/j.ecns.2019.10.004

Shin, D. (2018). Empathy and embodied experience in virtual environment: To what extent can virtual reality stimulate empathy and embodied experience? Comput. Hum. Behav. 78, 64–73. doi: 10.1016/j.chb.2017.09.012

Zackoff, M. W., Real, F. J., Sahay, R. D., Fei, L., Guiot, A., Lehmann, C., et al. (2020). Impact of an immersive virtual reality curriculum on medical students' clinical assessment of infants with respiratory distress. Pediatric Critical Care Med. 21, 477–485. doi: 10.1097/PCC.0000000000002249

Keywords: virtual reality, higher education, empirical research, virtual environment, immersion learning

Citation: Ding X and Li Z (2022) A review of the application of virtual reality technology in higher education based on Web of Science literature data as an example. Front. Educ. 7:1048816. doi: 10.3389/feduc.2022.1048816

Received: 20 September 2022; Accepted: 02 November 2022; Published: 17 November 2022.

Copyright © 2022 Ding and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Zhe Li,

This article is part of the Research Topic

Education and Innovative Perspectives in Higher Education

  • Cities of USA
  • San Francisco
  • Los Angeles
  • Philadelphia
  • Other Cities
  • Services in USA
  • Content Marketing
  • Digital Marketing
  • Digital Strategy
  • Email Marketing
  • PPC Marketing
  • Social Media Marketing
  • Other Services
  • Industries in USA
  • Fashion & Retail
  • Food & Beverage
  • Hospitality
  • IT & Technology
  • Legal Services
  • Other Industries
  • Agency of the Month


  • Cities of the UK
  • Bournemouth
  • Services in the UK
  • Industries in the UK


  • Cities of Canada
  • Services in Canada
  • Industries in Canada


  • Cities of Australia
  • Services in Australia
  • Industries in Australia


  • Cities of Europe
  • Services in Europe
  • Industries in Europe

write an article about virtual reality

  • Cities of Asia
  • Services in Asia
  • Industries in Asia


  • Agency News
  • Marketing Resources
  • Industry News


  • Digital Ad Campaigns
  • Case Studies
  • Social Media Campaigns


  • Mobile Marketing

write an article about virtual reality

  • AI Marketing
  • Startup Marketing


  • Digital Marketing Tools
  • Marketing Reporting Tools
  • Digital Marketing Analytics Tools
  • Email Marketing Tools
  • Other Tools
  • Social Media Management Tools
  • Social Media Marketing Tools
  • Social Media Analytics Tools
  • Social Media Monitoring Tools
  • Influencer Marketing Platforms
  • Web Design Tools
  • Landing Page Builders
  • UI / UX Design Tools
  • Website Builder Software
  • Front End Development Tools
  • Team Management Softw...
  • Project Management Tools
  • Agency Management Software
  • Productivity Management Software
  • Time Tracking Tools
  • Sales Tools
  • Sales Automation Tools
  • Product Feed Management Tools
  • Sales Enablement Tools
  • AI Design Tools
  • AI Content Tools
  • AI Analytics Tools
  • AI Marketing Tools
  • Performance & Software
  • Website Optimization Tools
  • Content Delivery Network Tools
  • Cybersecurity Software
  • Web Accessibility Tools

Market your SaaS Tools and reach digital agencies & marketing professionals worldwide.

Virtual Reality & Augmented Reality

  • Virtual Reality & Augmented Reality

Latest virtual reality & augmented reality articles in 2023

Browse the best VR & AR articles on latest strategies, trends, various tools and ideas for marketers. Read our thought leadership articles on best practices to stay up to date on virtual reality & augmented reality marketing trends in 2023.

  • Advertising
  • Development
  • SEO & SEM


How Will AR / VR Headsets Affect Media & Entertainment Industry

As innovations in technology advance, we often find ourselves wondering: Is virtual reality going to be important in entertainment? And the answer is a big YES. Augmented reality and virtual reality are two of the most exciting new technologies in …

write an article about virtual reality

Will Generative AI and Mixed Reality Technologies Change Our Lives Radically?

There is no doubt that in recent years, all players in the advertising and marketing industry have been discussing how artificial intelligence will play a big role in our lives. However, it seems that the story does not end with …

NFT and cryptocurrency usage in digital marketing

NFT and Cryptocurrency Usage in Digital Marketing With Successful Case Studies

NFT and cryptocurrency is not new for the folks in the digital marketing industry. In the last few years, we all have heard about the brands increasingly using NFTs and cryptocurrencies in their marketing strategies. It is not surprising because, …

VR in tourism

Applications Of Virtual Reality In Tourism Marketing Strategy

Virtual reality is a handy marketing tool in the tourism industry. As a field that opens up a vast space for creativity, tourism embraces VR-based marketing projects.  Marketing gets the most out of tech opportunities to boost engagement and build …

Is the Metaverse the Future of Digital Marketing?

In 2021, Facebook rebranded to ‘Meta’ and announced that the metaverse would be its new focus moving forward, with Mark Zuckerberg stating his intention to build a metaverse that would become ‘the next internet’. As marketers, we of course began …


NFTs Explained: Everything You Need to Know

This cultural phenomenon has taken the world by storm, and many of us are still confused by it. Around since 2014, but breaking into mainstream media in 2021, NFTs have become all the buzz …


How Can Digital Marketing Agencies Use NFTs?

Non-fungible tokens (NFTs) are rapidly taking over the digital marketplace as artists, musicians, and other creators strive to authenticate their virtual works. Media outlets and digital entrepreneurs like Gary Vaynerchuk rave about NFTs becoming the “next big thing” in terms …


The Ultimate Guide for NFT Marketing Strategy

NFT marketing strategy helps you extend your audience reach, build trusting connections, and boost revenue. NFTs are snowballing, and NFT marketing campaigns are gaining popularity fast. The space is still new, and therefore welcoming to fresh ideas. Let …


What’s In Metaverse for Digital Marketing and Brands?

Metaverse marketing allows brands to maximise their creativity and reach their target audience in the most attractive way. That’s why brands are turning to the metaverse for digital marketing one by one. Even though it seems to be a sudden hot …


Why Metaverse Is The Key For AR/VR And NFT-savvy Brands And Marketers

Back in the 1970s people had no idea just how immersive the internet would be. In fact, back in 1996, Robert Metcalfe (inventor of Ethernet) predicted the internet would soon die.  We can look back on these predictions now and laugh at …


12 Smart Virtual Reality Opportunities For Businesses in 2022

Virtual reality will have a wide and lasting impact on our work, education and home lives. The emergence of commercial VR technologies has led to an increase in innovation, with a wide range of businesses looking for Virtual Reality opportunities …


The Rise of Learning 4.0: How Online Learning and Training are Revolutionizing Education

Learning, as we all know, is ubiquitous. And this stands especially true for organizations aiming to flourish in today’s competitive market space – a space that requires employees to constantly enhance their skillset. This makes it essential for organizations to …




Virtual-Reality School Is the Next Frontier of the School-Choice Movement

By Emma Green

It’s 6 A.M. A little girl, who looks to be about ten years old, hits the button on her alarm clock. She eats a bowl of cereal and brushes her teeth and hair before going to school. In class, she takes notes while her teacher, Mrs. Marty, gives a lesson. Then everyone puts on spacesuits and helmets, and the class relocates to outer space.

This is the vision for a new kind of education sold in a promotional video for Optima Academy Online, an all-virtual school that was launched in 2022. The little girl, like most of her classmates and teachers, spends a good part of her day in a Meta Quest 2 headset—a set of one-pound white goggles that extends in a single band across her eyes. She wears the headset on and off for about three hours, removing it to read a book, eat a sandwich, and hot-glue some sort of tinfoil art. Her classmates are scattered across different towns, and her teachers live all over the country. In the video, the little girl doesn’t have a single in-person interaction.

The virtual school is part of OptimaEd, a company in Florida founded by Erika Donalds, a forty-three-year-old conservative education activist. During the past school year, the academy enrolled more than a hundred and seventy full-time students up to eighth grade from all over Florida—a number that OptimaEd will roughly double this fall. Starting in third grade, full-time students wear a headset for thirty to forty minutes at a time, for four or five sessions, with built-in pauses so that the students don’t experience visual fatigue. (Younger students do something closer to regular virtual school, using Microsoft Teams and Canvas.) In the afternoon, kids complete their coursework independently, with teachers available to answer questions digitally.

OptimaEd is possible because of Florida’s distinctive education-policy landscape. The state was one of the pioneers of the school-choice movement. Ever since Jeb Bush was governor, in the early two-thousands, Florida has provided various kinds of vouchers to students from poor families, and later to those with disabilities, allowing them to purchase courses from companies like OptimaEd. Governor Ron DeSantis expanded that program by making all students eligible for education vouchers, funded with the money that would otherwise go toward their public-school education. This legislation has made it even easier for parents to use state dollars for OptimaEd’s products. But the company is also quickly expanding beyond Florida. This fall, it’s providing V.R. services to students in Arizona—another state that has embraced school choice—and parts of Michigan.

OptimaEd bills its education as classical, with an emphasis on the intellectual traditions of Western civilization and the liberal arts. Younger students learn phonics and diagram sentences. Older ones read the great books and the Constitution. Teachers talk a lot about virtues, such as courage and self-government. “It’s a very traditional, back-to-basics education,” Donalds said on a podcast recently.

Donalds comes from the world of Florida school-choice activism. She’s well known in Florida political circles: a few of Donalds’s closest activist allies founded the group Moms for Liberty , which has become the leading conservative voice in the movement for parents’ rights in education, and Donalds serves on the group’s advisory board. She is also married to a congressman, Byron Donalds, a rising star in the Republican Party, who was briefly a contender for Speaker of the House in 2023. (A number of Republicans in Florida have encouraged him to run for governor once DeSantis is out of office.) The movements for school choice and parental rights sometimes dovetail with the classical-school movement, which has been experiencing a revival in America since the nineteen-eighties. Whereas the former often focusses on the shortcomings of public schools, the latter offers an alternative vision for education: a way of teaching students that calls back to the ancient wisdom and traditions of the Western world, instead of instructing them using progressive pedagogy and frameworks.

Erika Donalds has built an experiment in total parental control over education. “I see a huge and growing industry of à-la-carte education options—the ability to customize the experience both physically and geographically,” Donalds said. “We’ve been told that only certified teachers in a traditional classroom environment can deliver instruction. And we know that’s just not true.” She believes that virtual-reality school hits many of the benefits of in-person learning—real-time instruction, classmates, field trips—while letting families build the schedules and communities they want. If parents aren’t satisfied with whatever ideas their local public school is pushing, they can opt out by putting their kid in a headset. “If you entrust your child with us, you know that the curriculum is not going to be contrary to what you’re teaching at home,” she said.

Donalds is a certified public accountant, so it’s fitting that her radicalization began with something called “new math.” The Common Core, an effort to standardize grade-level learning across the country, was being rolled out in 2010. The bipartisan Common Core initiative was led by policy wonks who wanted to make American students more globally competitive, partly in response to George W. Bush’s No Child Left Behind legislation, which had created a patchwork of different standards across the states. Common Core leaders adopted techniques used in other industrialized countries, such as new strategies of mathematical reasoning—students, for example, might be required to draw out a multi-step number line in order to complete a simple subtraction problem. The oldest of the Donaldses’ three sons was in elementary school at the time, and, like many others, he found the new process confusing. Donalds started attending anti-Common Core rallies, wearing a T-shirt that read “Stop Common Core,” featuring a stop sign and an apple with a worm inside. Critics on the left objected to the initiative’s emphasis on standardized testing; those on the right saw it as an example of federal overreach into local schools, led by faraway bureaucrats in Washington. Parts of the anti-Common Core movement were associated with the Tea Party, which eventually helped launch Byron’s political career.

Erika Donalds ran for her local school board in 2014 and served for four years. But, during that time, she discovered a more wide-reaching way to change the education landscape in Florida. Her husband had received an invitation to join the board of a new classical charter school, Mason Classical Academy, that was opening in Naples. The family was interested in the educational model, which they saw as an improvement over their public-school experience because of its rigor and focus on direct encounters with original texts. Byron Donalds joined Mason’s board, and Erika Donalds took on an unpaid position doing accounting and administrative work to get the school launched. All three of their sons eventually enrolled. Their activism became more focussed. They weren’t just advocating school choice. They wanted to expand the classical model across America.

Throughout the next few years, conflicts started emerging at Mason. Erika and Byron thought the school lacked proper oversight and planning. In 2019, a special counsel from the county school district found mismanagement and asked for two of Mason’s board members to step down. (Mason has called this report a “sham,” and another review, conducted by a law firm that the school hired, found no mismanagement.) Mason sued Erika Donalds and various others, alleging a conspiracy to take over Mason; Donalds filed a motion to dismiss the suit, which she’s called frivolous. By then, the couple had pulled their kids out of the school—and Erika Donalds had started laying the groundwork for Optima. “I hate to see poor decision-making by a few bad actors damage our movement,” she wrote in a letter to “Friends and Colleagues.”

Donalds wanted to found and run classical charter schools, using the curriculum provided by Hillsdale College , a small liberal-arts school in Michigan. (The virtual academy, which was founded later, does not use Hillsdale’s curriculum.) So far, Optima has launched five brick-and-mortar schools, providing them with administrative services. (Two of the schools have since decided to become independent.) “These organizations are like multimillion-dollar businesses,” Donalds told me. “I saw an opportunity to bring my skills into that industry.” Today, Optima has figured out a number of ways to advance its ideas beyond its charter schools. It recently started doing professional-development training for the Tennessee Department of Education as a subcontractor, and hopes to consult with more state and local governments.

The open marketplace created by school-choice policies can occasionally blur the purpose of public money. The state of Mississippi is suing a virtual-reality company called Lobaki for using funds allocated to welfare to create a virtual-reality school. (Lobaki denies that it misused state welfare funds.) This is one piece of a dizzying corruption case embroiling a former governor, a retired pro-wrestler, and the football Hall of Famer Brett Favre. One of Lobaki’s co-founders, Vince Jordan, is now OptimaEd’s chief technology officer. (Jordan denied all wrongdoing and emphasized that he is no longer involved with Lobaki.) When I brought this up to Donalds, she said that she had no knowledge of the lawsuit. The laws regulating charter schools can also be complicated. This summer, the Florida Department of Education wrote to Optima Academy, saying that the virtual school had over-enrolled non-local students and would have to pay a penalty of some four hundred and seventy thousand dollars. Donalds countered that the law doesn’t apply to charter schools, and said that they haven’t yet had to pay any fines.

In Florida, school choice was given a huge boost by the pandemic. Many families came away dissatisfied with the country’s forced experiment in remote learning, but some found that they liked the model. OptimaEd has capitalized on that interest, pitching itself directly to homeschooling families and to churches. “Pastors, are you ready to take a more active role in providing quality school choice options to your congregation and community?” one flyer reads.

Support for school choice does not necessarily follow predictable racial and political lines. Black families tend to be more supportive of charter schools, education savings accounts, and vouchers and scholarships for private school than other racial groups, according to the journal Education Next . “Lots of Black families and Latino families are happy to send their kids to charter schools, because, in general, they want their kids to go to what they think is the best school, and they’re going to leverage the options that are available to them,” Liz Cohen, the policy director of FutureEd, a think tank at Georgetown, said. Optima’s online academy reflects this diversity: last year, forty-six per cent of its students were nonwhite, and a fifth were economically disadvantaged. Erika Donalds’s own family is mixed race: her husband is Black, and she is white. She says that she often gets stereotyped in ways that don’t reflect her family life. “The race card is played against people who espouse conservative values,” she told me. “People who oppose our ideas try to discredit us by questioning our motives.”

Donalds is deeply involved in the flourishing classical-education world. She’s closely connected to Hillsdale, which has led a significant expansion of K-12 classical schools. She’s on the board of the Classic Learning Test, a standardized exam that students can take instead of the SAT or ACT, mostly when applying to religious colleges or certain small liberal-arts schools—and, if they live in Florida, when applying for a state-funded college scholarship. Along with her husband, she has also helped raise the profile of classical education with right-wing figures who have significant public platforms. She has given tours of the virtual academy to the Fox News commentator Greg Gutfeld, the former Secretary of Education Betsy DeVos , and the former Florida Governor Jeb Bush. The Donaldses hosted a 2022 winter fund-raiser called A Classical Christmas, featuring Donald and Melania Trump as the star guests; all tickets came with a photo op with the former President and First Lady.

And yet Donalds has also discovered that V.R. education, in particular, can be a hard sell in the classical-ed world, whose members generally pride themselves on preserving tradition, both in terms of curriculum and form. She agrees that in-person classical schooling is still the best option for kids. But, she says, “We cannot scale in-person, classical-style schools as quickly as the demand would like us to.” (Great Hearts, a prominent network of classical schools, reported that nearly eighteen thousand students were on wait lists for its schools in the 2019-20 school year, for example.) Allergies to new technologies are not new: Donalds said that Adam Mangana, her co-founder, often talks about how Socrates was skeptical of writing because he thought it was inferior to oration. “Yes, this is an innovation,” she said, of the virtual academy. “But it will allow more people to access what we believe is the greatest type of education that’s ever been offered.”

There are about two hundred and fifty custom environments in which Optima Academy Online students and teachers can gather for lessons. These places do not exist in real life; they were built by OptimaEd’s staff using virtual furniture, buildings, and natural elements. (This is one of the things OptimaEd sells: independent schools, for example, can pay to have access to these custom-built environments.) According to Donalds, Jeb Bush, the former governor of Florida and an early school-choice promoter, was wowed by his virtual-school demo, asking with wonder, “Where is this?” Nowhere, Jeb. It’s nowhere.

This being a classical virtual-reality school, Optima’s environments include settings in ancient Greece and Rome. Recently, the head of the online academy, Dan Sturdevant, and its academic dean, Kim Abel, took me on a tour of an “early Roman outpost.” The images were closer to an animated video game than to documentary footage. We teleported past a Roman official’s house, decked out with red-clay roof tiling, up some stairs to an open patio of black-and-white-checkered marble floors, surrounded by Ionic columns and an ivy-covered railing. Here, a teacher might spawn a set of bleachers for students to sit during a lecture on a subject such as history or Latin. The head of the virtual school’s history department, Jonathan Olson, has a Ph.D. in American religious history, and is responsible for verifying the historical fidelity of the ancient sites, bleachers notwithstanding.

Toward the end of the school year, I joined a sixth-grade science class on a field trip to an Everest base camp. The scene was elaborately staged: our group was surrounded by gray tents held up by bright orange poles; a sleeping bag was carefully tucked inside each tent, even though the students wouldn’t actually be sleeping there. A whiteboard stood to our left in the snow, covered with colorful Post-its bearing scientific terms. The teacher had taken a selfie of his avatar wearing an orange mountain-explorer jumpsuit and put it on the board. Wind whistled quietly somewhere in the background of my headset.

The teachers had set up weather-station equipment that a researcher might use, such as a compass and a barometer. The kids struggled with the lesson. When a teacher asked them where air pressure would be greatest—on the beach or at the top of a mountain—they weren’t certain how to answer. The session was chaotic. On a normal day, teachers might press a button and forcibly “seat” the students to prevent them from moving around during a lesson. But, since this was supposed to be an interactive field trip, the kids were free to zoom around at will, which they did; one of them, who had styled his avatar with a helmet, walked right through me. Roughly a dozen sixth-graders were there, but it was hard to keep track of them with all the fidgeting.

In the next activity, we attempted to scale the Khumbu Icefall, which in real life is a deadly stretch on one route up Everest. The format was a cross between a quiz show and a video game. Using the teleport function on our handheld controllers, we moved along a line of chairs set up along the icefall, occasionally passing a floating notecard with a review question. But there was also a physical rule put in place: no one could move the controller in their left hand. Otherwise, they’d fall from the mountain and force the whole group back to base camp, where we’d have to start all over again.

At first, the students discussed the various dangers of the mountain. One suggested that we should all be quiet to prevent an avalanche, and then started screaming to demonstrate what not to do; a teacher quickly muted him. As the exercise got under way, the kids grew increasingly frustrated. “My teleport’s broken!” one of them shouted. Another couldn’t find the next chair in the line up the mountain. People must have been using their left-hand controllers, because the whole group kept falling and getting reset to the beginning of the activity. Every time, a prerecorded message from one of the teachers would say, “Oh, man! Well, hopefully we won’t make that mistake again.” Several kids sped ahead of the rest of the group; it wasn’t clear who was actually looking at the review questions.

Afterward, when I asked Sturdevant about what happened on the mountain, he described it as an opportunity for “virtue development.” “There’s a virtue in serving your peers well by not being a distraction,” Sturdevant said. “There’s a virtue in having self-discipline and being able to control your emotions. Those are the follow-up conversations that we will have with the students.” Mangana, the virtual academy’s chief innovation officer, acknowledged that the gaming aspect of the lesson may have overtaken the content. “Struggle is O.K.,” he said. “Distraction is not. If there are ways in which the technology gets in the way, we’ll correct those things.”

If the lesson wasn’t exactly polished, that may be because there are few schools trying to do what OptimaEd is doing. “If you look at V.R. in the classroom, it is very much still the Wild West,” Aditya Vishwanath, the co-founder and C.E.O. of Inspirit, another V.R. education company, told me. In our conversation, Mangana and Sturdevant argued that the students would remember much more of the material after reviewing it in a context like the Everest field trip, where they were moving their bodies through a distinctive landscape. It’s true that the students’ ability to move around in a V.R. lesson could possibly offer a learning advantage over traditional school. “A lot of people think what makes V.R. incredibly special as a medium is the fidelity of the visuals,” Jeremy Bailenson, the director of Stanford’s Virtual Human Interaction Lab, told me. “Really, what makes V.R. special is the fact that the scene responds naturally to your body.” And in specific circumstances—like when an in-person experience would be dangerous, impossible, counterproductive, or wildly expensive—V.R. can be great, he says.

Still, there are clear downsides. Bailenson, who has taught courses in V.R. to hundreds of college students, said that “it’s hard to fathom a world in which V.R. is the all-the-time medium of teaching just yet.” In the K-12 setting, he added, “Nobody has any idea, scientifically, about what happens when a child wears a V.R. headset for hours per day, over weeks and months on end.” In his lab, where adult researchers study V.R., there is a thirty-minute rule: everyone is required to take a break after thirty minutes in the headset, to take a drink or talk to a friend. V.R. use can cause simulator sickness, a kind of motion sickness that gives some people a nauseous, headachy feeling, like they might get when they’re riding in a car. Long-term V.R. use can also lead to something called reality blurring; Bailenson told me that, in some studies, when people stay in headsets for long periods, they have trouble distinguishing between real life and V.R. Olson, the virtual school’s history expert, told me as much, describing his attempts to move furniture around with a wave of his hand after a full eight-hour workday building environments for the academy. “If anything, I sometimes forget the laws of physics!” he said.

The other potential issue with V.R. school is the lack of community. Optima tries to foster school spirit: kids can decorate their avatar’s clothing with Optima’s logo and mascot (an owl), and they’re sorted into houses, Harry Potter-style. They also have a weekly virtual social hour. But, with the students scattered across Florida, they’re not going to make friends in the way they would at a traditional school; they can’t hang out after class, go to one another’s birthday parties, or host their classmates for sleepovers on the weekends.

Reducing the social importance of school in kids’ lives is perhaps a feature of the virtual school, not a bug. Several people I spoke with mentioned that many of the students had previously been bullied, and that V.R. school can be a haven for children with social anxiety. Diana Hill, a mom in the Orlando area whose son Rylan was a sixth-grader at the virtual school last year, said that he struggled to make friends in traditional public school: “Not one time did he ask to invite someone to a birthday party.” At Optima’s virtual school, Rylan has thrived, his mom said. His parents work from home, and they like having him around, especially because Rylan used to be so anxious about the school day. “He doesn’t have to worry about a school shooter coming in,” Hill said. “He doesn’t even have to worry about the drills anymore.”

Hill told me that Rylan has plenty of in-person friends—but he met them at golf, or church, or in their neighborhood, rather than at school. As Mangana told me, “The schoolhouse is expected to provide so much. A good life for a student is more decentralized.” ♦


This company thinks the future of education is virtual reality schools that offer students a more 'decentralized' experience

  • Florida-based OptimaEd is enrolling students in virtual schools, The New Yorker reported.
  • The company enrolled about 170 students in its academy this past school year, per the report.
  • OptimaEd's cofounder Adam Mangana told The New Yorker a good student life is "more decentralized."

Back in 2015, Oculus founder Palmer Luckey predicted that — sooner or later — virtual reality headsets would find their way into the classroom and enable a new, more immersive future for education. 

"Classrooms are broken. Kids don't learn the best by reading books," Luckey told the Dublin Web Summit that year.

"There's clearly value in real-world experiences: going to do things. That's why we have field trips. The problem is that the majority of people will never be able to do the majority of those experiences."

Now, an online school called Optima Academy Online seems to be bringing Luckey's vision to fruition.

The school, which opened last year, is using the Meta Quest 2 headset to take students on "field trips" to far-off locations such as a Mount Everest base camp, according to a recent report by The New Yorker.

OptimaEd, the Florida-based company behind Optima Academy, is helmed by conservative education activist Erika Donalds, wife of Republican Congressman Byron Donalds .

Erika Donalds is an adherent of the "classical school movement" that advocates a return to older, established learning traditions of the Western world, and OptimaEd labels its education as "classical." 

"I see a huge and growing industry of à-la-carte education options — the ability to customize the experience both physically and geographically," she told The New Yorker. 

Due to Florida's school choice program , which offers students vouchers to attend alternatives beyond their district's public school, students can elect to attend Optima over their local option. In April, Florida governor Ron DeSantis also signed a new law that eliminates financial eligibility restrictions in the state's voucher program. 

Over the past year, Optima Academy enrolled more than 170 full-time students across Florida. That number could double this fall as Optima expands its virtual reality services to Arizona and parts of Michigan, The New Yorker reported.

A field trip to Everest

The school instructs students through a combination of virtual reality sessions and online classes. Those in third through eighth grade are given Meta Quest 2 headsets that they wear for sessions of 30 to 40 minutes, up to five times a day, the publication reported.

Outside these sessions, students spend their days completing coursework independently and corresponding with teachers online. Instruction for kindergarten through second grade resembles conventional virtual school, with both live and pre-recorded classes, according to Optima's website.

Optima Academy offers about 250 custom virtual environments, and also sells access to these environments to other independent schools, The New Yorker reported. 

In one episode recounted in the report, students in a sixth grade science class were taken on a virtual field trip to an Everest base camp.

While the virtual environment was "elaborately staged" with gray tents, sleeping bags, and the sound of wind in the background, the trip didn't go as planned, The New Yorker reported. The students struggled with the lesson and had difficulty coordinating with one another through the various activities.

Research also shows that VR headsets can cause "simulator sickness" or "cybersickness" similar to motion sickness. Studies have also shown that prolonged periods of VR use could even result in "reality blurring," where users have trouble distinguishing between virtual reality and real life, Jeremy Bailenson, the director of Stanford's Virtual Human Interaction Lab, told The New Yorker. 

The founders suggest that school shouldn't be a student's whole life

One of the drawbacks of virtual learning — and online, remote learning on the whole — can be the absence of human interaction. 

Those attending Optima, though, told The New Yorker that it can be a reprieve for students who have social anxiety or have been bullied.

Optima's co-founder Adam Mangana told The New Yorker: "The schoolhouse is expected to provide so much. A good life for a student is more decentralized."



“Very enlightening”: Virtual reality headsets allow Massachusetts students to learn new trades


SALEM, Massachusetts (WBZ) — Due to the skyrocketing cost of a college degree, more and more students are looking for vocational education. With that in mind, some students in Massachusetts are using virtual reality to try out different careers.

Students in several North Shore communities will be able to explore different trades this fall thanks to a program run by Mass Hire North Shore Career Center in Salem.

“Just let them know that college isn’t for everybody and there are other options out there. Because a lot of kids think it’s either college or I don’t know,” Valarie Milardo of Youth Career Council said.

The software, by Transfer-VR, runs on the popular Oculus virtual reality gaming headset and controllers.

“I personally have one of these headsets at my house so I am a bit familiar with getting that learning curve out of the way,” Mason Walsh of Lynn said.

Walsh had some experience in a machine shop, but not specifically with the tools he was learning about.

“The environment in the headset still very much looks like a machine shop,” Walsh said. “So being able to put on the headset and then learn from that, how those tools work that I see every day in the shop, that was very enlightening for me.”

The software allows students to explore careers in aviation maintenance, automotive repair and hospitality, and, for the first time this fall, health care.

“We are designing more curriculum around what we have to offer and to show the kids,” Milardo said.

Mass Hire North Shore Career Center now offers these career exploration opportunities to adults as well.





  1. Virtual reality (VR)

    virtual reality (VR), the use of computer modeling and simulation that enables a person to interact with an artificial three-dimensional (3-D) visual or other sensory environment.

  2. Using AI to create better virtual reality experiences

    At its core, this research confronts the fact that current augmented and virtual reality displays only show 2D images to each of the viewer's eyes, instead of 3D - or holographic - images ...

  3. What is Virtual Reality (VR)? The Complete WIRED Guide

    Culture Mar 8, 2020 7:00 AM The WIRED Guide to Virtual Reality Everything you ever wanted to know about VR headsets, Oculus, Vive, and simulator sickness. Illustrations by Radio All hail the...

  4. Virtual reality News, Research and Analysis

    Articles on Virtual reality Displaying 1 - 20 of 183 articles August 8, 2023 Virtual reality has negative side effects - new research shows that can be a problem in the workplace Alexis...

  5. Augmented reality and virtual reality displays: emerging ...

    251 Citations 26 Altmetric Metrics Abstract With rapid advances in high-speed communication and computation, augmented reality (AR) and virtual reality (VR) are emerging as next-generation...

  6. Virtual Reality

    The article presents the main applications of virtual reality, with a focus on behavioral experiments. The very first application of virtual reality was the training of army pilots in flight simulators. Today, virtual reality is a standard method for manipulating the action—perception—cycle underlying normal behavior in all autonomous agents.

  7. Virtual Reality

    The article presents the main applications of virtual reality, with a focus on behavioral experiments. The very first application of virtual reality was the training of army pilots in flight simulators. Today, virtual reality is a standard method for manipulating the action—perception—cycle underlying normal behavior in all autonomous agents.

  8. Virtual Reality: The Promising Future of Immersive Technology

    Virtual reality technology lets you explore new avenues and methods to create 3D content for your audience, thus improving engagement and attribution. Gamification: Virtual reality is a powerhouse when it comes to creating game-style mechanics that educate people in a stress-free environment.

  9. The Future Of Virtual Reality (VR)

    Spatial, which creates a tool best described as a VR version of Zoom, reported a 1,000% increase in the use of its platform since March 2020. In total, the value of the market for VR business ...

  10. The Past, Present, and Future of Virtual and Augmented Reality Research

    Abstract The recent appearance of low cost virtual reality (VR) technologies - like the Oculus Rift, the HTC Vive and the Sony PlayStation VR - and Mixed Reality Interfaces (MRITF) - like the Hololens - is attracting the attention of users and researchers suggesting it may be the next largest stepping stone in technological innovation.

  11. (PDF) Virtual Reality: An Overview

    next article, "Virtual Reality: The New Reality" by Shreeveshith K. and Snigdha Sen delves into the working of virtual reality. The research front showcases, "Interactive Virtual Reality

  12. Full article: Virtual reality as a clinical tool in mental health

    Abstract. Virtual reality (VR) is a potentially powerful technology for enhancing assessment in mental health. At any time or place, individuals can be transported into immersive and interactive virtual worlds that are fully controlled by the researcher or clinician. This capability is central to recent interest in how VR might be harnessed in ...

  13. Inspired, magical, connected: How virtual reality can make you well

    Virtual reality is a young medium and we are still learning how to design an awe-inspiring or emotionally-moving experience. ... Want to write? Write an article and join a growing community of ...

  14. Full article: Learning with virtual reality: a market analysis of

    Virtual reality, or VR, is a technology that creates an artificial digital environment, an interactive computer-generated experience with the purpose to create a simulated environment. This technology can create an environment similar to the real world, or it can be a fantastic world, creating an experience that is not possible in conventional ...

  15. How virtual reality technology is changing the way students learn

    Virtual and augmented reality technology. ... Want to write? Write an article and join a growing community of more than 169,800 academics and researchers from 4,723 institutions.

  16. What is Virtual Reality?

    What is Virtual Reality? Virtual reality (VR) is the experience where users feel immersed in a simulated world, via hardware—e.g., headsets—and software. Designers create VR experiences—e.g., virtual museums—transporting users to 3D environments where they freely move and interact to perform predetermined tasks and attain goals—e.g ...

  17. What is virtual reality?

    First, a plausible, and richly detailed virtual world to explore; a computer model or simulation, in other words. Second, a powerful computer that can detect what we're going and adjust our experience accordingly, in real time (so what we see or hear changes as fast as we move—just like in real reality).

  18. The main problem with virtual reality? It's almost as humdrum as real life

    Virtual reality, literal headache. ... Want to write? Write an article and join a growing community of more than 169,700 academics and researchers from 4,721 institutions. Register now.

  19. Shifting the field of view: science stories in virtual reality

    My challenge is to tell science stories about epigenetics, a topic that is complex, dynamic and extremely abstract to most people. Epigenetic structures and events are essentially invisible ...

  20. How a Virtual-Reality World Could Change the Workplace

    Apple's Vision Pro and Meta's Quest headsets are poised to make the virtual workspace a reality — if they can get past the buyer hurdles on price and quality. A Meta virtual-reality meeting ...

  21. Virtual reality

    An operator controlling The Virtual Interface Environment Workstation (VIEW) at NASA Ames Virtual reality (VR) is a simulated experience that employs pose tracking and 3D near-eye displays to give the user an immersive feel of a virtual world. Applications of virtual reality include entertainment (particularly video games), education (such as medical or military training) and business (such as ...

  22. Virtual Reality

    VR merely is 'The Wise Guy' of the digital world. It creates a world that neither functions according to you, nor does it respond to your actions. It gives you a first-hand experience with even the after-effects of an event along with the ability to interact and interrelate with the world created.

  23. Frontiers

    SYSTEMATIC REVIEW article Front. Educ., 17 November 2022 Sec. Higher Education Volume 7 - 2022 | This article is part of the Research Topic Education and Innovative Perspectives in Higher Education View all 23 Articles

  24. Latest VR & AR Articles in 2023

    12 Smart Virtual Reality Opportunities For Businesses in 2022. Virtual reality will have a wide and lasting impact on our work, education and home lives. The emergence of commercial VR technologies has led to an increase in innovation, with a wide range of businesses looking for Virtual Reality opportunities ….

  25. Virtual-Reality School as the Ultimate School Choice

    Emma Green writes on Optima Academy Online, an all-virtual school, and OptimaEd, founded by Erika Donalds, and on the use of V.R. in education and the classical-schooling movement.

  26. Florida Company Offers Virtual Reality Alternative to Public Schools

    Florida-based OptimaEd is enrolling students in virtual schools, The New Yorker reported. The company enrolled about 170 students in its academy this past school year, per the report. OptimaEd's ...

  27. The Effects of Spherical Video-Based Virtual Reality and Conventional

    This study compares the effects of Spherical Video-Based Virtual Reality (SVVR) and Conventional Video (CV) on students' writing achievement and motivation. A quasi-experimental method was used in a primary school's Chinese Descriptive Article Writing courses. Twenty-eight fourth-grade students were randomly divided into two groups.

  28. "Very enlightening," Virtual reality headsets allow ...

    SALEM, Massachusetts ( WBZ) — Due to the skyrocketing cost of a college degree, more and more students are looking for vocational education. With that in mind, some students in Massachusetts are ...

  29. Development of immersive virtual reality-based hand rehabilitation

    Recently, virtual reality (VR) has been widely utilized with rehabilitation to promote user engagement, which has been shown to induce brain plasticity. In this study, we developed a VR-based hand rehabilitation system consisting of a personalized gesture-controlled rhythm game with vibrotactile feedback and investigated the cortical activation pattern induced by our system using functional ...