Advanced holographic technology is tantalizingly close to reality.
In the last decade, VR and AR headset hype has sprawled across our timelines, but the headsets have yet to displace TVs or computer screens as the conventional interface for digital media. Besides the cost, a major reason is the disorienting nature of wearing a device that simulates a 3D environment, which makes many people sick. But the tides of technology are rapidly revamping a 60-year-old idea for the screaming 2020s: holograms.
Most recently, MIT researchers devised a new way of generating holograms in near real time, using a learning-based method with ultra-high efficiency. That efficiency is key to this discovery, because the new neural-network system allows holograms to run on a laptop, and possibly even a newer smartphone.
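To make the idea concrete, here is a minimal, hypothetical sketch of the kind of network a learning-based approach like this might use: a small convolutional model that maps an RGB-D image (color plus depth) to a phase-only hologram that could drive a spatial light modulator. The architecture, layer sizes, and names below are illustrative assumptions, not the MIT team's actual model.

```python
# Hypothetical sketch: a tiny CNN that maps an RGB-D frame to a
# phase-only hologram. Layer counts, sizes, and names are illustrative
# assumptions, not the architecture from the MIT work.
import math
import torch
import torch.nn as nn

class TinyHologramNet(nn.Module):
    def __init__(self, channels=32, layers=6):
        super().__init__()
        # 4 input channels: RGB color plus a depth map.
        blocks = [nn.Conv2d(4, channels, 3, padding=1), nn.ReLU()]
        for _ in range(layers - 2):
            blocks += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        # One output channel: a phase value per pixel.
        blocks += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*blocks)

    def forward(self, rgbd):
        # Squash the output to [-pi, pi] so it can drive a
        # phase-only spatial light modulator.
        return math.pi * torch.tanh(self.net(rgbd))

model = TinyHologramNet()
rgbd = torch.rand(1, 4, 192, 192)   # one synthetic RGB-D frame
phase = model(rgbd)                 # (1, 1, 192, 192) phase map
print(phase.shape)
```

The appeal of this style of approach is that a forward pass through a small CNN costs a fixed, modest number of operations per pixel, which is what makes laptop- or phone-class hardware plausible, unlike simulating wave propagation from scratch for every frame.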
Researchers have worked to create viable computer-generated holograms for a long time, but most models called for a supercomputer to slog through the physics simulations. That takes a lot of time and typically produces holograms of underwhelming fidelity, so the MIT researchers focused on overcoming both obstacles. "People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations," said the study's lead author, Liang Shi, a doctoral student in MIT's Department of Electrical Engineering and Computer Science (EECS), in an MIT blog post. "It's often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades."
Shi thinks the new method, called "tensor holography," will make the near-future promise of holograms finally bear fruit. If the researchers' new approach works, the advance might create a technological revolution in fields like 3D printing and VR. And it's been a long time coming. In 2019, scientists created a "tactile hologram" that humans can see and hear. The system, called a Multimodel Acoustic Trap Display (MATD), employs an LED projector, a foam bead, and a speaker array. The speakers emit waves in ultrasound levels that hold the bead in the air, and move it fast enough to appear as if it moves and reflects light from the projector. Humans can't hear it, but the mechanical motion of the bead can be captured and focused to stimulate the human ears for audio, "or stimulate your skin to feel content," explained Martinez Plasencia, co-creator of the MATD and a researcher of 3D user interfaces at the University of Sussex, in a University of Sussex blog post.